https://zejroleplaying.org/index.php?threads/pokemon-crossing-gijinka-world-su.865/
# Pokemon Crossing: Gijinka World [SU]
Discussion in 'Profile Threads' started by Elegante, Dec 16, 2012.
1. You weren't meant to be, yet you found your way here.
This is Koyoto, the region where gijinka are supposed to be safe.
~~~~~~~~~~~~~~~~~~~~~~
The Form of Filling Out and Filing
Code:
[b]Name:[/b] The obvious part. First Middle Last
[b]Age:[/b] Herp-a-derp.
[b]Gijinka Pokemon:[/b] What Pokemon are you combined with?
[b]Appearance:[/b] I prefer to see rather than read. Please include an image within the spoiler ( [spoiler] ) and image ( [img] ) tags. Only in a worst-case scenario will I accept words.
[b]Gender & Sexuality:[/b] Yes, I am allowing sexuality, but this will all remain PG-13.
[b]Biography:[/b] Give me a bio. Please include a hometown. Somewhere between 4 and 6 sentences will suffice. More is good. Fewer will be held against you.
[b]Personality:[/b] What does your character act like? Same number of sentences as bio.
[b]Boosted Stats:[/b] This is the main part to be approved, thus the reason this form is messaged to yours truly.
[b]Other:[/b] Something different about your gijinka that can't fit above? Here ya go.
RULES: NO SHINIES. NO LEGENDS OUTSIDE OF TOWN LEADERS. ONE TOWN LEADER PER PERSON. THIS IS A LITERATE ROLEPLAY, MEANING AT LEAST ONE PARAGRAPH (3~5 sentences) IS EXPECTED PER POST. NO GOD MODDING. I AM ALLOWING UP TO 2 CHARACTERS PER PERSON, FOR NOW.
Private message character profiles to me, with the subject line reading "PC: GW". You may post them here once accepted.
#1
2. MAP RESERVED
City descriptions to come.
#2
3. My Characters:
Name: Nicholas Jaie Redfearn
Age: Ageless (Legendary)
Gijinka Pokemon: Dialga
Appearance:
Gender & Sexuality: Unknown
Biography: Nicholas was raised in a small town that was home to much of nothing. This town was New Bark Town. Even though he looked completely human, from a young age he was able to control time for short bursts, sending his parents into shock at first. Once he grew older, he began to develop more and more into his true form. Around what his parents thought was his thirteenth birthday, he began to grow horns on both sides of his head. These horns continued to grow from then on. His parents didn't know what to do with him, so they sent him out at the seeming age of sixteen. He eventually wound up in the Whirl Islands, alone and sad. Not much is known of what happened to him, or how long he was there. He eventually left and found the area where Ziemas is now located. Alongside several other legendaries, he founded the town as a gijinka safe haven.
Personality: Nicholas is neutral territory. How he treats you depends on who you are and how you act. He enjoys music, and thus is constantly singing or messing with an instrument. He is neither loud nor quiet, and thus enjoys the presence of others in a small group. He welcomes anyone with open arms, until they prove themselves otherwise. A lot gets under his skin, but some things get under there quicker. His pet peeves include rudeness, uncleanliness, and stupidity.
Boosted Stats: Minor time control ability that lasts up to thirty minutes without hurting himself.
Other: Allergy to Peanuts.
#3
4. If one person is confused, I realize others might be too. So, here are some things crazE and I came up with.
crazE tell me of your Gijinka RP
16:35 crazE what is it about? I was only able to quickly scan over it when I had time to get on
16:36 RaivisGalante Well, all I have released is that you are a gijinka, supposed to be dead, that has made it to a safehaven.
16:36 RaivisGalante I have the entire plot already done, but will release it as the roleplay goes on.
16:37 crazE ok well uhm
16:37 crazE how am I to make a background and a hometown if I know nothing of them? Are they regions from the pokemon series? Do I have to explain why my character is a gijinka or are they common?
16:38 RaivisGalante Hrm. Hometown can be any region within the actual games/anime
16:38 RaivisGalante You can explain why they are gijinka if you'd like, but you don't have to.
16:39 crazE Well, can you tell me how gijinka are made? I'm not familiar with that dealio
16:45 RaivisGalante Really, gijinka aren't made. (the best i can come up with)
16:45 RaivisGalante Being a gijinka, from what I can decipher, is sort of a genetic mutation.
16:46 crazE digi evolution?
16:46 crazE *shot*
16:46 RaivisGalante XD
16:46 RaivisGalante But, um, no. Do you know how real life genetic mutations work?
16:48 crazE cancer?
16:48 crazE *shotagain*
16:49 RaivisGalante *bricks crazE*
16:49 crazE explodes
16:49 crazE I haven't familiarized myself with that level of biology
16:50 RaivisGalante Darn.
16:50 crazE why do you ask?
16:51 RaivisGalante Well, basically, for a gijinka: The genes within them, that cause them to be gijinkas, may remain dormant for many years, or may show themselves at birth.
16:51 crazE soooo
16:51 crazE our character's ancestors might've been **** by a pokemon and then this?
16:52 RaivisGalante That can be one way to put it, I guess.
16:52 crazE well
16:52 crazE I find that half disturbing
16:53 crazE and half interesting
16:53 RaivisGalante LoL
16:53 crazE I would've thought that we could go by some kind of digimon thing where a person is able to "absorb" (with a lack of any better description) a pokemon and their attributes/abilities/other and therefore make gijinka
16:54 crazE or they're pokemon spirits in general recycled into human beings
16:54 RaivisGalante That could be another way.
16:54 crazE and when a certain power ignites their inner pokemon spirit
16:54 crazE they gain abilities
16:54 crazE or something
16:54 crazE IDK
16:54 RaivisGalante This is the reason I am having people PM the forms, so I can make sure nothing is too drastic.
16:55 RaivisGalante ALL OF THIS SOUNDS GREAT, HOLY SH!T
16:55 crazE do stats count as abilities or what?
16:56 RaivisGalante Stats?
16:57 crazE "Boosted Stats"
16:58 RaivisGalante Oh, on the form?
16:58 crazE yesserie
16:58 RaivisGalante Hang on a sec
17:00 RaivisGalante What I was meaning there, and I will go back and word it better, is the part of the human that is changed.
17:00 RaivisGalante Boosted speed, psychic abilities, etc
17:00 crazE o
17:00 crazE well
17:01 crazE do the gijinkas get any special powers such as performing moves like a pokemon?
17:01 crazE because Hyper Beam yes.
17:01 RaivisGalante I was thinking about that. Hang on.
17:03 crazE XD otay
17:03 crazE also, maybe not just leave it to one move at a time. Humans are more innovative and gijinkas may be able to mix and match moves
17:03 crazE as in combos
17:03 crazE kinda like KH
17:03 crazE (Kingdom Hearts)
17:04 RaivisGalante I know KH
17:04 crazE I didn't know if you knew the abbreviation
17:04 crazE XD
17:04 RaivisGalante i did
17:04 RaivisGalante I think I'll cut it down to where they can only know up to 4 moves.
17:04 RaivisGalante No OP Mary Sues, though
17:05 crazE well of course not
17:05 crazE and four moves? Must be a mind limit or something then
17:05 RaivisGalante Mhmm
17:05 RaivisGalante And Maybe a recharge time.
17:05 crazE so are they mentally incapable of keeping knowledge of more moves or no?
17:06 RaivisGalante YOU ARE MAKING ME THINK
17:06 RaivisGalante STAHP
17:06 RaivisGalante lol, jk
17:06 crazE I'm just trying to figure this ou- Oh XD
17:06 RaivisGalante I would say the Pokemon side prevents that, yes
17:07 crazE pokemon's brains are more powerful than ours
17:07 crazE I mean, you never see pokemon get drunk and beat their spouses
17:07 RaivisGalante i have -_-
17:07 crazE . . .
17:07 RaivisGalante she **** him afterwards
17:07 crazE I see
17:07 crazE well that's unfortunate
17:07 RaivisGalante and his seed created the first gijinka
17:07 crazE but you should probably close the hentai tab
17:08 RaivisGalante SEE WHAT I DID THERE!?
17:08 crazE Nope!!!!!
17:08 crazE I don't see anything!
17:08 crazE I can only read!
17:08 RaivisGalante -_-
17:08 RaivisGalante read what i did there!?
17:08 crazE yes
17:08 crazE and my mind is now on the wall
17:08 RaivisGalante see, it all comes back
17:09 RaivisGalante I BLEW CRAZE'S MIND~
17:09 crazE can a gijinka be a delta species?
17:09 crazE now I want to see your mind on the wall
17:09 crazE :>
17:10 RaivisGalante Delta Species?
17:10 crazE where a pokemon is a different element than what their species is supposed to be
17:10 crazE
17:11 RaivisGalante No.
17:11 crazE
17:12 RaivisGalante Negative
17:12 crazE http://pokegym.net/gallery/displayimage.php?imageid=40011
17:14 RaivisGalante I SAID NO
#4
http://www.physicsforums.com/showthread.php?p=3922385
Blog Entries: 2 | Recognitions: Gold Member
## Renewable energy EU under pressure.
The Frankfurter Allgemeine (something like the Times, but in another country) has a rather important article here..
It seems that an internal EU strategy paper has leaked, in which it is proposed to stop green energy support as it becomes prohibitively expensive.
One may wonder if this had to do with the sacking of a green energy minister in Germany the other week.
Of course it is known that things were not going that well for a while, see here, slide 6.
The site itself doesn't show current rating for me. So I wonder what is going on. Comments anybody?
Mentor | Blog Entries: 1
I don't think there is too much cause for alarm. This is very similar to what happened in the UK at the start of the year. For the past several years domestic solar power has been subsidised by the government; the special tariff was due to end in March this year anyway, but the government out of the blue decided to stop it in January, causing mass cancellations of orders, many companies having to downsize their workforce, and some going bankrupt. The way they handled the situation was utterly deplorable. The reason (and I think it is the same reason we are seeing here, if I read the FA article correctly) was more understandable. Technological advances in recent years have massively brought down the cost of solar panels: IIRC the current cost of a domestic panel is 25% of what it was in 2008, and the price halved from January 2011 to December 2011. This meant there was something of a gold rush on panel installation.* Consequently the government started massively overspending via their special tariff (read: haemorrhaging taxpayer money), because when the subsidy was thought of over five years ago no one predicted this, and so they panicked.
I think that is what we are seeing here: as the technology for renewable energy gets cheaper, it becomes more costly to fund it because its adoption increases. Subsidising a tiny fraction of the population in order to build incentive in an industry important to the future is fine; subsidising a significant fraction is an unjustifiable expense. The subsidies were never meant to be forever, they were only meant to incentivise the public to spend and the industry to invest. So it's not all bad, because hopefully the reason green energy is becoming prohibitively expensive to support is that the technology is cheap enough to begin widespread public adoption.
Having said all that, we're walking into an energy crisis in Europe. Anti-nuclear lobbies have been very successful in recent years in the UK, Germany and Italy, and our supplies of fossil fuels aren't getting any cheaper. We need massive funding and deployment of non-fossil-fuel energy sources now, continuing over the next few decades. We can't afford to wait until peak oil/gas/coal and then have to radically build new energy infrastructure whilst dealing with a system where energy costs spiral. To that end I sincerely hope that the money saved from reducing/stopping subsidies for current-gen green technologies is put towards the next gen, like better battery technology for electric vehicles (and the corresponding infrastructure) or funding for artificial photosynthesis development.
*Anecdote, but three years ago I didn't know of any building with solar panels; now, even in the sleepy Nowheresville town I currently live in, there are about five houses within a mile that have a solar-panelled roof. If I extend that to a few miles the number jumps. It seems like we're on track (fingers crossed!) for significant solar panel installation in the UK. Next we need to figure out good ways of storing it. Government-subsidised home batteries, anyone?
Recognitions: Gold Member
There appear to be several *different* issues at hand:
Quote by Andre The Frankfurter Allgemeine (something like the Times, but in another country) has a rather important article here.. It seems that an internal EU strategy paper has leaked, in which it is proposed to stop green energy support as it becomes prohibitively expensive.
The cost of renewable energy per unit has dropped substantially as the article says, though the volume of installation has grown rapidly, and hence so have the subsidy costs. The EU is in financial difficulty, so it has to cut back on something; it sounds like energy subsidies will be one of them.
One may wonder if this had to do with the sacking of a green energy minister in Germany the other week.
The article states that it was likely due to a political gaffe, unrelated to the above.
Of course it is known that things were not going that well for a while, see here, slide 6. The site itself doesn't show current rating for me. So I wonder what is going on. Comments anybody?
Which is about the grossly overpopulated renewable energy sector. This is related to the top subject, but is mainly about the suppliers and not the consumers. Time to thin the herd.
Recognitions: Gold Member
Recently I've been reading on Wikipedia about this topic and it seems highly optimistic. In 2011 wind energy supplied 6.3% of total energy in the EU, and growth is exponential for now, at about 20% per year. If the trend keeps going like this, in 15 years the EU will be powered completely by renewable energy. But yeah, I seriously doubt it will...
http://en.wikipedia.org/wiki/Wind_po...European_Union
Mentor | Blog Entries: 1
Quote by Alesak Recently I've been reading on Wikipedia about this topic and it seems highly optimistic. In 2011 wind energy supplied 6.3% of total energy in the EU, and growth is exponential for now, at about 20% per year. If the trend keeps going like this, in 15 years the EU will be powered completely by renewable energy. But yeah, I seriously doubt it will... http://en.wikipedia.org/wiki/Wind_po...European_Union
I do think the outlook is good (though it could be far better if changes had been made earlier and we weren't fighting so much inertia), but I doubt the growth we're seeing will continue unabated, mainly because even if wind were built on a mass scale we would fill up all the suitable places quickly and then face diminishing returns.
Recognitions: Gold Member
I doubt Europe will fill up the suitable *offshore* wind places anytime soon.
Blog Entries: 2 | Recognitions: Gold Member
Quote by Ryan_m_b (.... and we weren't fighting so much inertia)
Is it really inertia? Maybe this article suggests that some thinking is involved, if it's about solving real and perceived problems.
Recognitions: Gold Member
Quote by Andre Is it really inertia? Maybe this article suggests that some thinking is involved, if it's about solving real and perceived problems.
...says "conservative think tank", that receives donations from BP. It's not really surprising that they recommend that "government should scrap 4GW of its planned 13GW target for offshore wind generation by 2020", then.
See for example this book for discussion about think-tanks.
Blog Entries: 2 | Recognitions: Gold Member
Quote by Alesak ...says "conservative think tank", that receives donations from BP. It's not really surprising that they recommend that "government should scrap 4GW of its planned 13GW target for offshore wind generation by 2020", then. See for example this book for discussion about think-tanks.
You realize that there is no logic in your argument. It is called an argumentum ad hominem.
Mentor | Blog Entries: 1
Quote by Andre You realize that there is no logic in your argument. It is called an argumentum ad hominem.
Just because an argument is an ad hominem doesn't mean it's an ad hominem fallacy. Pointing out conflicting interests of the person making an argument is a good way to highlight that the argument isn't credible; it's not the final say at all, but it is an indicator upon which we should build by reading into the actual argument.
Suffice it to say, we should consider the report itself. Personally I agree with propositions like this, where meeting targets is considered in a broader sense; however, a big problem IMO is that these measures are only to meet short-term targets. We need to consider the longer term, so a more complete proposal would look at what to do with those stations over the following 20 years. As an aside, another problem with this proposal is that it would have to ensure that the money saved was actually spent in the manner described, and take into account what happens if in several years' time a new government axes the insulation plan but keeps saving money through gas stations.
Also I think that the target should be removing fossil-fuel dependency overall as well as reducing CO2 emissions, mainly because we have to do everything we can to ease the transition from a fossil-fuel energy system to a non-fossil-fuel system as peak oil/gas/coal loom.
Blog Entries: 2 | Recognitions: Gold Member
Quote by Ryan_m_b Just because an argument is an ad hominem doesn't mean it's an ad hominem fallacy. Pointing out conflicting interests of the person making an argument is a good way to highlight that the argument isn't credible; it's not the final say at all, but it is an indicator upon which we should build by reading into the actual argument.
I beg to differ. What if your local deity states that water boils at 90 degrees Celsius, while your local folk-devil think tank says that it boils at 100 degrees; what do their backgrounds say about who is the most right?
Actually, pointing out that they are scapegoats probably says more about the initiator than about their victims.
You may want to compare this process with groupthink
Indeed, they presented a report with numbers which should be scrutinised just like all the feasibility studies about renewables. It's not the messenger but the message.
Honi soit qui mal y pense.
Mentor | Blog Entries: 1
Quote by Andre I beg to differ. What if your local deity states that water boils at 90 degrees Celsius, while your local folk-devil think tank says that it boils at 100 degrees; what do their backgrounds say about who is the most right?
I really can't be bothered to go down this route because it is mostly pointless. But just to be a pedant: that analogy doesn't hold, because the religious principles of the think tank do not relate to the subject matter. A more apt analogy may be: how would you feel about a think-tank report on the health effects of smoking from a tobacco company? You would take it with a larger pinch of salt than you would from a collection of respiratory doctors.
Either way it's a distraction, because ultimately we are not going to limit ourselves to whether or not we trust the report, but go on what it actually says.
Quote by Andre I beg to differ. What if your local deity states that water boils at 90 degrees Celsius, while your local folk-devil think tank says that it boils at 100 degrees; what do their backgrounds say about who is the most right? Actually, pointing out that they are scapegoats probably says more about the initiator than about their victims. You may want to compare this process with groupthink. Indeed, they presented a report with numbers which should be scrutinised just like all the feasibility studies about renewables. It's not the messenger but the message. Honi soit qui mal y pense.
As with most things, we are often forced to deal with uncertainty. As a result we have to use the best tools in our arsenal to figure out whether something has any ounce of truth (many people think in terms of truth/no-truth, but rarely have I ever seen anything that simple).
One tool though that is very effective is to look at incentive. Intent is ultimately the best way to judge things but unfortunately (and ironically fortunately), we don't get access to this.
The second best thing is then looking at inference for incentive. Money is a good inferential indicator. Granted it is not the only indicator, and sure, you can say "correlation does not equal causation", but even so, in a world of uncertainty where true intent is rarely easy to decipher, money trails, fund networks, and people networks combined help build a case for incentive and, indirectly, intent.
Blog Entries: 2 | Recognitions: Gold Member
Quote by Ryan_m_b A more apt analogy may be: how would you feel about a think-tank report on the health effects of smoking from a tobacco company?
Which is known as guilt by association
Maybe, just maybe, these members of think tanks have children and grandchildren like Dinand and Myrthe (also guilt by association) and maybe they want nothing more than a bright future for all of them, even if it's the last thing that they do. It just so happens that they don't believe in the future that others have thought out.
And maybe that's why they are declared folk devils, being the out-group.
Quote by Alesak Recently I've been reading on Wikipedia about this topic and it seems highly optimistic. In 2011 wind energy supplied 6.3% of total energy in the EU, and growth is exponential for now, at about 20% per year. If the trend keeps going like this, in 15 years the EU will be powered completely by renewable energy. But yeah, I seriously doubt it will... http://en.wikipedia.org/wiki/Wind_po...European_Union
Well, that is a highly optimistic number. One number you report is the relative share of wind in total current production (the 6.3%). Another is the growth of wind production alone (20%). But you must bear in mind that total production increases as well. Let us say that total consumption grows at $g$ per cent per year. Then, after $T$ years, the relative share of wind power is
$$6.3\,\% \left( \frac{1.2}{1+g/100} \right)^{T}.$$
It rises only if $g < 20\%$.
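To put rough numbers on this (illustrative figures, not from the thread): with flat consumption, $g = 0$, the projected share after $T = 15$ years is $6.3\% \cdot 1.2^{15} \approx 97\%$, which is essentially the estimate quoted above; with even a modest $g = 2$, it drops to $6.3\% \cdot (1.2/1.02)^{15} \approx 72\%$.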
Recognitions: Gold Member
Quote by Dickfore Well, that is a highly optimistic number. One number you report is the relative share of wind in total current production (the 6.3%). Another is the growth of wind production alone (20%). But you must bear in mind that total production increases as well. Let us say that total consumption grows at $g$ per cent per year. Then, after $T$ years, the relative share of wind power is $$6.3\,\% \left( \frac{1.2}{1+g/100} \right)^{T}.$$ It rises only if $g < 20\%$.
You are totally right, I forgot about increasing energy consumption. However, see this: it says that over the last 20 years or so EU-27 energy consumption was nearly constant, which makes sense really. But it probably won't show this year, as major wind farms are still under construction.
Also, for those interested, the wind power installed last year alone was about 0.6% of total production.
Blog Entries: 2 | Recognitions: Gold Member
Quote by chiro ..uncertainty... Money is a good inferential indicator...
Right, I owe you an elaborate response to those. But it has to wait until tomorrow.
http://mathoverflow.net/feeds/question/35057
# Map constructed from the coquasitriangular structure of SLq(2) which appears not to respect the standard commutation relations

Question (John McCarthy, 2010-08-09):

Let $A$ be a Hopf algebra dually paired with a quasi-triangular Hopf algebra $B$. If $x$ is some fixed element of $A$, then we can define a linear map $$P_x: A \to \mathbb{C}$$ by setting $$P_x: a \mapsto \langle R, x \otimes a \rangle.$$

Let us take the case $A = SL_q(2)$, $B = U_q({\mathfrak sl}_2)$, and let $R$ be the standard universal $R$-matrix for $U_q({\mathfrak sl}_2)$, for which
$$\langle R, u^i_m \otimes u^j_n \rangle = R^{ij}_{mn} = q^{-\frac{1}{2}}\left(q^{\delta_{ij}}\,\delta_{im}\delta_{jn} + (q-q^{-1})\,\theta(i-j)\,\delta_{in}\delta_{jm}\right),$$
where $\theta$ is the Heaviside symbol. If we take $x = u^k_l$, then $$P_{u^k_l}(a) = \langle R, u^k_l \otimes a \rangle.$$ Now since $ab = qba$, we should have $$P_{u^k_l}(u^1_1 u^1_2) = q\,P_{u^k_l}(u^1_2 u^1_1) \qquad \text{for all } k,l = 1,2.$$ However,
$$P_{u^2_1}(u^1_1 u^1_2) = \langle R, u^2_1 \otimes u^1_1 u^1_2 \rangle = \sum_{z=1}^2 \langle R, u^2_z \otimes u^1_1 \rangle \langle R, u^z_1 \otimes u^1_2 \rangle = \sum_{z=1}^2 R^{21}_{z1} R^{z1}_{12}.$$
From the formula for $R^{ij}_{mn}$, we get that
$$P(u^1_1 u^1_2) = \sum_{z=1}^2 R^{21}_{z1} R^{z1}_{12} = R^{21}_{11} R^{11}_{12} + R^{21}_{21} R^{21}_{12} = 0 \cdot 0 + q^{-\frac{1}{2}} \cdot 1 \cdot q^{-\frac{1}{2}}\,(q-q^{-1}) = q^{-1}(q-q^{-1}).$$

On the other hand, we have
$$q\,P_{u^2_1}(u^1_2 u^1_1) = q\,\langle R, u^2_1 \otimes u^1_2 u^1_1 \rangle = q\sum_{z=1}^2 \langle R, u^2_z \otimes u^1_2 \rangle \langle R, u^z_1 \otimes u^1_1 \rangle = q\sum_{z=1}^2 R^{21}_{z2} R^{z1}_{11}.$$
From the formula for $R^{ij}_{mn}$, we now get that
$$q\,P_{u^2_1}(u^1_2 u^1_1) = q\sum_{z=1}^2 R^{21}_{z2} R^{z1}_{11} = q R^{21}_{12} R^{11}_{11} + q R^{21}_{22} R^{21}_{11} = q \cdot q^{-\frac{1}{2}}\,(q-q^{-1}) \cdot q^{-\frac{1}{2}} \cdot q + q \cdot 0 \cdot 0 = q(q-q^{-1}).$$

Thus, the two results are not equal, but instead differ by a factor of $q^2$. A similar problem arises for the action of $P_{u^2_1}$ on $bd - qdb$: we get $$P_{u^2_1}(u^1_2 u^2_2) = q^{-1}(q-q^{-1}), \qquad \text{whereas} \qquad q\,P_{u^2_1}(u^2_2 u^1_2) = q(q-q^{-1}).$$

I've checked and rechecked everything very carefully but can't seem to spot my error. Can anyone see what is going wrong here?

Answer (David Jordan, 2010-08-10):

Dear John, I tried to follow your computation until the first place where I couldn't understand a step. This comes at:

> However, $$P_{u^2_1}(u^1_1 u^1_2) = \langle R, u^2_1 \otimes u^1_1 u^1_2 \rangle = \sum_z \langle R, u^2_z \otimes u^1_1 \rangle \langle R, u^z_1 \otimes u^1_2 \rangle.$$

Rather than the RHS, I would expect
$$\langle (\mathrm{id} \otimes \Delta)(R), u^2_1 \otimes u^1_1 \otimes u^1_2 \rangle = \langle R_{13} R_{12}, u^2_1 \otimes u^1_1 \otimes u^1_2 \rangle = \sum_z \langle R, u^2_z \otimes u^1_2 \rangle \langle R, u^z_1 \otimes u^1_1 \rangle,$$
which seems different from what you wrote. It seems you have used the opposite comultiplication in your computations, so that where I wrote $R_{13} R_{12}$ above, you instead had $R_{12} R_{13}$. I hope this helps. I am aware that a pairing of Hopf algebras sometimes requires matching multiplication of $H$ with the opposite comultiplication of $H^*$. However, you seem to be working from Klimyk and Schmüdgen's text, which does not use the opposite coproduct in the definition of a dual pairing of Hopf algebras.

I haven't checked the details to see if the above resolves your issue. Perhaps this is still not your source of confusion, but it confused me when I first read your post.

Looking again at what you wrote, this means that the two computations you did for $P_c(ab)$ and $P_c(ba)$ are switched, so that you are multiplying $P_c(ab)$ by $q$ instead of $P_c(ba)$, as you thought. Multiplying instead of dividing by $q$ gives the discrepancy of $q^2$.

Answer (DamienC, 2010-08-16):

First of all, I believe a factor $q^{-1/2}$ is missing in your definition of the coefficients of the universal R-matrix.

Then, as far as I remember, the commutation relations of $\mathcal{O}_q(SL_2)$ are $ba = qab$, $db = qbd$, $ca = qac$, $dc = qcd$, $bc = cb$, $da - ad = (q-q^{-1})bc$, and $ad - q^{-1}bc = 1$. So we do NOT have the relation $ab = qba$.

Finally, according to your convention it seems that you have $a = u_1^1$, $b = u_2^1$, $c = u_1^2$, $d = u_2^2$ (it seems that Kassel has a different convention for indices, but his R-matrix coefficients are also organized in a different way, so...). So let me compute $P_c(ab)$ and $P_c(ba)$ following your notation:
$$P_c(ab) = R^{21}_{11} R^{11}_{12} + R^{21}_{21} R^{21}_{12} = 0$$
and
$$P_c(ba) = R^{21}_{12} R^{11}_{11} + R^{21}_{22} R^{21}_{11} = q(q-q^{-1}).$$
Then I believe the definition of the coefficients you gave is wrong (also I can't really follow your computations: there are a few typos, and also errors, or it might be that I did not understand what is going on).

Now if I compute following Kassel's definition of the R-matrix coefficients, I find
$$P_c(ab) = R^{21}_{11} R^{11}_{12} + R^{21}_{21} R^{21}_{12} = 0$$
and
$$P_c(ba) = R^{21}_{12} R^{11}_{11} + R^{21}_{22} R^{21}_{11} = 0.$$

By the way, even following your definitions alone, I can't see how you get (on line 16) the following:
$$P(u^1_1 u^1_2) = \sum_z R^{21}_{z1} R^{zi}_{1z} = R^{21}_{12} R^{21}_{12} = q^{-1}(q-q^{-1}).$$
First of all there is a typo: the second term should be $\sum_z R^{21}_{z1} R^{z1}_{12}$. Then there seem to be two errors: 1. how can you find $R^{21}_{12} R^{21}_{12}$? 2. I can't see how $R^{21}_{12} R^{21}_{12} = q(q-q^{-1})$.

Answer (John McCarthy, 2010-08-16):

I'm going to put my comment to Damien's answer as an answer, since there's not enough room to place it as a comment. Firstly, thank you for pointing out the typos. I have corrected them and apologise for not checking what I had written thoroughly enough at the start.

With regard to the normalisation factor $q^{-\frac{1}{2}}$, I tacitly dropped it because it cancels out in the calculation I'm interested in. However, you're right, it should be included in my definition, and I've changed it.

With regard to the commutation relations of $SL_q(2)$, there are two conventions: one as I have written, with, for example, $ab = qba$, and another with $ab = q^{-1}ba$, as you have written. Both algebras are of course isomorphic. I have taken my conventions from Klimyk and Schmüdgen, both for the relations (Chapter 4) and for the definition of $R^{ij}_{nm}$ (Chapter 9).

I don't have Kassel's book at hand, so I can't really comment at the moment on his conventions. I will try to have a look tomorrow though.

With regard to the $R^{ij}_{mn}$ calculations, I just use the fact that the only non-zero entries are
$$R^{11}_{11} = R^{22}_{22} = q^{\frac{1}{2}}, \qquad R^{12}_{12} = R^{21}_{21} = q^{-\frac{1}{2}}, \qquad R^{21}_{12} = q^{-\frac{1}{2}}(q-q^{-1}).$$
(But I think it was my typos that were causing the confusion here.)

Answer (Abtan Massini, 2010-08-17):

I tried to find a resolution of this problem by looking at it in the greater generality of FRT-algebras. However, I also ran into an apparent contradiction. I have posted my calculations as a new question here: http://mathoverflow.net/questions/35814/establishing-the-co-quasi-triangular-structure-of-frt-algebras. Hopefully someone can find an answer to both questions.

Answer (DamienC, 2010-08-18):

I think I now see where the problem in your computation is.

a) First of all, let me recall the problem. You find $r(c \otimes ab) = q^{-1}\,r(c \otimes ba)$, while you would like to find $r(c \otimes ab) = q\,r(c \otimes ba)$.

b) Let me now compare $r(ab \otimes c)$ with $r(ba \otimes c)$ and see if you end up with the same problem. On the one hand (using the same computation rule as yours),
$$r(ab \otimes c) = r(u_1^1 u_2^1 \otimes u_1^2) = \sum_z r(u_1^1 \otimes u_z^2)\, r(u_2^1 \otimes u_1^z) = \sum_z R_{1z}^{12} R^{1z}_{12} = R_{12}^{12} R^{12}_{12} = q^{-1}(q-q^{-1}).$$
On the other hand,
$$r(ba \otimes c) = r(u_2^1 u_1^1 \otimes u_1^2) = \sum_z r(u_2^1 \otimes u_z^2)\, r(u_1^1 \otimes u_1^z) = \sum_z R_{2z}^{12} R^{1z}_{11} = R_{21}^{12} R^{11}_{11} = q - q^{-1}.$$
Then we find $r(ab \otimes c) = q^{-1}\,r(ba \otimes c)$, while we would hope to have $r(ab \otimes c) = q\,r(ba \otimes c)$.

c) The problem might come from the definition of the $R$-matrix (it may be that somewhere $R$ and $\hat{R} := R\tau$ have been mixed). But the problem might also come from a mistake in the way the coproduct is written. Namely, according to what you wrote, $\Delta(c) = \Delta(u_1^2) = \sum_z u^2_z \otimes u^z_1 = c \otimes a + d \otimes c$, while I am used to $\Delta(c) = \Delta(u_1^2) = \sum_z u_1^z \otimes u_z^2 = a \otimes c + c \otimes d$.

Now, doing the computation again with this second definition of the coproduct, I find $r(c \otimes ab) = q(q-q^{-1}) = q\,r(c \otimes ba)$... which is precisely what you were expecting.

I hope this answers your question.
https://binfalse.de/page42/
## Java network connection on Debian:SID
The unstable release of Debian is of course tricky in a lot of cases, so there is also a little stumbling block on the path of Java network programming. It annoys me on every new system.
Before I wrongfully blame my preferred Debian release, Sid, I have to acknowledge that I don't know whether this behavior also appears in other releases… Here is a small program to test/reproduce:
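The listing was stripped from this copy of the post; the following is a minimal sketch of such a test program (class name, host, and port are arbitrary stand-ins, not the author's originals):

```java
import java.io.IOException;
import java.net.Socket;

public class NetTest {
    public static void main(String[] args) throws IOException {
        // Any plain outgoing TCP connection is enough to trigger the problem
        // when the target resolves to an IPv4 address.
        Socket socket = new Socket("binfalse.de", 80);
        System.out.println("connected to " + socket.getRemoteSocketAddress());
        socket.close();
    }
}
```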
Compilation shouldn’t fail, but if you try to launch it you’ll get an exception like that:
This is caused by one little line in /etc/sysctl.d/bindv6only.conf saying you want to bind via IPv6 only. But my connection (maybe yours too) still communicates over IPv4, so this method of networking of course fails. To change this behavior you can choose between two solutions.
### Solution 1: Permanent modification (needs to be root)
You can change this behavior for the whole system by editing the file /etc/sysctl.d/bindv6only.conf :
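The relevant knob is the net.ipv6.bindv6only sysctl, which has to be switched off:

```
# /etc/sysctl.d/bindv6only.conf
net.ipv6.bindv6only = 0
```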
After that just type invoke-rc.d procps restart in your terminal to let your changes take effect. Your next run should work fine.
### Solution 2: Change it for this single example
If you are not allowed to change system settings, you can add -Djava.net.preferIPv4Stack=true to your execution command:
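For example, with the hypothetical NetTest class from the sketch above:

```
java -Djava.net.preferIPv4Stack=true NetTest
```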
This makes the runtime connect to the network via IPv4, regardless of system preferences. I hope this saves some time for developers like me ;-)
## You don't know the flash-trick?
Just sitting around with Micha on a SunRay (or meanwhile an OracleRay?). He was surfing the web when his session seemed to hang, and he said:
Fuck FLASH!! Need the flash-trick...
I didn’t heard about that trick before, but now he told me that feature.
If Flash kills your SunRay session, you have to type Ctrl+Alt+Moon, log in again, and your session will revive. With Flash still running!
As far as I know this happens very often when he is using his browser, because unfortunately the whole web is contaminated with this fucking Flash… The Flash trick is very nice, but wouldn't a flashblock plugin be more user-friendly!?
## Playing around with SUN Spots
My boss wants to present some cool things that can be done with SUN Spots in a lecture. I was selected to program these things, and now I have three of them to play with a little.
The installation was basically very easy. All you should know is that there is no chance for 64-bit hosts, and VirtualBox guests also don't work as expected; virtual machines lose the connection to the Spot very often... So I had to install a 32-bit architecture on my host machine (btw. my decision was a Sidux Μόρος).
If a valid system is found, the rest is simple. Just download the SPOTManager from sunspotworld.com, which helps you install the Sun SPOT Software Development Kit (SDK). When that is done, connect a Spot via USB, open the SPOTManager, and upgrade the Spot's software (it has to be the same version as the one installed on your host). All important management tasks can be done with this tool, and it is possible to create virtual Spots.
In addition to the SDK you'll get some demos installed, interesting and helpful for seeing how things work. In these directories ant is configured to do the crazy things that can also be done with the managing tool. Here are some key targets:
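The list itself was lost from this copy; from memory (so treat the exact names as approximate rather than authoritative), the important ones are:
• ant deploy: builds the application and installs it on the connected Spot
• ant run: deploys and then starts the application on the Spot
• ant host-run: runs a host-side application that talks to Spots through a basestation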
A basestation is able to administrate other Spots, so you don't have to connect each to your machine.
Ok, how do you do your own stuff?
There are some NetBeans plugins that make life easier, but I don't like those big IDEs that are very slow and bring a lot of overhead to your system. To create an IDE-independent project that should run on a Spot you need an environment containing:
• File: ./resources/META-INF/MANIFEST.MF
• File: ./build.xml
• Directory: ./src
Here you can place your source files
And now you can just type ant and the project will be deployed to the Spot.
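For reference, a minimal MANIFEST.MF of the kind this layout expects might look roughly like the following; names, versions, and the main class are placeholders, and the exact keys are from memory, so double-check against the installed demos:

```
MIDlet-Name: MySpotApp
MIDlet-Version: 1.0.0
MIDlet-Vendor: Example
MIDlet-1: MySpotApp, , org.example.MySpotApp
MicroEdition-Profile: IMP-1.0
MicroEdition-Configuration: CLDC-1.1
```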
A project that should run on your host communicating with other spots through the basestation needs a different environment:
• File: ./build.xml
• Directory: ./src
Here you can place your source files
Ok, that's it for the moment. I'll report results.
## April fools month
About one month ago, on April 1st, I added two more lines to the .bashrc of Rumpel (a co-worker who was on duty that day).
These two lines you can see here:
With each appearance of the bash prompt this command paints one pixel of the console in a random color, with no respect for any important content beneath the painting. That can be really annoying, and he kept wondering why it happened! For more than one month, until now!
Today I lift the secret, so Rumpel, I’m very sorry ;)
## Converting videos to images
I just wanted to split a video file into its single frames and did not find a program that solves this problem. A colleague recommended videodub, but when I see DLLs or a .exe I go insane! I had worked a little with OpenCV before and coded my own solution, containing only a few lines.
The heart of my solution consists of the following 13 lines:
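The original 13 lines were lost from this copy; here is a sketch of such a loop using the old OpenCV C API that was current at the time (file naming and error handling are reconstructions, not the author's originals):

```cpp
#include <cstdio>
#include <opencv/highgui.h>

int main (int argc, char **argv)
{
	if (argc < 2)
	{
		fprintf (stderr, "usage: %s VIDEO.avi\n", argv[0]);
		return 1;
	}

	// open the AVI and walk over it frame by frame
	CvCapture *capture = cvCaptureFromAVI (argv[1]);
	IplImage *frame = NULL;
	char name[64];

	for (int num = 1; capture && (frame = cvQueryFrame (capture)) != NULL; num++)
	{
		// write the current frame to frame-000001.png, frame-000002.png, ...
		sprintf (name, "frame-%06d.png", num);
		cvSaveImage (name, frame);
	}

	cvReleaseCapture (&capture);
	return 0;
}
```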
It just queries each frame of the AVI and writes it to an image file. Thus, not a big deal.
The complete code can be downloaded here. All you need is OpenCV and a C++ compiler:
Just start it with for example:
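With OpenCV's pkg-config file installed, compiling and running would look something like this (my guess at the invocation, not the author's exact commands):

```
g++ vidsplit.cpp -o vidsplit `pkg-config --cflags --libs opencv`
./vidsplit myvideo.avi
```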
If you prefer JPG images (or another type), just change the extension string from .png to .jpg .
Download: C++: vidsplit.cpp (Please take a look at the man-page. Browse bugs and feature requests.)
http://brickwallkc.com/systemic-exertion-lghhvr/p04egbh.php?33e8ce=simplify-the-following-expression-4-%E2%88%9A-4-%E2%88%9A-9-7
cotß 9.) Asked 2 hours 59 minutes ago|1/23/2021 9:13:48 AM. Simplify the following expression: 4/(1/4-5/2) 4/(1/4-5/2) = 4/((1-10)/4) = 4/(-9/4) = -16/9 =-1.77. First, you do 7-4, which 3. ... Rewrite the expression. Examples of rational numbers are 5/7, 4/9/ 1/ 2, 0/3, 0/6 etc. user: which of the following represents 3x - 5y 10 = 0 written in slope-intercept form? Example 2: to simplify $\dfrac{2+3i}{2-3i}$ type (2+3i)/(2-3i). 1. Rectangular Steel Tubing Deflection & Single Span Loading Calculator. Start with 9. Relevance. Weegy: -7 + N = 20 User: y - 12 = -10 Weegy: x + 5 = 2x User: y - 12 = -10 Weegy: x + 5 = 2x User: y + 1.05 = ... 1/15/2021 5:41:15 AM| 4 Answers Stress is … (x * x) = x2 1.55 2.495 3.83 4.162 24. Start studying 3.3: Simplify Expressions. Just as we can rewrite the square root of a product as a product of square roots, so too can we rewrite the square root of a quotient as a quotient of square roots, using the quotient rule for simplifying square roots. Learn vocabulary, terms, and more with flashcards, games, and other study tools. Simplify any Algebraic Expression If you have some tough algebraic expression to simplify, this page will try everything this web site knows to simplify it. Your name. 14y - 39 . Multiply by 4. Cosae (-0-0029) Coe A sin2 ecot9 4.) This website uses cookies to ensure you get the best experience on our website. By using this website, you agree to our Cookie Policy. Example 1: to simplify $(1+i)^8$ type (1+i)^8 . Share & Embed "4-1. When you click the button, this page will try to apply 25 different trig. Second expression, 6-[8-(2n-4)] 6 - [8 - 2n -4] 6 -8 + 2n + 4. Be careful that you subtract the exponent in the denominator from the exponent in the numerator. 3) Combine the constants. 2) Combine like terms by adding coefficients. 9 years ago. Answer to Simplify the following expression if ||v||=9 and ||w||=9. 36. 1+cot2 9 coo - cose = tang 7.) Solutions Graphing Practice; Best answer. draw logic circuit diagram. Scroll down the page for more examples and solutions. Start with 1/2 . A. exactly one zero B. no zeros C. … More Lessons for Grade 9 Math Worksheets Videos, worksheets, examples, solutions, and activities to help Algebra students learn how to simplify or combine or condense logarithmic expressions using the properties of logarithm. Simplify the following Boolean function in SOP form using K-Map: F (A, B, C, D) = Σ ( 0,1, 2, 4, 6, 8, 9, 12, 14, 15 ). There is your answer! Simplify the following expressions: (4+√7) (3+√2) rationalisation; class-9; Share It On Facebook Twitter Email. -16/ -4 2) Simplify. Scroll down the page for more examples and solutions on simplifying expressions by combining like terms. An online algebra calculator simplifies expression for the input you given in the input box. The calculator works for both numbers and expressions containing variables. Email. To simplify this expression, collect the like terms. Answer to EXAMPLE 7.25 Simplify the following expression using K-maps. ... (-2, 7) and (4, 9)? Example: Simplify the expressions: a) 14x + 5x b) 5y – 13y c) p – 3p. (2* x) = 2x Terms and topics. = x2 + 3x + 2. Like terms can be added or subtracted from one another. 9(r - 4) + 7r 5. 10:47 10 9 4 44% 3 Rationalize the denominator and simplify the following Simplify The Following Expressions Using Boolean Algebra" Please fill this form, we will try to respond as soon as possible. Enter Algebraic Expression. cotA tanA 3.) Simplify the following numerical expression 2(6-8)-8(7-4) = ? 
13[6^(2)÷(5^2-4^2)+9] show your work please I just don't understand how to do these correctly they confuse me...It says in the book to use PEMDAS but I just don't get it and always get the steps mixed up...If you could help thanks a bunch :) Simplifying Complex Expressions Calculator. 1) Remove parentheses by multiplying factors. (1 * x) = 1x Updated 6 minutes 25 seconds ago|1/23/2021 12:06:40 PM. Multiply by by adding the exponents. This calculator will simplify fractions, polynomial, rational, radical, exponential, logarithmic, trigonometric, and hyperbolic expressions. 12x 4 = 2 The answer is 2, and 2 > 12 2 is bigger than 12 Ellen's calculations are correct, but her rule does not always work. 2(x + 3) +4 ( x + 9) 8. Combine. Solution: a) 14x + 5x = (14 + 5)x = 19x Multiply by . The following table gives the Logarithmic Properties. s. Get an answer. Answer: = 2x(-2)-8x(3) ... 5th or 6th grade to verify the work and answers of simplify the following numerical expression homework and assignment problems in pre-algebra or in operations and algeraic thinking (OA) of common core state standards (CCSS) for mathematics. This website uses cookies to ensure you get the best experience. = (x * x) + (1 * x) + (2 * x) + (2 * 1) 4 x+x(13-7) Find out what you don't know with free Quizzes Simplify the following expression 35+(-13)+(+8)-(-6) Asked By adminstaff @ 02/10/2019 08:36 AM. Notice that the exponent, 3, is the difference between the two exponents in the original expression, 5 and 2. v (7/100) 3)Evaluate if - Answered by a verified Math Tutor or Teacher Mathway requires javascript and a modern browser. cosx tanx cscx Download 4-1. Square it up and it becomes 9. Answers to problems marked with ~,appear at the end of the book. Using the Quotient Rule to Simplify Square Roots. How to solve: Simplify the following expression if ||u|\ = 4, ||v|| = 7 and u cdot v = 6. Simplifying Rational Expressions – Explanation & Examples. The following diagram shows some examples of like terms. The calculator works for both numbers and expressions containing variables. Multiply by . Hello, First expression, 9(2y-7)-4(y-6) 18y - 63 - 4y + 24. 1 Answers. Second expression, 6-[8-(2n-4)] 6 - [8 - 2n -4] 6 -8 + 2n + 4. 8m Jun2008 1 Answer/Comment. Step 1 : 9 Simplify — 1 Equation at the end of step 1 : 4 • 9 Step 2 : Final result : 36 Why learn this. View Test Prep - Screenshot_20210123-195627_23_01_2021_19_58 from COLLEGE AL MAT101 at Western Governors University. sece —tang sin2 e cos2 9 — cos2 9 2.) NCERT solutions for Class 8 Maths Textbook chapter 9 (Algebraic Expressions and Identities) include all questions with solution and detail explanation. Please ensure that your password is at least 8 characters and contains each of the following: You'll be able to enter math problems once our session is over. Finally you add 5 + 14 + 6 = 25. Simplify The Following Expressions Using Boolean Algebra Comments. Overview; Steps; Topics Terms and topics; Links Related links; 1 solution(s) found. Click the blue arrow to submit and see the result! Log in for more information. Algebra College Algebra ALGEBRAIC For the following exercises, simplify the given expression. Simplify the following expressions and evaluate them as directed: (i) (3ab – 2a 2 + 5b 2 ) x (2b 2 – 5ab + 3a 2 ) + 8a 3 b – 7b 4 for a = 1, b = -1 (ii) (1.7x – 2.5y) (2y + 3x + 4) – 7… 0 0. glory1. 4(3y + 5) - 101 10. The following diagram shows some examples of like terms. Enter the expression you want to simplify into the editor. cosx + sinxtanx cos9 5.) 
Worked examples recovered from this page:

Simplify 4(6k - 16) - 7(2 + 3k). Distributing gives 24k - 64 - 14 - 21k; combining like terms gives 3k - 78, so after simplifying, the number multiplied by k is 3.

Simplify -7(3e - 2f + 4) + 6e - 2. Distributing gives -21e + 14f - 28 + 6e - 2 = -15e + 14f - 30.

Simplify 9(2y - 7) - 4(y - 6). Distributing gives 18y - 63 - 4y + 24 = 14y - 39.

Simplify 9(4 - 7m) + 4m. Distributing gives 36 - 63m + 4m = 36 - 59m.

Simplify 4 + sqrt(-4) + sqrt(-9) + 7. Since sqrt(-4) = 2i and sqrt(-9) = 3i, the result is 11 + 5i.

Simplify 5 + 7(8 ÷ 4) + 6 using PEMDAS. Parentheses first: 8 ÷ 4 = 2; then multiplication: 7 × 2 = 14; then addition: 5 + 14 + 6 = 25.

Combining like terms: 14x + 5x = (14 + 5)x = 19x. We cannot add unlike terms such as x and 4.

Quotient law of exponents: 4^5 / 4^2 = 4^(5-2) = 4^3.

The general procedure for simplifying an algebraic expression:
Step 1: Remove parentheses by multiplying factors (the Distributive Property).
Step 2: Combine like terms by adding their coefficients.
Step 3: Combine the constants.
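The worked answers above can be verified mechanically. A minimal sketch in Python/sympy (not part of the original page; the variable names mirror the expressions above):

from sympy import symbols, expand, sqrt

k, e, f, y, m = symbols('k e f y m')
print(expand(4*(6*k - 16) - 7*(2 + 3*k)))    # 3*k - 78
print(expand(-7*(3*e - 2*f + 4) + 6*e - 2))  # -15*e + 14*f - 30
print(expand(9*(2*y - 7) - 4*(y - 6)))       # 14*y - 39
print(expand(9*(4 - 7*m) + 4*m))             # 36 - 59*m
print(4 + sqrt(-4) + sqrt(-9) + 7)           # 11 + 5*I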
|
2021-08-02 13:25:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6587071418762207, "perplexity": 1129.3683488795532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154320.56/warc/CC-MAIN-20210802110046-20210802140046-00316.warc.gz"}
|
http://mathhelpforum.com/algebra/115767-regression-differences-their-formulas-print.html
|
Regression, differences and their formulas
• November 20th 2009, 09:58 AM
MathBane
Regression, differences and their formulas
The question:
For this exercise, round all regression parameters to three decimal places.
In the fishery sciences it is important to determine the length of a fish as a function of its age. One common approach, the von Bertalanffy model, uses a decreasing exponential function of age to describe the growth in length yet to be attained; in other words, the difference between the maximum length and the current length is supposed to decay exponentially with age. The following table shows the length L, in inches, at age t, in years, of the North Sea sole.
t = age | L = Length
1 |3.7
2 |7.5
3 |10
4 |11.5
5 |12.7
6 |13.5
7 |14
8 |14.4
Suppose the maximum length attained by the sole is 15.0 inches.
(a) Make a table showing, for each age, the difference D between the maximum length and the actual length L of the sole.
t = age D = Difference
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |0.6
(I was only able to get row 8 by using the information underneath the previous table.)
(b) Find the exponential function that approximates D. (Round all regression parameters to three decimal places.)
D =
(c) Find a formula expressing the length L of a sole as a function of its age t. (Round all parameters to three decimal places.)
L =
(I thought this would be $4.877(1.176)^t$, but that was incorrect.)
• November 20th 2009, 11:14 AM
masters
Quote:
Originally Posted by MathBane
[The question is quoted above; masters fills in the answers below.]
(a) Make a table showing, for each age, the difference D between the maximum length and the actual length L of the sole.
t = age D = Difference
1 |11.3
2 | 7.5
3 | 5
4 | 3.5
5 | 2.3
6 | 1.5
7 | 1
8 |0.6
(I was only able to get row 8 by using the information underneath the previous table.) Why?
(b) Find the exponential function that approximates D. (Round all regression parameters to three decimal places.)
D = 17.466(0.662)^t
(c) Find a formula expressing the length L of a sole as a function of its age t. (Round all parameters to three decimal places.)
L = a + b ln (t)
L = 3.939 + 5.261 ln (t)
(I thought this would be $4.877(1.176)^t$, but that was incorrect.)
Hi Mathbane,
The Difference table is fit by exponential regression, but the Length table is logarithmic. Note that since D = 15.0 - L by definition, the exponential fit for D also gives L = 15.0 - 17.466(0.662)^t, which is the decaying von Bertalanffy form the exercise describes.
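For readers who want to reproduce the numbers, here is a minimal sketch (not from the original thread) in Python/numpy: since D = a·b^t means ln D = ln a + t ln b, fitting a line to (t, ln D) recovers the parameters.

import numpy as np

t = np.arange(1, 9)
D = np.array([11.3, 7.5, 5.0, 3.5, 2.3, 1.5, 1.0, 0.6])

slope, intercept = np.polyfit(t, np.log(D), 1)  # fit ln D = intercept + slope*t
a, b = np.exp(intercept), np.exp(slope)
print(round(a, 3), round(b, 3))  # 17.466 0.662, matching D = 17.466(0.662)^t
# The length model then follows from L = 15.0 - D.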
|
2014-11-26 06:55:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7895025014877319, "perplexity": 1369.1551583704545}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931006064.45/warc/CC-MAIN-20141125155646-00092-ip-10-235-23-156.ec2.internal.warc.gz"}
|
http://imkean.com/leetcode/239-sliding-window-maximum/
|
07/16/2016
## Question
Given an array nums, there is a sliding window of size k which is moving from the very left of the array to the very right. You can only see the k numbers in the window. Each time the sliding window moves right by one position.
For example,
Given nums = [1,3,-1,-3,5,3,6,7], and k = 3.
Window position Max
--------------- -----
[1 3 -1] -3 5 3 6 7 3
1 [3 -1 -3] 5 3 6 7 3
1 3 [-1 -3 5] 3 6 7 5
1 3 -1 [-3 5 3] 6 7 5
1 3 -1 -3 [5 3 6] 7 6
1 3 -1 -3 5 [3 6 7] 7
Therefore, return the max sliding window as [3,3,5,5,6,7].
Note:
You may assume k is always valid, i.e., 1 ≤ k ≤ input array's size for a non-empty array.
Could you solve it in linear time?
Hint:
1. How about using a data structure such as deque (double-ended queue)?
## Solution
Result: Accepted Time: 116 ms
The idea: maintain a max-heap of (value, index) pairs. For every new element, push it onto the heap and read the top; if the top's index has fallen out of the current window (i - index >= k), pop it and look again (lazy deletion). The first surviving top is the maximum of the window.
#include <queue>
#include <vector>
using namespace std;

// One heap entry: an element's value together with its position in nums.
struct Node{
    int value, index;
    Node() : value(0), index(0) {}
    Node(int v, int i) : value(v), index(i) {}
    // Compare by value so the priority_queue keeps the largest value on top.
    bool operator < (const Node & rht) const
    {
        return this->value < rht.value;
    }
};

class Solution {
public:
    vector<int> maxSlidingWindow(vector<int>& nums, int k) {
        priority_queue<Node> que;
        vector<int> ret;
        // Pre-load the first k-1 elements.
        for(int i = 0; i < k - 1; i++)
            que.push(Node(nums[i], i));
        for(int i = k - 1; i < (int)nums.size(); i++)
        {
            que.push(Node(nums[i], i));
            Node tmp = que.top();
            // Lazy deletion: discard maxima whose index slid out of the window.
            while(i - tmp.index >= k)
            {
                que.pop();
                tmp = que.top();
            }
            ret.push_back(tmp.value);
        }
        return ret;
    }
};
Complexity Analysis
• Time Complexity: $O(n \log n)$
• Space Complexity: $O(n)$ in the worst case, since stale entries stay in the heap until they surface at the top
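The hint asks about a linear-time solution with a deque. Below is a minimal sketch of that alternative in Python (not part of the original post): keep a deque of indices whose values are decreasing, evicting smaller values from the back and expired indices from the front.

from collections import deque

def max_sliding_window(nums, k):
    dq = deque()   # indices; their values are decreasing from front to back
    out = []
    for i, x in enumerate(nums):
        while dq and nums[dq[-1]] <= x:  # smaller values can never be a window max
            dq.pop()
        dq.append(i)
        if dq[0] <= i - k:               # front index has left the window
            dq.popleft()
        if i >= k - 1:
            out.append(nums[dq[0]])
    return out

print(max_sliding_window([1, 3, -1, -3, 5, 3, 6, 7], 3))  # [3, 3, 5, 5, 6, 7]

Each index is pushed and popped at most once, so the pass is O(n) time and O(k) space.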
|
2020-02-22 01:06:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24647638201713562, "perplexity": 3726.322033936523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145621.28/warc/CC-MAIN-20200221233354-20200222023354-00062.warc.gz"}
|
https://tsfa.co/find-x-intercepts-of-a-function-63
|
## How to Find the X Intercept of a Function
Example queries: $y=\frac{x^2+x+1}{x}$; $f(x)=x^3$; $f(x)=\ln(x-5)$; $f(x)=\frac{1}{x^2}$; $y=\frac{x}{x^2-6x+8}$; $f(x)=\sqrt{x+3}$.
## Formula to Find x Intercept
Answer: Therefore the x-intercept is 3. You could also write it as a point: $$(3,0)$$. A more complicated example would be one where the equation representing the function itself is more complex; for those situations you need a little more algebra in order to find any intercepts.
## Finding the x and y Intercepts
The x-intercepts of a function f (x) is found by finding the values of x which make f (x) = 0. Write f (x) = 0, and solve for x to find the x-intercepts of a function. The method for solving for x will depend on the type of function (linear, quadratic
## Finding the x-intercepts of a function
Example: for x = y, setting y = 0 gives x = 0, so the x-intercept is (0, 0); likewise the y-intercept is (0, 0).
The x-intercepts of a function are also called the zeros of the function. Consider a function y = f(x). We already know that the x-intercept(s) are the point(s) where the graph intersects the x-axis.
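A minimal sketch of the method just described, using Python/sympy (the quadratic f below is an illustrative example, not one from this page):

from sympy import symbols, solve

x = symbols('x')
f = x**2 - 6*x + 8
print(solve(f, x))    # [2, 4], so the x-intercepts are (2, 0) and (4, 0)
print(f.subs(x, 0))   # 8, so the y-intercept is (0, 8)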
|
2023-03-31 02:23:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41278430819511414, "perplexity": 1001.8483693107302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949533.16/warc/CC-MAIN-20230331020535-20230331050535-00610.warc.gz"}
|
https://mc-stan.org/docs/functions-reference/mixed-operations.html
|
# 9 Mixed Operations
These functions perform conversions between Stan containers matrix, vector, row vector and arrays.
matrix to_matrix(matrix m)
Return the matrix m itself.
Available since 2.3
complex_matrix to_matrix(complex_matrix m)
Return the matrix m itself.
Available since 2.30
matrix to_matrix(vector v)
Convert the column vector v to a size(v) by 1 matrix.
Available since 2.3
complex_matrix to_matrix(complex_vector v)
Convert the column vector v to a size(v) by 1 matrix.
Available since 2.30
matrix to_matrix(row_vector v)
Convert the row vector v to a 1 by size(v) matrix.
Available since 2.3
complex_matrix to_matrix(complex_row_vector v)
Convert the row vector v to a 1 by size(v) matrix.
Available since 2.30
matrix to_matrix(matrix M, int m, int n)
Convert the matrix M to a matrix with m rows and n columns filled in column-major order.
Available since 2.15
complex_matrix to_matrix(complex_matrix M, int m, int n)
Convert the matrix M to a matrix with m rows and n columns filled in column-major order.
Available since 2.30
matrix to_matrix(vector v, int m, int n)
Convert a vector v to a matrix with m rows and n columns filled in column-major order.
Available since 2.15
complex_matrix to_matrix(complex_vector v, int m, int n)
Convert a vector v to a matrix with m rows and n columns filled in column-major order.
Available since 2.30
matrix to_matrix(row_vector v, int m, int n)
Convert a row_vector v to a matrix with m rows and n columns filled in column-major order.
Available since 2.15
complex_matrix to_matrix(complex_row_vector v, int m, int n)
Convert a row vector v to a matrix with m rows and n columns filled in column-major order.
Available since 2.30
matrix to_matrix(matrix A, int m, int n, int col_major)
Convert a matrix A to a matrix with m rows and n columns filled in row-major order if col_major equals 0 (otherwise, they get filled in column-major order).
Available since 2.15
complex_matrix to_matrix(complex_matrix A, int m, int n, int col_major)
Convert a matrix A to a matrix with m rows and n columns filled in row-major order if col_major equals 0 (otherwise, they get filled in column-major order).
Available since 2.30
matrix to_matrix(vector v, int m, int n, int col_major)
Convert a vector v to a matrix with m rows and n columns filled in row-major order if col_major equals 0 (otherwise, they get filled in column-major order).
Available since 2.15
complex_matrix to_matrix(complex_vector v, int m, int n, int col_major)
Convert a vector v to a matrix with m rows and n columns filled in row-major order if col_major equals 0 (otherwise, they get filled in column-major order).
Available since 2.30
matrix to_matrix(row_vector v, int m, int n, int col_major)
Convert a row vector v to a matrix with m rows and n columns filled in row-major order if col_major equals 0 (otherwise, they get filled in column-major order).
Available since 2.15
complex_matrix to_matrix(complex_row_vector v, int m, int n, int col_major)
Convert a row vector v to a matrix with m rows and n columns filled in row-major order if col_major equals 0 (otherwise, they get filled in column-major order).
Available since 2.30
matrix to_matrix(array[] real a, int m, int n)
Convert a one-dimensional array a to a matrix with m rows and n columns filled in column-major order.
Available since 2.15
matrix to_matrix(array[] int a, int m, int n)
Convert a one-dimensional array a to a matrix with m rows and n columns filled in column-major order.
Available since 2.15
complex_matrix to_matrix(array[] complex a, int m, int n)
Convert a one-dimensional array a to a matrix with m rows and n columns filled in column-major order.
Available since 2.30
matrix to_matrix(array[] real a, int m, int n, int col_major)
Convert a one-dimensional array a to a matrix with m rows and n columns filled in row-major order if col_major equals 0 (otherwise, they get filled in column-major order).
Available since 2.15
matrix to_matrix(array[] int a, int m, int n, int col_major)
Convert a one-dimensional array a to a matrix with m rows and n columns filled in row-major order if col_major equals 0 (otherwise, they get filled in column-major order).
Available since 2.15
complex_matrix to_matrix(array[] complex a, int m, int n, int col_major)
Convert a one-dimensional array a to a matrix with m rows and n columns filled in row-major order if col_major equals 0 (otherwise, they get filled in column-major order).
Available since 2.30
matrix to_matrix(array[] row_vector vs)
Convert a one-dimensional array of row vectors to a matrix, where the size of the array is the number of rows of the resulting matrix and the length of row vectors is the number of columns.
Available since 2.28
complex_matrix to_matrix(array[] complex_row_vector vs)
Convert a one-dimensional array of row vectors to a matrix, where the size of the array is the number of rows of the resulting matrix and the length of row vectors is the number of columns.
Available since 2.30
matrix to_matrix(array[,] real a)
Convert the two dimensional array a to a matrix with the same dimensions and indexing order.
Available since 2.3
matrix to_matrix(array[,] int a)
Convert the two dimensional array a to a matrix with the same dimensions and indexing order. If any of the dimensions of a are zero, the result will be a $$0 \times 0$$ matrix.
Available since 2.3
complex_matrix to_matrix(array[,] complex a )
Convert the two dimensional array a to a matrix with the same dimensions and indexing order.
Available since 2.30
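The column-major versus row-major fill order is easiest to see on a small example. The sketch below uses Python/numpy rather than Stan, purely as an illustration: order='F' (Fortran) is numpy's column-major fill, matching Stan's default, and order='C' is row-major, matching col_major == 0.

import numpy as np

v = np.array([1, 2, 3, 4, 5, 6])
print(np.reshape(v, (2, 3), order='F'))  # [[1 3 5]
                                         #  [2 4 6]]  column-major fill
print(np.reshape(v, (2, 3), order='C'))  # [[1 2 3]
                                         #  [4 5 6]]  row-major fill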
vector to_vector(matrix m)
Convert the matrix m to a column vector in column-major order.
Available since 2.0
complex_vector to_vector(complex_matrix m)
Convert the matrix m to a column vector in column-major order.
Available since 2.30
vector to_vector(vector v)
Return the column vector v itself.
Available since 2.3
complex_vector to_vector(complex_vector v)
Return the column vector v itself.
Available since 2.30
vector to_vector(row_vector v)
Convert the row vector v to a column vector.
Available since 2.3
complex_vector to_vector(complex_row_vector v)
Convert the row vector v to a column vector.
Available since 2.30
vector to_vector(array[] real a)
Convert the one-dimensional array a to a column vector.
Available since 2.3
vector to_vector(array[] int a)
Convert the one-dimensional integer array a to a column vector.
Available since 2.3
complex_vector to_vector(array[] complex a)
Convert the one-dimensional complex array a to a column vector.
Available since 2.30
row_vector to_row_vector(matrix m)
Convert the matrix m to a row vector in column-major order.
Available since 2.3
complex_row_vector to_row_vector(complex_matrix m)
Convert the matrix m to a row vector in column-major order.
Available since 2.30
row_vector to_row_vector(vector v)
Convert the column vector v to a row vector.
Available since 2.3
complex_row_vector to_row_vector(complex_vector v)
Convert the column vector v to a row vector.
Available since 2.30
row_vector to_row_vector(row_vector v)
Return the row vector v itself.
Available since 2.3
complex_row_vector to_row_vector(complex_row_vector v)
Return the row vector v itself.
Available since 2.30
row_vector to_row_vector(array[] real a)
Convert the one-dimensional array a to a row vector.
Available since 2.3
row_vector to_row_vector(array[] int a)
Convert the one-dimensional array a to a row vector.
Available since 2.3
complex_row_vector to_row_vector(array[] complex a)
Convert the one-dimensional complex array a to a row vector.
Available since 2.30
array[,] real to_array_2d(matrix m)
Convert the matrix m to a two dimensional array with the same dimensions and indexing order.
Available since 2.3
array[,] complex to_array_2d(complex_matrix m)
Convert the matrix m to a two dimensional array with the same dimensions and indexing order.
Available since 2.30
array[] real to_array_1d(vector v)
Convert the column vector v to a one-dimensional array.
Available since 2.3
array[] complex to_array_1d(complex_vector v)
Convert the column vector v to a one-dimensional array.
Available since 2.30
array[] real to_array_1d(row_vector v)
Convert the row vector v to a one-dimensional array.
Available since 2.3
array[] complex to_array_1d(complex_row_vector v)
Convert the row vector v to a one-dimensional array.
Available since 2.30
array[] real to_array_1d(matrix m)
Convert the matrix m to a one-dimensional array in column-major order.
Available since 2.3
array[] complex to_array_1d(complex_matrix m)
Convert the matrix m to a one-dimensional array in column-major order.
Available since 2.30
array[] real to_array_1d(array[...] real a)
Convert the array a (of any dimension up to 10) to a one-dimensional array in row-major order.
Available since 2.3
array[] int to_array_1d(array[...] int a)
Convert the array a (of any dimension up to 10) to a one-dimensional array in row-major order.
Available since 2.3
array[] complex to_array_1d(array[...] complex a)
Convert the array a (of any dimension up to 10) to a one-dimensional array in row-major order.
Available since 2.30
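Note that the flattening direction differs between targets: to_vector and to_array_1d on a matrix read the entries column-major, while to_array_1d on an array reads row-major. Another Python/numpy illustration (not Stan itself):

import numpy as np

m = np.array([[1, 2, 3],
              [4, 5, 6]])
print(m.flatten(order='F'))  # [1 4 2 5 3 6]  column-major, like to_vector(m)
print(m.flatten(order='C'))  # [1 2 3 4 5 6]  row-major, like to_array_1d on a 2-D array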
|
2022-08-13 17:40:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6299982070922852, "perplexity": 4912.39861839921}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571982.99/warc/CC-MAIN-20220813172349-20220813202349-00533.warc.gz"}
|
https://www.khanacademy.org/math/geometry-home/similarity/solving-problems-with-similar-and-congruent-triangles/e/solving-problems-with-similar-and-congruent-triangles
|
# Use similar & congruent triangles
Solve geometry problems with various polygons by using all you know about similarity and congruence.
You might need: Calculator
### Problem
In the diagram below, $\overline{RA}$ is parallel to $\overline{ET}$.
Find the length of $\overline{ST}$.
|
2016-09-25 02:10:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 11, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4207695424556732, "perplexity": 12077.28506554902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738659753.31/warc/CC-MAIN-20160924173739-00254-ip-10-143-35-109.ec2.internal.warc.gz"}
|
http://tex.stackexchange.com/tags/unicode/info
|
# Tag Info
is for questions about Unicode (an international standard for character encoding) and its implementations. XeTeX and LuaTeX provide Unicode support, thus ConTeXt as well if using one of these engines. For LaTeX, the UTF-8 implementation is the most common, provided by the `inputenc` package:
\usepackage[utf8]{inputenc}
|
2016-05-28 20:23:17
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9568364024162292, "perplexity": 7444.908133745888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049278091.17/warc/CC-MAIN-20160524002118-00049-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://www.elastic.co/blog/elasticsearch-curator-version-1-1-0-released
|
Tech Topics
# Elasticsearch Curator -- Version 1.1.0 Released
When Elasticsearch version 1.0.0 was released, it came with a new feature: Snapshot & Restore. The Snapshot portion of this feature allows you to create backups by taking a “picture” of your indices at a particular point in time. Soon after this announcement, the feature requests began to accumulate. Things like, “Add snapshots to Curator!” or “When will Curator be able to do snapshots?” If this has been your desire, your wish has finally been granted…and much, much more in addition!
## New Features
• Brand new CLI structure
• Snapshots
• Aliases
• Exclude indices by pattern
• Allocation Routing
• Show indices and snapshots
• Repository management (in a separate script)
• Documentation wiki
### A Brand-New Command-Line Structure
Please note: The change to the command-line structure means that your older cron entries will not work with Curator 1.1.0. Please remember to update your commands when upgrading to Curator 1.1.0.
The concept of commands has been added to make things more simple, and to make navigating the help output easier. Curator will do the same tasks as with previous versions, it just uses a slightly different format:
Old format command:
curator -d 30
New format command:
curator delete --older-than 30
Note that commands are not prepended by hyphens the way that flags are. Care was taken to ensure that similar flags now have similar, or identical names. For example the --older-than flag can be found in many of the commands. The implied value is identical in each case: indices older than the supplied number.
The new list of commands is:
• alias
• allocation
• bloom
• close
• delete
• optimize
• show
• snapshot
You can get the help output for any command by running:
curator [COMMAND] --help
All of the associated flags will then be displayed for that command.
### Snapshots
The snapshot command allows you to capture snapshots of indices into a pre-existing repository.
Curator will create one snapshot per index, and it will take its name from the index. For example, an index named logstash-2014.06.10 will yield a snapshot named logstash-2014.06.10. It will loop through indices creating snapshots for each one in sequence based on the criteria you provide.
curator snapshot --older-than 20 --repository REPOSITORY_NAME
This command will take snapshots of all indices older than 20 days and send them to the repository identified by REPOSITORY_NAME.
A script has been included with curator to assist in repository creation, called es_repo_mgr. It can assist in the creation of both filesystem and S3 type repositories.
In addition to being able to snapshot older indices, curator provides a way for you to upload the most recent indices. This is useful when uploading Elasticsearch Marvel indices so others can view your performance data for troubleshooting purposes.
curator snapshot --most-recent 3 --prefix .marvel- --repository REPOSITORY_NAME
With this command you can capture the three most recent Marvel indices to the named repository.
### Aliases
Curator now allows you to add indices to a pre-existing alias, and also remove indices from an alias. The alias must exist. Curator will not create it for you.
Supposing that I wanted to keep a rolling alias of previous week’s indices, called last_week. I could keep that updated with the following two commands:
curator alias --alias-older-than 7 --alias last_week
curator alias --unalias-older-than 14 --alias last_week
It is useful to point out here that Elasticsearch allows you to automatically have newly created indices be part of an alias with index templates. You could have new indices automatically part of an alias called this_week and use a command like:
curator alias --unalias-older-than 7 --alias this_week
to keep a this_week and last_week alias updated.
### Exclude Pattern
Sometimes you want to exclude a given index from operations. Previously you could only limit your selection by prefix and date. Now there’s an --exclude-pattern option that will allow you to filter out indices in addition to these other methods.
Supposing I never want the index logstash-2014.06.11 to be deleted, I could exclude this from deletes in this manner:
curator delete --older-than 15 --exclude-pattern 2014.06.11
Curator would match the default prefix of logstash- and would prevent an index with 2014.06.11 in it from being deleted.
### Allocation Routing
Elasticsearch allows you to tag your nodes (not in the graffiti sense). With these tags you have the power to control where your indices and shards go within your cluster. A common use-case for this is having high-powered nodes with SSD drives for indexing, but lower-powered boxes with spinning hard disk drives for older, less frequently searched indices. In order for this to work, your hdd nodes must have a setting in the elasticsearch.yml file to correspond, e.g. node.tag: hdd or node.tag: ssd. Curator now provides a way to automatically update the tag on an index so it can be re-routed during off-peak hours.
The command:
curator allocation --older-than 2 --rule tag=hdd
…will apply the setting index.routing.allocation.require.tag=hdd to indices older than 2 days. The require portion of this will tell Elasticsearch that that the shards of that index are required to reside on a node with node.tag: hdd.
### Show indices and snapshots
This is a simple way to get a quick look at what indices or snapshots you have:
curator show --show-indices
…will show all indices matching the default prefix of logstash-.
curator show --show-snapshots --repository REPOSITORY_NAME
…will show all snapshots matching the default prefix of logstash- within the named repository.
### Repository management
As mentioned previously, a helper script called es_repo_mgr was included with curator to assist in creating snapshot repositories. At this time, only fs and s3 types are supported. Please be sure to read the documentation for the indicated type before creating a repository. For example, each node using a fs type repository must be able to access the same shared filesystem, in the same path, identified by --location
Create a fs type repository:
es_repo_mgr create_fs --location '/tmp/REPOSITORY_LOCATION' --repository REPOSITORY_NAME
Delete a repository:
es_repo_mgr delete --repository REPOSITORY_NAME
### Documentation wiki
The documentation for Curator has been updated and put online in a wiki that anyone can edit. You can find more in-depth information about flags and commands there, and even add to the documentation if you feel so inclined.
Curator 1.1.0 is in the PyPI repository. To install:
pip install elasticsearch-curator
To upgrade:
pip uninstall elasticsearch-curator
pip install elasticsearch-curator
To upgrade from a version older than 1.0.0:
pip uninstall elasticsearch-curator
pip uninstall elasticsearch
pip install elasticsearch-curator
pip uninstall elasticsearch removes the older python elasticsearch module so the proper version can be re-installed as a dependency.
## Conclusion
The new features in Curator are awesome! This release marks a huge improvement in user experience as well. If you run into trouble or find something we missed, please log an issue in our GitHub Issues page. If you love Curator, please tell us about it! We love tweets with #elasticsearch in them!
Curator is just getting started! We’ll be working on a roadmap for Curator 2.0 soon. Thanks for reading, and Happy Curating!
• #### We're hiring
Work for a global, distributed team where finding someone like you is just a Zoom meeting away. Flexible work with impact? Development opportunities from the start?
|
2022-01-24 17:41:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2488076537847519, "perplexity": 5845.912057912094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304572.73/warc/CC-MAIN-20220124155118-20220124185118-00072.warc.gz"}
|
https://artofproblemsolving.com/wiki/index.php?title=Basic_Programming_With_Python&diff=93020&oldid=93017
|
# Difference between revisions of "Basic Programming With Python"
Important: It is extremely recommended that you read Getting Started With Python Programming before reading this unless you already know some programming knowledge.
This article will talk about some basic Python programming. If you don't even know how to install python, look here.
## Loops
There are two different kinds of loops in Python: the for loop and the while loop.
### The For Loop
The for loop iterates over a list, or an array, of objects. You have probably seen this code before:
for i in range(1,51):
    print(i)
This for loop iterates over the list of integers from 1 to 51, excluding the 51 and including the 1. That means it is a list from 1 to 50, inclusive. On every iteration, Python will print the number that the loop is iterating through.
For example, in the first iteration, i = 1, so Python prints 1.
In the second iteration, i = 2, so Python prints 2.
This continues so on until the number, 50, is reached. Therefore, the last number Python will print out is 50.
#### Program Example
Find $\sum_{n=1}^{50} 2^{n}.$
To do this task, we must create a for loop and loop over the integers from 1 to 50 inclusive:
for i in range(1,51):
Now what? We must keep a running total and increase it by $2^i$ every time:
total = 0
for i in range(1,51):
    total += 2**i
We must not forget to print the total at the end!
total = 0
for i in range(1,51):
    total += 2**i
print(total)
You must exit out of the for loop once you reach the print(total) line by pressing backspace.
Once you run your program, you should get an answer of $\boxed{2,251,799,813,685,246.}$
### The While Loop
While loops don't loop over a list. They loop over and over and over...until...a condition becomes false.
i = 3
total = 0
while i < 1000:
    total += i
    i += 3
print(total)
In this code, the while loop loops 333 times, until i becomes greater than or equal to 1000.
#### Program Example
Find $\sum_{n=1}^{50} 2^n$ without using a for loop.
We must create a while loop that will iterate until n is greater than $50.$
n = 1
total = 0
while n <= 50:
    total += 2**n
    n += 1
print(total)
We must not forget to include the n += 1 line at the end of the while loop!
If we run this, we will get the same answer as last time, $2,251,799,813,685,246.$
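As a quick check (not in the original article), the geometric-series closed form $2^{51} - 2$ agrees with both loops:

print(sum(2**n for n in range(1, 51)))  # 2251799813685246
print(2**51 - 2)                        # 2251799813685246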
|
2020-10-20 12:09:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18968413770198822, "perplexity": 627.0638348434816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107872686.18/warc/CC-MAIN-20201020105000-20201020135000-00049.warc.gz"}
|
https://phabricator.wikimedia.org/T286294
|
# Unicode characters are now allowed inside a \text{} element in math mode, should this be publicised or prohibited?
Open, Needs Triage, Public
Assigned To
None
Authored By
SalixAlba · Jul 7 2021, 4:43 PM (2021-07-07 16:43:08 UTC)
Subscribers
# Description
Following the removal of texvc (T188879) it is now possible to include Unicode characters in math mode. For example, \text{БЂЯғشفæ̃∮שא} does not cause parse errors and renders correctly (even with some right-to-left characters in the string).
This new functionality raises some questions:
• Is is stable across different rendering modes?
• Should we publicise this for example mentioning it in Help:Formula?
• Should we modify the linter to raise parse error?
# Related Objects
### Event Timeline
Restricted Application added a subscriber: Aklapper. Jul 7 2021, 4:43 PM
Slight issue with the right-to-left characters: the order shown in the rendered output differs from the order in which the characters were typed.
Following the removal of texvc (T188879) it is now possible to include Unicode characters in math mode.
Note that texvc was removed and was replaced by texvcjs. Moreover, I have created a LaTeX package called https://ctan.org/pkg/texvc?lang=en which is supposed to work exactly as the math tags within wikitext. Unfortunately, I did not update it recently.
Is it stable across different rendering modes?
No. One can not guarantee that. Especially in source mode everything will work;-)
Should we publicise this for example mentioning it in Help:Formula?
I think it is a good idea to mention that this can happen in general. However, I think it would be also good to fix this particular problem and focus on documenting that particular error message and not the specific circumstances. Maybe something along the lines. "The math you entered is a valid expression but during the rendering of the expression a technical error occurred...."
Should we modify the linter to raise parse error?
This depends upon further investigation. On https://mathoid2.wmflabs.org/info.html the test passes. So chances are high that this is a problem with restbase which will hopefully be removed soon.
|
2022-01-20 06:28:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6837987303733826, "perplexity": 2730.4079851827737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301720.45/warc/CC-MAIN-20220120035934-20220120065934-00631.warc.gz"}
|
https://www.semanticscholar.org/topic/dt-divisor/1565998
|
# dt - divisor
Known as: dt
## Papers overview
Semantic Scholar uses AI to extract papers important to this topic.
2018 · Corpus ID: 119085191
We study constraints on thermal phase transitions of ${\rm SU}(N_c)$ gauge theories by using the 't Hooft anomaly involving the…
2015 · Corpus ID: 55718284
Let $X$ be a smooth connected projective manifold, together with an snc orbifold divisor $\Delta$, such that the pair $(X,\Delta)$…
2013 · Corpus ID: 119188664
In order to support the odd moduli in models of (type IIB) string compactification, we classify the Calabi-Yau…
Highly Cited · 2011 · Corpus ID: 118609134
It is shown that the supersymmetry-preserving automorphisms of any non-linear σ-model on K3 generate a subgroup of the Conway…
Highly Cited · 2001
Hara [Ha3] and Smith [Sm2] independently proved that in a normal Q-Gorenstein ring of characteristic $p \gg 0$, the test ideal…
1999
We shall describe a canonical procedure to associate to any (germ of) holomorphic self-map F…
Highly Cited · 1995 · British Journal of Cancer · Corpus ID: 18205694
A point mutation in the mRNA of NADP(H): quinone oxidoreductase 1 (NQO1, DT-diaphorase) is believed to be responsible for reduced…
1980
A system for reliably applying interrupts to a numerical control system is described. The computer which processes the control…
Highly Cited · 1956 · Corpus ID: 5744339
An abstract ring in which all finitely generated ideals are principal will be called an F-ring. Let C(X) denote the ring of all…
1956 · K. Tong · Corpus ID: 124397350
Let $d_k(n)$ be the number of expressions of $n$ as $k$ factors, and let $D_k(x)$, $R_k(x) = (a_{k,0} + a_{k,1}\ln x + \dots + a_{k,k-1}\ln^{k-1}…
|
2020-06-05 10:16:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49759653210639954, "perplexity": 10282.799186461496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348496026.74/warc/CC-MAIN-20200605080742-20200605110742-00319.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/geometry/geometry-common-core-15th-edition/chapter-13-probability-13-1-experimental-and-theoretical-probability-lesson-check-page-827/1
|
# Chapter 13 - Probability - 13-1 Experimental and Theoretical Probability - Lesson Check - Page 827: 1
$\frac{1}{2}$
#### Work Step by Step
To calculate probability, we need to divide the number of chances for a certain outcome to occur by the total number of outcomes. In this case, there are $4$ even numbers on the spinner, and $8$ numbers in total. When we divide $4$ by $8$, we get $\frac{1}{2}$, which is the probability of getting an even number.
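A one-line check of the division (not part of the original answer), using Python's exact fractions:

from fractions import Fraction
print(Fraction(4, 8))  # 1/2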
|
2022-05-26 10:47:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.850943922996521, "perplexity": 326.52555642520275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662604794.68/warc/CC-MAIN-20220526100301-20220526130301-00485.warc.gz"}
|
https://mathoverflow.net/questions/221082/harmonic-extension-of-a-curve-by-different-parametrization
|
# harmonic extension of a curve by different parametrization
Let us consider a curve $\gamma :S^1 \rightarrow \mathbb{R}^3$ (or even a planar convex one if it simplifies). Then I look to the harmonic extension to the disc $h:\mathbb{D}\rightarrow \mathbb{R}^3$ (i.e $\Delta h=0$ and $h=\gamma$ on $\partial\mathbb{D}$).
If I change the parametrization of $\gamma$ into $\gamma\circ \phi$, where $\phi$ is a diffeomorphism of $S^1$ :
1)is that true that the new harmonic extension can be write $h\circ \psi$ where $\psi$ is a diffeomorphism of $\mathbb{D}$. (it seems to be true in the planar convex case thanks to Rado-Kneser-Choquet theorem).
2) is there any chance to know how to "compute" $\psi$ with respect to $\phi$?
Last but not least, and in fact the point I really want to understand, even in the planar case: it is a kind of Schwarz theorem. I would like to maximize the norm $\vert h_x \wedge h_y \vert(0)$ (the Jacobian in the planar case) with respect to the parametrization of $\gamma$. More precisely, suppose the area bounded by the curve equals $\pi$ in the planar case; otherwise, consider the area of the unique minimal surface when the curve is not planar but nice enough to ensure uniqueness. This is just a normalization for the following question:
3) is there (always) a parametrization of the curve such that $\vert h_x \wedge h_y \vert(0) \geq 1$ (I am sure but I get no proof). is there even an optimal one? is there is many? For the circle, the only one is the $\theta \mapsto e^{i\theta}$, up to rotation, but I can't say anything for another curve.
I am open to any suggestion. I have quickly read the book of Peter Duren, Harmonic Mappings in the Plane, but it seems to be not enough...
Edit (10/22/15):
My problem is indeed linked to minimal surfaces, though not to the Plateau problem directly but to the exterior Plateau problem. A simple by-product is the following question:
Let $\Gamma$ be a planar curve: is there $h:\mathbb{D} \rightarrow \mathbb{C}$ such that $\frac{dh}{dz}=1$ and $h_{\vert \partial \mathbb{D}}$ is a monotone parametrisation of $\Gamma$? The answer is probably (clearly?) yes. But the real question is how many (really different) such maps exist. For $\Gamma$ the unit circle, the answer is one; you can prove it using Fourier series. But for instance, for the image of $e^{i\theta}+\frac{1}{2} e^{i2\theta}$, I don't know how to proceed.
• I would suggest that you change the title of your question into something more descriptive. This will help catch the attention of the relevant people. – André Henriques Oct 16 '15 at 14:30
The answer to (1) in the space case is a definite 'no'. The actual image 'surface' $h(\Delta)$ in $\mathbb{R}^3$ can actually change when you reparametrize with $\phi:S^1\to S^1$, in which case, there can't be a reparametrization of the disk to match.
• Thanks for the answer. The first point was indeed "easy". In fact I know quite well the solution of Douglas-Rado (through Struwe's book). Like in Lawson, he minimises the energy among conformal parametrisations. But since my goal is 3) (1) and 2) are an attempt to make it more explicit), it can't be adapted here. If we consider $h:z\mapsto z+ z^2/4$, then $\vert h(\mathbb{D})\vert=9/8$, but $\vert h'(0)\vert=1$ and any conformal reparametrization (using the Möbius group) will stay below $9/8$. – Paul Oct 18 '15 at 16:15
|
2019-12-08 20:32:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.925433874130249, "perplexity": 183.0535689180832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540514893.41/warc/CC-MAIN-20191208202454-20191208230454-00116.warc.gz"}
|
https://questions.examside.com/past-years/jee/question/pgiven-below-are-two-statements-one-is-labelled-as-assert-jee-main-chemistry-some-basic-concepts-of-chemistry-qpwqkjglmvwbkvqs
1. JEE Main 2022 (Online) 26th July Evening Shift [+4 / -1]
Given below are two statements: one is labelled as Assertion $$\mathbf{A}$$ and the other is labelled as Reason $$\mathbf{R}$$.
Assertion A: Phenolphthalein is a $$\mathrm{pH}$$ dependent indicator, remains colourless in acidic solution and gives pink colour in basic medium.
Reason R: Phenolphthalein is a weak acid. It doesn't dissociate in basic medium.
In the light of the above statements, choose the most appropriate answer from the options given below.
A. Both A and R are true and R is the correct explanation of A.
B. Both A and R are true but R is NOT the correct explanation of A.
C. A is true but R is false.
D. A is false but R is true.
2. JEE Main 2022 (Online) 26th July Morning Shift [+4 / -1]
Which technique among the following is most appropriate for the separation of a mixture of $$100 \,\mathrm{mg}$$ of $$p$$-nitrophenol and picric acid?
A. Steam distillation
B. 2-5 ft long column of silica gel
C. Sublimation
D. Preparative TLC (Thin Layer Chromatography)
3. JEE Main 2022 (Online) 25th July Evening Shift [+4 / -1]
In a base vs. acid titration, at the end point methyl orange is present as
A. quinonoid form
B. heterocyclic form
C. phenolic form
D. benzenoid form
4. JEE Main 2022 (Online) 25th July Morning Shift [+4 / -1]
Match List I with List II
List I List II
(A) N$$_2$$(g) + 3H$$_2$$(g) $$\to$$ 2NH$$_3$$(g) (I) Cu
(B) CO(g) + 3H$$_2$$(g) $$\to$$ CH$$_4$$(g) + H$$_2$$O(g) (II) Cu/ZnO $$-$$ Cr$$_2$$O$$_3$$
(C) CO(g) + H$$_2$$(g) $$\to$$ HCHO(g) (III) Fe$$_x$$O$$_y$$ + K$$_2$$O + Al$$_2$$O$$_3$$
(D) CO(g) + 2H$$_2$$(g) $$\to$$ CH$$_3$$OH(g) (IV) Ni
Choose the correct answer from the options given below :
A. (A)-(II), (B)-(IV), (C)-(I), (D)-(III)
B. (A)-(II), (B)-(I), (C)-(IV), (D)-(III)
C. (A)-(III), (B)-(IV), (C)-(I), (D)-(II)
D. (A)-(III), (B)-(I), (C)-(IV), (D)-(II)
https://homework.cpm.org/category/CC/textbook/cc1/chapter/4/lesson/4.1.1/problem/4-10
4-10.
Read the Math Notes box in this lesson. Then complete the following division problems.
a. $683 \div 4$
• Set up the problem as shown in the Math Notes box from this lesson.
• Think ''4 goes into 600 how many times?''
• At least 100 times, but not more than 200.
• Think ''4 goes into 280 how many times?''
• Exactly 70 times.
• Think ''4 goes into 3 how many times?''
• Exactly 0.75 times.
Written out, the long division looks like this; the quotient is 170 with remainder 3, so $683 \div 4 = 170.75$:

      170
  4 ) 683
      4
      ---
       28
       28
       ---
        03
         0
        ---
         3
b. $212 \div 9$
• Follow the steps outlined in part (a).
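As a quick sanity check (a Python aside, not part of the lesson itself), the built-in divmod returns the quotient and remainder in one call:

q, r = divmod(683, 4)
print(q, r)            # -> 170 3, so 683 divided by 4 is 170.75
print(divmod(212, 9))  # -> (23, 5) for part (b)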
http://mathhelpforum.com/calculus/222361-continuity-problem.html
1. ## Continuity problem
Hello MHF, please help with this problem (and by the way, we have only done the continuity lesson).
We are given :
$f:[0,1]\rightarrow [0,1]$
$g:[0,1]\rightarrow [0,1]$
both $f$ and $g$ are continuous on $[0,1]$
$f(g(x)) = g(f(x))$
If $f(a) = a$ then $f(g(a)) = g(a)$
Show that $\exists \alpha \in [0,1]$ such that $f(\alpha) = g(\alpha)$.
P.S : use proof by contradiction
2. ## Re: Continuity problem
I tried to do this:
let $h(x) = f(x) - g(x)$
for every x in [0,1], $h(x) \neq 0$
then for every x in [0,1], $h(x) > 0$ (or $< 0$)
but after this I couldn't find a contradiction
3. ## Re: Continuity problem
Please I need to solve it before Monday morning.
4. ## Re: Continuity problem
Hi,
Since you need this by Monday morning, I can only assume it's either an exam question or homework for credit. So here are some hints:
Let a be a fixed point of f (f(a) = a) -- you need to prove why a exists. Now consider the sequence $x_1 = a$ and $x_{n+1} = g(x_n)$.
By the way, the above leads to a direct proof, not a proof by contradiction.
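A sketch of how that hint plays out (a reconstruction, not part of the original post): since $f:[0,1]\to[0,1]$ is continuous, $f(0) - 0 \geq 0$ and $f(1) - 1 \leq 0$, so the Intermediate Value Theorem gives a fixed point $a$ with $f(a) = a$. If $f(x_n) = x_n$, then $f(g(x_n)) = g(f(x_n)) = g(x_n)$, so every term of the sequence $x_1 = a$, $x_{n+1} = g(x_n)$ is a fixed point of $f$. Now suppose $f(x) \neq g(x)$ everywhere; since $h = f - g$ is continuous and never zero, assume WLOG $f > g$ on all of $[0,1]$. Then
$$x_{n+1} = g(x_n) < f(x_n) = x_n,$$
so $(x_n)$ is decreasing and bounded below, hence converges to some $\alpha \in [0,1]$. By continuity, $f(\alpha) = \lim f(x_n) = \lim x_n = \alpha$ and $g(\alpha) = \lim g(x_n) = \lim x_{n+1} = \alpha$, so $f(\alpha) = g(\alpha)$, contradicting $f > g$.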
5. ## Re: Continuity problem
To start you off, suppose without loss of generality
$f(x)>g(x) \quad \text{ for an open interval in } [0,1]$
You'll get a contradiction along the way.
Note: The auxiliary function h you defined could be used for a direct proof, but at that point you are assumed to know the Intermediate Value Theorem, etc.
Disclaimer: If my line of thought is wrong or it has missing holes, please correct and/or inform me.
Hi again,
https://www.physicsforums.com/threads/irrational-numbers.132050/
Irrational Numbers
1. Sep 14, 2006
Pythagorean
don't exist!!!
and spiders are your new gods, worship them...
2. Sep 14, 2006
BSMSMSTMSPHD
Um.... okay.
:uhh:
3. Sep 14, 2006
SpaceTiger
Staff Emeritus
New? Where have you been?
4. Sep 14, 2006
chroot
Staff Emeritus
Irrational numbers are just a little nutty. It's the imaginary numbers that don't exist.
- Warren
5. Sep 14, 2006
Pythagorean
Heh, I was trying to keep in character.
(The Pythagoreans tried to hide the existence of irrational numbers in nature, since it assaulted their belief that the universe was made completely of integer ratios.)
6. Sep 14, 2006
chroot
Staff Emeritus
I'll drown you, you dirty rat!
- Warren
7. Sep 14, 2006
Gokul43201
Staff Emeritus
Q: Why did the Pythagorean cross the road?
8. Sep 14, 2006
franznietzsche
To get to the other side?
9. Sep 14, 2006
Gokul43201
Staff Emeritus
No. It was a trick question. Pythagoreans are forbidden from walking on roads! <a little poetic license there>
But if I were guessing along your lines, I'd have gone for: "to get to the opposite side".
Last edited: Sep 14, 2006
10. Sep 14, 2006
chroot
Staff Emeritus
Only diagonal ones.
- Warren
11. Sep 14, 2006
Staff Emeritus
ALL numbers are imaginary! So what?
12. Sep 14, 2006
franznietzsche
No, all numbers are complex. Silly.
13. Sep 14, 2006
Pythagorean
interesting. Do you know the reasoning for this? Is it a theological constraint?
14. Sep 14, 2006
Pythagorean
this is for the generally uninformed public. Absolute geniuses like you and I know the form of perfection when we see it.
http://www.timart.be/Npaginas/foto/wolf_spider.jpg [Broken]
omg! 6:8 is 3:4 is *fap*fap*fap*drool*worship*
Last edited by a moderator: May 2, 2017
15. Sep 14, 2006
Gokul43201
Staff Emeritus
Not looked much into it. You've heard of the other crazy-@ rules they had, right?
http://everything2.com/index.pl?node=Pythagorean
16. Sep 14, 2006
Pythagorean
Oh man, this is going to take way too much energy.....
but atheism is so boring... :/
http://tex.stackexchange.com/questions/15498/my-first-beamerposter-portuguese-language-not-working/17064
# My first beamerposter, portuguese language not working
I'm trying to make a poster based on the example on this site: http://www-i6.informatik.rwth-aachen.de/~dreuw/latexbeamerposter.php and using this theme: http://www-i6.informatik.rwth-aachen.de/~dreuw/download/beamerthemeIcy.sty
When I try changing the title to something with Portuguese accents (with [portuguese]{babel}), the output is one blank page and some gibberish. Again, I only changed the title of the example posted on that website.
It's my first poster and I'm kinda lost...
forgot to mention: I'm using TeXLive 2010 under Arch Linux
Example:
.tex file http://pastebin.com/HeQe02S2 .sty theme file http://pastebin.com/mHNa0Dg4
This combination isn't working so well for me. Also: I need to use the beta symbol throught the text, but i can't make it Sans Serif (helvet).
Please add a minimal working example illustrating the problem. Also, add any error messages that you get after compilation of the example code. – Gonzalo Medina Apr 10 '11 at 23:07
adding `\usepackage{utf8}` and `\usepackage[utf8]{inputenc}` fixed the gibberish, but I still don't have word-breaking support, and thus proper line breaking. – Santiago Apr 11 '11 at 0:37
@Santiago: change the line input encoding: `\usepackage[utf8]{inputenc}`. That should fix the problem. (Unless you're getting errors from missing images etc. from the original example.) – Alan Munn Apr 11 '11 at 0:41
yes, that fixed it, but hyphenation isn't working and, as a consequence, line breaking is ugly (the columns aren't flush) – Santiago Apr 11 '11 at 0:44
@Santiago: it's hard to guess what the problem might be with no actual code (the code you posted shows no hyphenation problems); as I suggested before, post a minimal working example illustrating your hyphenation issues. – Gonzalo Medina Apr 11 '11 at 2:19
Besides loading babel with the `portuguese` option, you should also add the following to your preamble:
```
\usepackage[utf8]{inputenc}
```
which fixes the issue with non-ascii characters being displayed as "gibberish" (original comment by Alan Munn) and
```
\usepackage[T1]{fontenc}
```
which is necessary for the hyphenation to work (original comment by Mateus Araújo).
When you compile your latex, what appears into the output regarding `Babel` and hyphenation?
In my case this is what I get:
```
Babel <v3.8l> and hyphenation patterns for english, dumylang, nohyphenation, catalan, croatian, ukenglish, usenglishmax, galician, spanish, loaded.
```
I have written documents with beamer and latex in Spanish and English, and a beamerposter in English, and I've had no problem so far related to hyphenation.
http://programminghistorian.org/lessons/output-keywords-in-context-in-html-file
July 17, 2012
# Output Keywords in Context in an HTML File with Python
Reviewed by Miriam Posner and Jim Clifford
Note: You may find it easier to complete this lesson if you have already completed the previous lesson in this series.
## Lesson Goals
This lesson builds on Keywords in Context (Using N-grams), where n-grams were extracted from a text. Here, you will learn how to output all of the n-grams of a given keyword in a document downloaded from the Internet, and display them clearly in your browser window.
## Files Needed For This Lesson
• obo.py
If you do not have these files from the previous lesson, you can download a zip file from the previous lesson
## Making an N-Gram Dictionary
Our n-grams have an odd number of words in them for a reason. At this point, our n-grams don't actually have a keyword; they're just a list of words. However, if we have an odd-numbered n-gram, the middle word will always have an equal number of words to the left and to the right. We can then use that middle word as our keyword. For instance, ["it", "was", "the", "best", "of", "times", "it"] is a 7-gram of the keyword "best".
Since we have a long text, we want to be able to output all n-grams for our keyword. To do this we will put each n-gram into a dictionary, using the middle word as the key. To figure out the keyword for each n-gram we can use the index positions of the list. If we are working with 5-grams, for example, the left context will consist of terms indexed by 0, 1, the keyword will be indexed by 2, and the right context terms indexed by 3, 4. Since Python indexes start at 0, a 5-gram’s keyword will always be at index position 2.
That’s fine for 5-grams, but to make the code a bit more robust, we want to make sure it will work for any length n-gram, assuming its length is an odd number. To do this we’ll take the length of the n-gram, divide it by 2 and drop the remainder. We can achieve this using Python’s floor division operator, represented by two slashes, which divides and then returns an answer to the nearest whole number, always rounding down – hence the term “floor”.
print(7 // 2)
print(5 // 2)
print(3 // 2)
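(These print 3, 2 and 1 respectively: floor division rounds 3.5, 2.5 and 1.5 down to the nearest whole number.)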
Let’s build a function that can identify the index position of the keyword when given an n-gram with an odd number of words. Save the following to obo.py.
# Given a list of n-grams, identify the index of the keyword.
def nGramsToKWICDict(ngrams):
    keyindex = len(ngrams[0]) // 2
    return keyindex
To determine the index of the keyword, we have used the len property to tell us how many items are in the first n-gram, then used floor division to isolate the middle index position. You can see if this worked by creating a new program, get-keyword.py and running it. If all goes well, since we are dealing with a 5-gram, you should get 2 as the index position of the keyword as we determined above.
#get-keyword.py
import obo
test = 'this test sentence has eight words in it'
ngrams = obo.getNGrams(test.split(), 5)
print(obo.nGramsToKWICDict(ngrams))
Now that we know the location of the keywords, let’s add everything to a dictionary that can be used to output all KWIC n-grams of a particular keyword. Study this code and then replace your nGramsToKWICDict with the following in your obo.py module.
# Given a list of n-grams, return a dictionary of KWICs,
# indexed by keyword.
def nGramsToKWICDict(ngrams):
    keyindex = len(ngrams[0]) // 2
    kwicdict = {}
    for k in ngrams:
        if k[keyindex] not in kwicdict:
            kwicdict[k[keyindex]] = [k]
        else:
            kwicdict[k[keyindex]].append(k)
    return kwicdict
A for loop and if statement checks each n-gram to see if its keyword is already stored in the dictionary. If it isn’t, it’s added as a new entry. If it is, it’s appended to the previous entry. We now have a dictionary named kwicdict that contains all the n-grams, sortable by keyword and we can turn to the task of outputting the information in a more useful format as we did in Output Data as HTML File.
Try rerunning the get-keyword.py program and you should now see what’s in your KWIC dictionary.
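With the eight-word test sentence, the dictionary should look roughly like this (a sketch, assuming obo.getNGrams from the earlier lesson returns consecutive five-word windows):

{'sentence': [['this', 'test', 'sentence', 'has', 'eight']],
 'has':      [['test', 'sentence', 'has', 'eight', 'words']],
 'eight':    [['sentence', 'has', 'eight', 'words', 'in']],
 'words':    [['has', 'eight', 'words', 'in', 'it']]}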
## Outputting to HTML
### Pretty Printing a KWIC
“Pretty printing” is the process of formatting output so that it can be easily read by human beings. In the case of our keywords in context, we want to have the keywords lined up in a column, with the terms in the left-hand context right justified, and the terms in the right-hand context left justified. In other words, we want our KWIC display to look something like this:
  amongst them a   black   there was one
   first saw the   black   i turned to
had observed the   black   in the mob
    say who that   black   was no seeing
         i saw a   black   at first but
    swear to any   black   yes there is
      swear to a   black   than to a
...
This technique is not the best way to format text from a web designer’s perspective. If you have some experience with HTML we encourage you to use another method that will create a standards compliant HTML file, but for new learners, we just can’t resist the ease of the technique we’re about to describe. After all, the point is to integrate programming principles quickly into your research.
To get this effect, we are going to need to do a number of list and string manipulations. Let’s start by figuring out what our dictionary output will look like as it currently stands. Then we can work on refining it into what we want.
# html-to-pretty-print.py
import obo
# create dictionary of n-grams
n = 7
url = 'http://www.oldbaileyonline.org/browse.jsp?id=t17800628-33&div=t17800628-33'
text = obo.webPageToText(url)
fullwordlist = obo.stripNonAlphaNum(text)
ngrams = obo.getNGrams(fullwordlist, n)
worddict = obo.nGramsToKWICDict(ngrams)
print(worddict["black"])
As you can see when you run the above program, the output is not very readable yet. What we need to do is split the n-gram into three parts: before the keyword, the keyword, and after the keyword. We can then use the techniques learned in the previous chapters to wrap everything in HTML so that it is easy to read.
Using the same slice method as above, we will create our three parts. Open a Python shell and try the following examples. Pay close attention to what appears before and after the colon in each case. Knowing how to manipulate the slice method is a powerful skill for a new programming historian.
# calculate the length of the n-gram
kwic = 'amongst them a black there was one'.split()
n = len(kwic)
print(n)
-> 7
# calculate the index position of the keyword
keyindex = n // 2
print(keyindex)
-> 3
# display the items before the keyword
print(kwic[:keyindex])
-> ['amongst', 'them', 'a']
# display the keyword only
print(kwic[keyindex])
-> black
# display the items after the keyword
print(kwic[(keyindex+1):])
-> ['there', 'was', 'one']
Now that we know how to find each of the three segments, we need to format each to one of three columns in our display.
The right-hand context is simply going to consist of a string of terms separated by blank spaces. We’ll use the join method to turn the list entries into a string.
print(' '.join(kwic[(keyindex+1):]))
-> there was one
We want the keywords to have a bit of whitespace padding around them. We can achieve this by using a string method called center, which will align the text to the middle of the screen. We can add padding by making the overall string be longer than the keyword itself. The expression below adds three blank spaces (6/2) to either side of the keyword. We’ve added hash marks at the beginning and end of the expression so you can see the leading and trailing blanks.
print('#' + str(kwic[keyindex]).center(len(kwic[keyindex])+6) + '#')
-> #   black   #
Finally, we want the left-hand context to be right justified. Depending on how large n is, we are going to need the overall length of this column to increase. We do this by defining a variable called width and then making the column length a multiple of this variable (we used a width of 10 characters, but you can make it larger or smaller as desired). The rjust method handles right justification. Once again, we’ve added hash marks so you can see the leading blanks.
width = 10
print('#' + ' '.join(kwic[:keyindex]).rjust(width*keyindex) + '#')
-> #                amongst them a#
We can now combine these into a function that takes a KWIC and returns a pretty-printed string. Add this to the obo.py module. Study the code to make sure you understand it before moving on.
# Given a KWIC, return a string that is formatted for
# pretty printing.
def prettyPrintKWIC(kwic):
    n = len(kwic)
    keyindex = n // 2
    width = 10
    outstring = ' '.join(kwic[:keyindex]).rjust(width*keyindex)
    outstring += str(kwic[keyindex]).center(len(kwic[keyindex])+6)
    outstring += ' '.join(kwic[(keyindex+1):])
    return outstring
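As a quick check (assuming the function above has been saved into obo.py), the 7-gram from the beginning of this section comes back aligned:

import obo
kwic = 'amongst them a black there was one'.split()
print(obo.prettyPrintKWIC(kwic))
->                 amongst them a   black   there was one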
## Putting it All Together
We can now create a program that, given a URL and a keyword, wraps a KWIC display in HTML and outputs it in Firefox. This program begins and ends in a similar fashion as the program that computed word frequencies. Type or copy the code into your text editor, save it as html-to-kwic.py, and execute it. You will need to choose either obo.wrapStringInHTMLMac() or obo.wrapStringInHTMLWindows() as appropriate for your system, as done before.
# html-to-kwic.py
import obo

# create dictionary of n-grams
n = 7
url = 'http://www.oldbaileyonline.org/browse.jsp?id=t17800628-33&div=t17800628-33'

text = obo.webPageToText(url)

# pad the word list with '#' markers so that words near the start
# and end of the text can still serve as n-gram keywords
fullwordlist = ('# ' * (n//2)).split()
fullwordlist += obo.stripNonAlphaNum(text)
fullwordlist += ('# ' * (n//2)).split()

ngrams = obo.getNGrams(fullwordlist, n)
worddict = obo.nGramsToKWICDict(ngrams)

# output KWIC and wrap with html
target = 'black'
outstr = '<pre>'
if target in worddict:
    for k in worddict[target]:
        outstr += obo.prettyPrintKWIC(k)
        outstr += '<br />'
else:
    outstr += 'Keyword not found in source'
outstr += '</pre>'

obo.wrapStringInHTML('html-to-kwic', url, outstr)
The first part is the same as above. In the second half of the program, we’ve wrapped everything in the HTML pre tag (pre-formatted), which tells the browser not to monkey with any of the spacing we’ve added.
Also, notice that we use the in operator to make sure that the keyword actually occurs in our text. If it doesn't, we can print a message for the user before sending the output to Firefox. Try changing the target variable to a few other keywords. Try one you know isn't there to make sure your program doesn't output something when it shouldn't.
We have now created a program that looks for a keyword in a dictionary created from an HTML page on the web, and then outputs the n-grams of that keyword to a new HTML file for display on the web. All of the lessons up to this point have included parts of Python vocabulary and methods needed to create this final program. By referring to those lessons, you can now experiment with Python to create programs that accomplish specific tasks that will help in your research process.
## Code Syncing
This marks the end of this series of original lessons on Python. The finished code for the series can be downloaded as a zip file. If you are following along with the Mac / Linux version you may have to open the obo.py file and change "file:///Users/username/Desktop/programming-historian/" to the path to the directory on your own computer.
There is an additional lesson on using Python to download multiple records using Query Strings, marked as the next lesson.
Note: You are now prepared to move on to the next lesson in this series.
https://aviation.stackexchange.com/questions/53583/what-is-the-average-aerodynamic-load-on-a-control-surface-of-a-commuter-sized-ai/53629
# What is the average aerodynamic load on a control surface of a commuter-sized airplane?
I'm doing a study about aircraft hydraulic pump sizing. In order to do that, I need to know the size of a flight control actuator, then determine the maximum flow and pressure that the actuator needs.
Therefore, I need to know how much force that the flight control surface needs to be moved.
What is the average aerodynamic load on a flight control surface of a commuter-sized airplane (10-20 passengers)?
• If you're not too much into aerodynamic calculations and researching associated literature (I'm not really sure where best to start there...), another approach might be to find which actuators were used in real aircraft and make conclusions from that. – Cpt Reynolds Jul 17 '18 at 10:45
I think you don't actually want to know the load but rather the hinge moment of control surfaces. The actuator load is the hinge moment divided by the length of the control horn. Below is a rather poor sketch for a typical aileron linkage, but the principle is correct (source):
It already shows one popular way of reducing control forces: the tab, a little auxiliary control surface which moves against the "real" control surface. This reduces effectiveness a bit, but reduces forces by a lot. Here, the amount of deflection of the tab is controlled by a spring in its linkage, which is a clever way to adjust its deflection such that the actuation forces become more constant over speed.
Another way of reducing control forces is a horn: An extension of the surface forward of its hinge line, so the aerodynamic loads here balance those on the surface behind the hinge line. The picture below shows the left aileron of the ATR-72 which is moved by mechanical linkage (source)
This way, the lift loads on the control surface are mostly carried by the hinge and only the actuation loads need to be carried by the control rod or actuator. If you think you don't need all those nifty tricks, your actuator and hydraulic system will become much heavier than needed.
Why are two different methods used? The tab reduces the loads from deflection changes while the horn reduces those from angle of attack changes, too. When sized properly, both together will drive the hinge moment close to zero.
Why do I explain all this? It shows that your question does not have a simple answer. Rather, you need to specify exactly how your control surface looks and is moved, and only then can you start to calculate the actuator loads. I also want to show that a subsonic airplane for 10 - 20 passengers will be perfectly flyable with manual controls. The ATR-72 needs hydraulics only for the flaps, the spoilers, the brakes and the landing gear. Avoiding hydraulics for primary flight controls also lets it get away with single redundancy in its hydraulics system.
As you noted, there are lots of complexities to be considered when sizing control actuators, including the size of the control surface, the desired deflection angle, the actual deflection angle, hinge moments, boundary layer effects, etc. You can obtain a rough approximation of the force on the surface, though, starting with the simple definition $$P = \frac{F}{A}$$
where P is the dynamic pressure, q, and A is the exposed control surface area. Solving for force gives: $$F = PA$$ Then substituting dynamic pressure and exposed area results in: $$F = \frac{1}{2} \rho V^2 A \sin\delta$$
where A is the control surface area; multiplying it by the sine of the deflection angle $\delta$ yields the exposed area.
For a light transport aircraft, say a Beech 1900 (19 pax), the elevator has an area of 19.3 sqft. Using the logic above, this surface when deflected 5 degrees at cruise speed would feel a force of approximately 421 lbf. Your actuator sizing will ultimately need to account for the parameters above (and more), but hopefully this is an informative starting point.
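As a rough check of that figure, here is the same computation in Python; the sea-level density and the 272-knot cruise speed are assumptions chosen here to match the answer's numbers, not values given in it:

import math

rho = 0.002377           # sea-level air density in slug/ft^3 (assumed)
V = 272 * 1.68781        # assumed ~272 kt cruise, converted to ft/s
A = 19.3                 # Beech 1900 elevator area in sq ft (from the answer)
delta = math.radians(5)  # 5 degrees of deflection

F = 0.5 * rho * V**2 * A * math.sin(delta)
print(round(F), 'lbf')   # -> 421 lbf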
• There are two things missing: The control surface is part of an empennage which carries a load even without control surface deflections. Next, the question should be more about the hinge moment on that surface - that, divided by control horn length, is what determines actuator loads. – Peter Kämpf Jul 17 '18 at 17:46
• @peterkampf I see. When designing the control mechanisms, wouldn't the hinge moment be a design target? In other words, wouldn't you tweak the control horn and hinge locations to obtain an adequate hinge moment, where "adequate" is driven by the amount of force the control surface must oppose when deflected? – Geoff Jul 17 '18 at 18:13
• Yes, if you add area ahead of the hinge, loads go down. The best mechanism I have personally seen was on a Canberra - its ailerons could be moved with very low forces at flight speeds up to Mach 0.8. The downside here is an increased flutter risk - such mechanisms were the result of desperation. – Peter Kämpf Jul 17 '18 at 20:22
• Sin 5° = -0.9589, sin 25 = -0.1324. As the angle of deflection increases, the exposed area (A sin δ) should increase, not decrease... do you think this invalidates the above formula (the 3rd one)? – David Teahay Nov 15 '18 at 10:38
https://fundacja-pe.nazwa.pl/m8u337/0c251c-how-to-find-the-perimeter
A perimeter is a path that encompasses a two-dimensional shape; the term may be used either for the path or for its length. Put simply, the perimeter is the distance around the outside of a shape, and it can be thought of as the length of the shape's outline. The word has been derived from the Latin word circumferre, meaning to carry around, and the perimeter of a circle or ellipse is called its circumference. Calculating the perimeter has several practical applications: we often need one when putting up Christmas lights around the house or fencing the backyard garden.

For any straight-sided figure, you find the perimeter by adding up the lengths of all the sides. When side lengths are given, simply add them together; for an irregular shape with sides of 4 cm, 5 cm, 8 cm, 10 cm, 3 cm and 6 cm, the perimeter is 4 cm + 5 cm + 8 cm + 10 cm + 3 cm + 6 cm = 36 cm. For regular shapes there are shortcut formulas, because some or all of the sides are equal:

- Rectangle: P = 2 × (length + width)
- Square: P = 4s, where s is the side length
- Equilateral triangle: P = 3 × side
- Regular pentagon: P = 5 × side
- Regular hexagon: P = 6a
- Parallelogram: P = 2(l + w)
- Trapezoid (a four-sided polygon with at least one pair of parallel sides): P = a + b + c + d
- Circle: circumference C = πd; the ratio of circumference to diameter is approximately 3.142
- Semicircle: P = ½(πd) + d, the curved edge plus the straight edge

While calculus may be needed to find the perimeter of irregular curved shapes, geometry is sufficient for most regular shapes. The exception is the ellipse: there are many formulas for it, but its perimeter can only be approximated. For a right triangle with a missing side, solve for it with the Pythagorean theorem; if you know side-angle-side information, solve for the missing side using the Law of Cosines, then add the three sides.

Worked examples:

1. Square with side s = 5 cm: substituting into the formula, P = 4 × 5 cm = 20 cm.
2. Rectangle with area 108 cm² and length 12 cm: the width is 108 ÷ 12 = 9 cm, so P = 2 × (12 + 9) cm = 2 × 21 cm = 42 cm.
3. Semicircle with diameter 12 cm: the curved edge is (3.14 × 12) ÷ 2 = 37.68 ÷ 2 = 18.84 cm; adding the straight edge gives P = 18.84 + 12 = 30.84 cm.
4. Perimeter from area: for some shapes (squares, equilateral triangles, circles) you can find the perimeter from the area. A square with an area of 64 square meters has side √64 = 8 m, so its perimeter is 4 × 8 = 32 m.

For compound or non-standard shapes you still find the distance around by adding together the length of each side (practice worksheets often grade these: level 1 is a rectangle, level 2 an L-shaped compound shape, level 3 a more complicated compound shape). Finding the area of non-standard shapes is a bit different: you create regions within the shape whose areas you can find, and add those areas together. Keep perimeter and area distinct: the perimeter of a field is the length of its boundary, while the area is how much tarp you would need to cover it.

The page also walks through a small program that reads the rectangle's length and breadth from the user, computes the area and the perimeter (perimeter = 2 × (length + width)), and displays the result on the screen.
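A minimal sketch of such a program in Python (the original page used the variable names per, len and bre; len is renamed here because it shadows a Python built-in):

# Read the rectangle's dimensions, then compute and display
# its area and perimeter on the screen.
length = float(input('Enter length of rectangle: '))
breadth = float(input('Enter breadth of rectangle: '))

area = length * breadth
perimeter = 2 * (length + breadth)

print('Area:', area)
print('Perimeter:', perimeter)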
https://practice.geeksforgeeks.org/problems/possible-paths3834/1
Possible paths
Medium | Accuracy: 52.31% | Submissions: 1618 | Points: 4
Given a directed graph and two vertices 'u' and 'v' in it, find the number of possible walks from 'u' to 'v' with exactly k edges on the walk, modulo 10^9 + 7.
Example 1:
Input: graph = {{0,1,1,1}, {0,0,0,1}, {0,0,0,1}, {0,0,0,0}}, u = 0, v = 3, k = 2
Output: 2
Explanation: Let source 'u' be vertex 0, destination 'v' be 3 and k be 2. The output is 2, as there are two walks from 0 to 3 with exactly 2 edges: {0, 2, 3} and {0, 1, 3}.
You don't need to read or print anything. Your task is to complete the function MinimumWalk(), which takes graph, u, v and k as input parameters and returns the total number of possible walks from u to v using exactly k edges, modulo 10^9 + 7.
Note: In the graph, if graph[i][j] = 1, it means there is a directed edge from vertex i to j.
Expected Time Complexity: O(n^3)
Expected Space Complexity: O(n^3)
Constraints:
1 ≤ n ≤ 50
1 ≤ k ≤ n
0 ≤ u, v ≤ n-1
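For orientation, one standard approach is a dynamic program over the number of edges used; the Python sketch below is an illustration, not the site's reference solution (which expects a function named MinimumWalk), and matrix exponentiation would reduce the k-dependence to log k:

MOD = 10**9 + 7

def count_walks(graph, u, v, k):
    # dp[j] = number of walks from u to j that use exactly e edges;
    # e starts at 0 (only the empty walk, sitting at u) and grows to k.
    n = len(graph)
    dp = [0] * n
    dp[u] = 1
    for _ in range(k):
        ndp = [0] * n
        for i in range(n):
            if dp[i]:
                for j in range(n):
                    if graph[i][j]:
                        ndp[j] = (ndp[j] + dp[i]) % MOD
        dp = ndp
    return dp[v]

# Example 1 from above:
g = [[0, 1, 1, 1], [0, 0, 0, 1], [0, 0, 0, 1], [0, 0, 0, 0]]
print(count_walks(g, 0, 3, 2))   # -> 2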
http://projecteuclid.org/euclid.hha
|
## Homology, Homotopy and Applications
Homology, Homotopy and Applications (HHA) is a fully refereed international journal dealing with homology and homotopy in algebra and topology and their applications to the mathematical sciences.
The Taylor towers for rational algebraic $K$-theory and Hochschild homology. Volume 4, Number 1 (2002)
Classification of di-embeddings of the $n$-cube into $\mathbb{R}^n$. Volume 9, Number 1 (2007)
|
2014-08-01 13:54:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6102442145347595, "perplexity": 3693.3884118964297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274987.43/warc/CC-MAIN-20140728011754-00138-ip-10-146-231-18.ec2.internal.warc.gz"}
|
https://benediktehinger.de/blog/science/matlab-performance-for-loops-vs-vectorization-vs-bsxfun/
|
# [matlab] performance for-loops vs. vectorization vs. bsxfun
From time to time I explain certain concepts to my students. To archive those, and as an extended memory, I share them here. We also recently had some discussion on vectorization in our research group, e.g. in python and matlab, with one claim being that for-loops in matlab perform much better than they used to.
## Goal
Show that for-loops are still quite slow in matlab, and compare bsxfun against vectorized arithmetic expansion.
## The contenders
• good old for-loop: easy to understand, can be found everywhere, slow
• arithmetic expansion: medium difficulty, should generally be used, fast
• bsxfun: somewhat difficult to understand, I use it regularly, fast (often)
## Comparisons
While demonstrating this to my student, I noticed that subsetting an array has interesting effects on the performance differences. The same is true for different array sizes. Therefore, I decided to systematically compare those.
I subtract one row from either a subset (the first 50 rows; dashed lines in the figure) or all rows of an [n x m] matrix, with n = [100, 1000, 10 000] and m = [10, 100, 1000, 10 000]. The plots show mean + SE.
## Three take home messages:
• for loop is very slow
• vectorization is fastest for small first dimension, then equally fast as bsxfun
• bsxfun is fastest if one needs to subset a medium sized array (n x m >100 x 1000), but see update below!
## Update:
Prompted by Anne Urai, I redid the analysis with multiplication & division. The pattern is the same. I did notice that allocating new matrices before doing the arithmetic expansion (vectorization) results in the same behaviour as bsxfun (but takes more lines of code):
A = data(ix,:);
B = data(1,:);
x = A./B;
## matlab code
tAll = [];
for dim1 = [100 1000 10000]
for dim2 = [10 100 1000 10000]
tStart = tic;
for subset = [0 1]
if subset
ix = 1:50;
else
ix = 1:dim1;
end
for run = 1:10
data = rand(dim1,dim2);
% for-loop
x = data;
tic
for k= 1:size(data,2)
x(ix,k) = data(ix,k)-data(1,k);
end
t = toc;
tAll = [tAll; table(dim1,dim2,subset,{'for-loop'},t)];
%vectorized
tic
x = data(ix,:)-data(1,:);
t = toc;
tAll = [tAll; table(dim1,dim2,subset,{'vectorization'},t)];
% bsxfun
tic
x= bsxfun(@minus,data(ix,:),data(1,:));
t = toc;
tAll = [tAll; table(dim1,dim2,subset,{'bsxfun'},t)];
end
end
fprintf('finished dim1=%i,dim2=%i - took me %.2fs\n',dim1,dim2,toc(tStart))
end
end
% Plotting using the awesome GRAMM-toolbox
% https://github.com/piermorel/gramm
figure
g = gramm('x',log10(tAll.dim2),'y',log10(tAll.t),'color',tAll.Var4,'linestyle',tAll.subset);
g.facet_grid([],tAll.dim1)
g.stat_summary()
g.set_names('x','log10(second dimension [n x *M*])','y','log10(time) [log10(s)]','column','first dimension [ *N* x m]','linestyle','subset 1:50?')
g.draw()
Comment (22 November 2017):
In R2016b and later, Matlab automatically applies arithmetic expansion, so that bsxfun is no longer necessary: https://blogs.mathworks.com/loren/2016/10/24/matlab-arithmetic-expands-in-r2016b/ It would be interesting to see whether there is a performance difference between bsxfun and vectorisation.
Reply (23 November 2017):
I think
x = data(ix,:)-data(1,:);
uses the arithmetic expansion already; I clarified this in the article. All calculations were performed in R2016b and will fail in earlier versions. I repeated the analysis with multiplication and division and the general shape of the results is identical, i.e. bsxfun is faster if a subset is selected; without subset-indexing, the performance seems identical.
Prompted by your comment I now tried this:
A = data(ix,:);
B = data(1,:);
x = A./B;
which shows that arithmetic expansion performs at the same speed as bsxfun even for indexed matrices, except for small arrays, where arithmetic expansion is faster.
|
2018-10-22 08:33:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5505030155181885, "perplexity": 6938.891887383113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514879.30/warc/CC-MAIN-20181022071304-20181022092804-00344.warc.gz"}
|
http://mathoverflow.net/revisions/48814/list
|
My favorite example from algebraic topology is René Thom's work on cobordism theory. The problem of classifying manifolds up to cobordism looks totally intractable at first glance. In low dimensions ($0,1,2$), it is easy, because manifolds of these dimensions are completely known. With hard manual labor, one can maybe treat dimensions 3 and 4. But in higher dimensions, there is no chance to proceed by geometric methods.
|
2013-05-24 17:50:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6876145601272583, "perplexity": 349.05478410220695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704933573/warc/CC-MAIN-20130516114853-00034-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.hackmath.net/en/math-problem/2164?tag_id=154,59
|
# Acceleration
The car accelerates at a rate of 0.5 m/s². How long does it take to travel 400 meters, and what will its speed be then?
Correct result:
t = 40 s
v = 20 m/s
#### Solution:
$a = 0.5\ \text{m/s}^2, \quad s = 400\ \text{m}$
$s = \tfrac{1}{2} a t^2 \implies t = \sqrt{2 \cdot s / a} = \sqrt{2 \cdot 400 / 0.5} = 40\ \text{s}$
$v = a \cdot t = 0.5 \cdot 40 = 20\ \text{m/s}$
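As a quick check of the arithmetic, a short Python computation written for this example:
import math
a = 0.5    # acceleration, m/s^2
s = 400.0  # distance, m
t = math.sqrt(2 * s / a)  # from s = (1/2) a t^2
v = a * t                 # from v = a t
print(t, v)  # 40.0 s and 20.0 m/s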
|
2020-07-15 02:34:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4248201251029968, "perplexity": 1479.7712499075844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657154789.95/warc/CC-MAIN-20200715003838-20200715033838-00498.warc.gz"}
|
https://uncertainpy.readthedocs.io/en/latest/examples/interneuron.html
|
A multi-compartment model of a thalamic interneuron implemented in NEURON¶
In this example we illustrate how Uncertainpy can be used with models implemented in NEURON. For this example, we select a previously published model of an interneuron in the dorsal lateral geniculate nucleus (Halnes et al., 2011). Since the model is implemented in NEURON, the original model can be used directly with Uncertainpy through NeuronModel. The code for this case study is found in /examples/interneuron/uq_interneuron.py. To run this example you need both the NEURON simulator and the interneuron model saved in the folder /interneuron_model/.
In the original modeling study, a set of 7 parameters was tuned manually, through trial and error, until the interneuron model obtained the desired response characteristics. The final parameter set is:
| Parameter | Value | Unit | NEURON variable | Meaning |
| --- | --- | --- | --- | --- |
| $g_{\mathrm{Na}}$ | 0.09 | $\text{S/cm}^2$ | gna | Max $\text{Na}^+$ conductance in soma |
| $g_{\mathrm{Kdr}}$ | 0.37 | $\text{S/cm}^2$ | gkdr | Max delayed rectifying $\text{K}^+$ conductance in soma |
| $g_{\mathrm{CaT}}$ | 1.17e-5 | $\text{S/cm}^2$ | gcat | Max T-type $\text{Ca}^{2+}$ conductance in soma |
| $g_{\mathrm{CaL}}$ | 9e-4 | $\text{S/cm}^2$ | gcal | Max L-type $\text{Ca}^{2+}$ conductance in soma |
| $g_{\mathrm{h}}$ | 1.1e-4 | $\text{S/cm}^2$ | ghbar | Max conductance of a non-specific hyperpolarization-activated cation channel in soma |
| $g_{\mathrm{AHP}}$ | 6.4e-5 | $\text{S/cm}^2$ | gahp | Max afterhyperpolarizing $\text{K}^+$ conductance in soma |
| $g_{\mathrm{CAN}}$ | 2e-8 | $\text{S/cm}^2$ | gcanbar | Max conductance of a $\text{Ca}^{2+}$-activated non-specific cation channel in soma |
To perform an uncertainty quantification and sensitivity analysis of this model, we assume each of these 7 parameters has a uniform uncertainty distribution in the interval $\pm 10\%$ around its original value. We create these parameters much as we did in the Hodgkin-Huxley example:
# Define a parameter list
parameters= {"gna": 0.09,
"gkdr": 0.37,
"gcat": 1.17e-5,
"gcal": 0.0009,
"ghbar": 0.00011,
"gahp": 6.4e-5,
"gcanbar": 2e-8}
# Create the parameters
parameters = un.Parameters(parameters)
# Set all parameters to have a uniform distribution
# within a 20% interval around their fixed value
parameters.set_all_distributions(un.uniform(0.2))
A point-to-point comparison of voltage traces is often uninformative, and we therefore want to perform a feature based analysis of the model. Since we examine a spiking neuron model, we choose the features in SpikingFeatures:
# Initialize the features
features = un.SpikingFeatures(features_to_run="all")
We study the response of the interneuron to a somatic current injection between $1000 \text{ ms} < t < 1900 \text{ ms}$. SpikingFeatures needs to know the start and end time of this stimulus to be able to calculate certain features; they are specified through the stimulus_start and stimulus_end arguments when initializing NeuronModel. Additionally, the interneuron model uses adaptive time steps, meaning we have to set interpolate=True. In this way we tell Uncertainpy to perform an interpolation to get the output on a regular form before performing the analysis. We also give the path to the folder where the NEURON model is stored with path="interneuron_model/". NeuronModel loads the NEURON model from mosinit.hoc, sets the parameters of the model, evaluates the model, and returns the somatic membrane potential of the neuron (the voltage of the section named "soma"). NeuronModel therefore does not require a model function.
# Initialize the model with the start and end time of the stimulus
model = un.NeuronModel(path="interneuron_model/", interpolate=True,
stimulus_start=1000, stimulus_end=1900)
We set up the problem, adding our features, before using polynomial chaos expansion with point collocation to compute the statistical metrics for the model output and all features. We also set the seed to make the result easier to reproduce.
# Perform the uncertainty quantification
UQ = un.UncertaintyQuantification(model,
parameters=parameters,
features=features)
# We set the seed to make the result easier to reproduce
data = UQ.quantify(seed=10)
The complete code becomes:
import uncertainpy as un
# Define a parameter list
parameters= {"gna": 0.09,
"gkdr": 0.37,
"gcat": 1.17e-5,
"gcal": 0.0009,
"ghbar": 0.00011,
"gahp": 6.4e-5,
"gcanbar": 2e-8}
# Create the parameters
parameters = un.Parameters(parameters)
# Set all parameters to have a uniform distribution
# within a 20% interval around their fixed value
parameters.set_all_distributions(un.uniform(0.2))
# Initialize the features
features = un.SpikingFeatures(features_to_run="all")
# Initialize the model with the start and end time of the stimulus
model = un.NeuronModel(path="interneuron_model/", interpolate=True,
stimulus_start=1000, stimulus_end=1900)
# Perform the uncertainty quantification
UQ = un.UncertaintyQuantification(model,
parameters=parameters,
features=features)
# We set the seed to make the result easier to reproduce
data = UQ.quantify(seed=10)
|
2021-04-16 01:34:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6383770704269409, "perplexity": 3638.453736869785}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088471.40/warc/CC-MAIN-20210416012946-20210416042946-00073.warc.gz"}
|
https://ec.gateoverflow.in/3144/gate-ece-2000-question-2-2
|
Use the data of the figure. The current $i$ in the circuit of the figure is
1. $-2 \mathrm{~A}$
2. $2 \mathrm{~A}$
3. $-4 \mathrm{~A}$
4. $+4 \mathrm{~A}$
|
2022-12-09 11:33:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6955088973045349, "perplexity": 261.48034664351627}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711396.19/warc/CC-MAIN-20221209112528-20221209142528-00687.warc.gz"}
|
https://lavelle.chem.ucla.edu/forum/viewtopic.php?p=53469
|
## Zero Order and k [ENDORSED]
$\frac{d[R]}{dt}=-k; [R]=-kt + [R]_{0}; t_{\frac{1}{2}}=\frac{[R]_{0}}{2k}$
Michelle_Nguyen_3F
### Zero Order and k
Hello!
So I know that if we plot [A] vs time and we get a straight line, then the zero order reaction has a slope of -k. However, is it possible for k to be positive as well? Thank you!
Timothy_Yu_Dis3A
### Re: Zero Order and k [ENDORSED]
For [A] vs. Time and a zero order reaction, I think k is always equal to negative slope. We only get a positive k when we have a second order reaction for the plot of 1/[A] vs. Time.
Ariana de Souza 4C
### Re: Zero Order and k
but the slope is -k. So wouldn't k have to be positive, for there to be a negative slope?
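For reference, following the rate law quoted in the thread header: $[R] = -kt + [R]_{0}$, so a plot of $[R]$ vs. time is a straight line with slope $-k$. Since rate constants are defined to be positive, $k > 0$ corresponds exactly to a negative slope.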
|
2021-01-21 03:00:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6327031850814819, "perplexity": 1934.8152013529725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703522150.18/warc/CC-MAIN-20210121004224-20210121034224-00403.warc.gz"}
|
https://mathoverflow.net/questions/277566/what-is-the-total-space-of-a-stack-after-all
|
# What is the total space of a stack after all?
From my general experience, I think of what follows as some kind of taboo question: in my imagination, everybody wants an answer to this but somehow thinks it shall not be asked.
OK among many ways to present a stack, I choose this one: we are given a Grothendieck topos $\mathbf X$ represented by sheaves of sets on a site $(\mathbb C,J)$, and then we have "a large category $\mathscr C$ in the world of $\mathbf X$", that is, a presheaf $\mathbb C^{\mathrm{op}}\to\text{Categories}$ satisfying the (three-step) glueing conditions w. r. t. $J$. In this question, by stacks are meant such $\mathscr C$'s. More precisely, we speak about stacks on $\mathbf X$, or stacks on $(\mathbb C,J)$.
Given that, we may consider the notion of "presheaf of $\mathbf X$-world sets on $\mathscr C$ in the $\mathbf X$-world". Again, there are several ways to define this, for example, as another gadget $\mathscr E$ of the same kind as $\mathscr C$ together with a "functor in the $\mathbf X$-world" $\mathscr E\to\mathscr C$ which is a discrete fibration.
All such discrete fibrations form a category which I will denote by $\operatorname{Sets}(\mathbf X)^{\mathscr C^{\mathrm{op}}}$, since, I believe, it can be appropriately described as the category of contravariant functors, in the $\mathbf X$-world, from $\mathscr C$ to $\operatorname{Sets}(\mathbf X)$, the latter being yet another gadget of the same kind as $\mathscr C$, with the "underlying" $\mathbb C^{\mathrm{op}}\to\text{Categories}$ sending $c\in\mathbb C$ to $\mathbf X/a(h_c)$ (slice over the associated sheaf of $h_c:=\hom_{\mathbb C}(-,c)$).
The question now is simply this: under what conditions does it happen that there is another Grothendieck topos $\mathbf Y$ such that the category $\operatorname{Sets}(\mathbf X)^{\mathscr C^{\mathrm{op}}}$ is equivalent to $\mathbf Y$?
Remarks
I am primarily interested in the case when $\mathscr C$ is the associated stack of an internal category of $\mathbf X$. I believe in this case several things simplify.
Since $\mathscr C$ is in general not small (i. e. not the externalization, in a known way, of an internal category of $\mathbf X$), there is in general no well-defined geometric morphism $\operatorname{Sets}(\mathbf X)^{\mathscr C^{\mathrm{op}}}\to\mathbf X$, but even if there is no such morphism, I believe it is still natural to call $\mathbf Y$, when it exists, the total space of the stack $\mathscr C$.
Whereas if there is such a geometric morphism, it still might be different from the one with inverse image "constant presheaf" and direct image "$\varprojlim_{\mathscr C}$". Or it does coincide but is not bounded. Or further, although not bounded, is $\textit{locally}$ bounded. Hence subquestion: can such things happen?
There is a variation which might be needed to have more natural examples - when $\mathscr C$ comes naturally equipped with its own "$\mathbf X$-world Grothendieck topology" which one cannot ignore, i. e. one has to consider the $\mathbf X$-world $\textit{sheaves}$ rather than $\operatorname{Sets}(\mathbf X)^{\mathscr C^{\mathrm{op}}}$ to obtain something sensible.
Finally, there are the natural reverse questions: which geometric morphisms $f:\mathbf Y\to\mathbf X$ are of this form? And for those which are, what additional data on $f$, if any, make it possible to recover the stack $\mathscr C$?
• Just a remark: if the topology $J$ is subcanonical, every representable is a sheaf, so $a(h_c)$ is the same as $h_c$ in this case. – Qfwfq Jul 30 '17 at 11:07
• @Qfwfq Right. I would be completely happy to have an answer for this case. – მამუკა ჯიბლაძე Jul 30 '17 at 11:32
Suppose $\mathscr{C}$ is the stackification of an internal category $C$ in $\mathbf{X}$. In this case, since $\mathrm{Sets}(\mathbf{X})$ is a stack, morphisms of stacks $\mathscr{C}^{\mathrm{op}} \to \mathrm{Sets}(\mathbf{X})$ are equivalent (by the universal property of stackification) to morphisms $C^{\mathrm{op}} \to \mathrm{Sets}(\mathbf{X})$, which in turn are equivalent to $\mathbf{X}$-internal discrete fibrations over $C$. (Probably I am using here the fact that by the "comparison lemma", stacks over $\mathbb{C}$ are equivalent to stacks over $\mathbf{X}$ with its canonical topology.)
The category of such is the "$\mathbf{X}$-indexed functor category" $[C^{\mathrm{op}},\mathbf{X}]$, which is a Grothendieck topos equipped with a bounded geometric morphism to $\mathbf{X}$; see sections B2.3 and B3.2 of Sketches of an Elephant. Since $[C^{\mathrm{op}},\mathbf{X}]$ is the free cocompletion of $C$ in the $\mathbf{X}$-world, by internalizing the usual arguments it determines $C$ up to "Morita equivalence", i.e. equivalence in the bicategory of $\mathbf{X}$-internal profunctors. This can equivalently be stated as "up to internal weak Cauchy completion" in $\mathbf{X}$, i.e. the equivalence relation generated by internal functors that are "fully faithful" and "surjective up to splitting idempotents" in the internal language of $\mathbf{X}$. Upon passage to associated stacks, internally "fully faithful and essentially surjective" functors get inverted, so this becomes up to "Cauchy completion in the world of stacks", i.e. simultaneously splitting idempotents and stackifying.
If we additionally consider sheaves for an internal Grothendieck topology on $C$, we obtain another Grothendieck topos $\mathrm{Sh}_{\mathbf{X}}(C)$ that also comes with a bounded geometric morphism to $\mathbf{X}$, and the internal version of Giraud's theorem says that every bounded geometric morphism to $\mathbf{X}$ is of this form; see sections B3.3 and C2.4 of Sketches of an Elephant.
|
2019-10-22 12:28:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8957762718200684, "perplexity": 360.49697033670344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987817685.87/warc/CC-MAIN-20191022104415-20191022131915-00227.warc.gz"}
|
https://math.stackexchange.com/questions/1513228/how-to-test-cnf-for-satisfiability/1513311
|
# How to test CNF for satisfiability?
If we have a conjunction of clauses where only the following clause forms are allowed: $A_i, \quad \neg A_i, \quad A_i \vee \neg A_j, \quad \neg A_i \vee A_j$
Example: $A_1 \wedge (A_2 \vee \neg A_3) \wedge \neg A_3 \wedge (\neg A_2 \vee A_4)$. What's the easiest way to check for satisfiability or non-satisfiability?
• Given that the satisfiability problem is NP-complete, finding an algorithm that always works and at the same time is significantly better than trial-and-error would make you famous. – Arthur Nov 4 '15 at 18:38
• So is there no smarter solution than to check $2^n$ or more possibilities? (worst case) – fragant Nov 4 '15 at 18:41
• @Arthur It seems the OP is not asking about satisfiability of general conjunctive normal forms but only those with at most two disjuncts per clause, i.e., not SAT but 2-SAT. If so, then the problem is in P rather than being NP-complete. – Andreas Blass Nov 4 '15 at 20:09
• @Arthur In fact, the OP's description of the allowed clauses is even more restrictive than 2-SAT, since, in any clause with two literals, one literal is positive and the other negative. – Andreas Blass Nov 4 '15 at 20:11
Because of the very restricted sort of clauses that you allow, there is an efficient algorithm for deciding satisfiability. Make a list of all the propositional variables $A_i$ that are used in your formula. The algorithm will gradually determine that certain variables must be assigned the value T (i.e., true) in any satisfying truth assignment and that certain variables must be assigned F. The process begins by assigning T to any variable $A_i$ that occurs as one of the clauses in your formula, and assigning F to any $A_i$ such that $\neg A_i$ occurs as one of the clauses. After that, repeatedly carry out the following steps for each clause of the form $A_i\lor\neg A_j$ (or the equivalent form $(\neg A_j)\lor A_i$). If $A_j$ has already been assigned T, then also assign T to $A_i$; and if $A_i$ has already been assigned F, then also assign F to $A_j$. Repeat this process as long as any new assignments are produced. At the end, if some variable has been assigned both T and F, then your formula is not satisfiable; otherwise it is satisfiable. In fact, in the latter case, you can obtain a satisfying truth assignment by starting with the values that the algorithm has assigned to variables and then assigning T to any variable that didn't already get a value assigned.
Note that this algorithm depends very strongly on the very restricted sort of clauses that you allow. In the first place, if you allowed clauses with three disjuncts rather than only two, then, as Arthur wrote in the comments, the problem becomes NP-complete, so no such efficient algorithm is known (or is likely to exist). Furthermore, if you had allowed clauses of the form $A_i\lor A_j$ or $(\neg A_i)\lor(\neg A_j)$, then there is still an efficient algorithm for deciding satisfiability, but it requires steps for handling the new sorts of clauses, and the method for obtaining a satisfying truth assignment is not as simple as assigning T to all the variables that didn't get truth values earlier.
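Here is a minimal Python sketch of the propagation algorithm described above; the pair encoding of clauses is an assumption made for illustration:
def satisfiable(units, implications):
    # units: (i, value) pairs from singleton clauses A_i or ~A_i.
    # implications: (j, i) pairs, one per clause A_i v ~A_j,
    # read as: A_j = T forces A_i = T, and A_i = F forces A_j = F.
    assignment = {}
    for var, val in units:
        if assignment.get(var, val) != val:
            return False  # both A_i and ~A_i occur as unit clauses
        assignment[var] = val
    changed = True
    while changed:
        changed = False
        for j, i in implications:
            if assignment.get(j) is True and assignment.get(i) is None:
                assignment[i] = True
                changed = True
            if assignment.get(i) is False and assignment.get(j) is None:
                assignment[j] = False
                changed = True
            if assignment.get(j) is True and assignment.get(i) is False:
                return False  # the clause A_i v ~A_j is falsified
    # A satisfying assignment keeps these values and sets the rest to True.
    return True
# The example: units A_1, ~A_3; (A_2 v ~A_3) encodes as (3, 2); (~A_2 v A_4) as (2, 4)
print(satisfiable([(1, True), (3, False)], [(3, 2), (2, 4)]))  # True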
1. Remove all clauses that are always true, that is, any clause containing both a literal and its negation.
2. Remove super-clauses, that is, clauses containing all the literals of another clause; for example $(A\vee B\vee \neg C)\wedge(\neg C\vee B) \equiv (B\vee \neg C)$.
3. Assign the literals in singleton clauses the value $\top$.
4. Assign the literals that occur only negated, or only non-negated, the value $\top$.
5. Use a semantic tableau on the remaining clauses, applying the rules for 'and' and 'or'.
As for your example: $$A_1\wedge(A_2\vee\neg A_3)\wedge\neg A_3\wedge(\neg A_2\vee A_4)\\ \equiv A_1\wedge \neg A_3\wedge(\neg A_2\vee A_4)\\ \cong \top\wedge\top\wedge(\top\vee\top)\\ \equiv \top, \text{ so the formula is satisfiable.}$$
No tableau is needed in most examples; keep in mind that you may want to do step 4 more than once (removing clauses that contain $\top$ in between).
EDIT: This is a general solution to CNF-form.
|
2020-12-02 19:50:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.711661159992218, "perplexity": 306.18758132668364}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141715252.96/warc/CC-MAIN-20201202175113-20201202205113-00343.warc.gz"}
|
https://www.impan.pl/pl/wydawnictwa/czasopisma-i-serie-wydawnicze/bulletin-polish-acad-sci-math/all/53/4/85468/strong-transitivity-and-graph-maps
|
## Strong Transitivity and Graph Maps
### Volume 53 / 2005
Bulletin Polish Acad. Sci. Math. 53 (2005), 377-388. MSC: 37E25, 37B20. DOI: 10.4064/ba53-4-3
#### Abstract
We study the relation between transitivity and strong transitivity, introduced by W. Parry, for graph self-maps. We establish that if a graph self-map $f$ is transitive and the set of fixed points of $f^{k}$ is finite for each $k \geq 1$, then $f$ is strongly transitive. As a corollary, if a piecewise monotone graph self-map is transitive, then it is strongly transitive.
#### Authors
• Katsuya Yokoi, Department of Mathematics, Shimane University, Matsue, 690-8504, Japan
|
2022-11-26 22:33:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7498300671577454, "perplexity": 7478.6035035017585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446709929.63/warc/CC-MAIN-20221126212945-20221127002945-00193.warc.gz"}
|
https://everything.explained.today/Peano_axioms/
|
# Peano axioms explained
In mathematical logic, the Peano axioms, also known as the Dedekind–Peano axioms or the Peano postulates, are axioms for the natural numbers presented by the 19th century Italian mathematician Giuseppe Peano. These axioms have been used nearly unchanged in a number of metamathematical investigations, including research into fundamental questions of whether number theory is consistent and complete.
The need to formalize arithmetic was not well appreciated until the work of Hermann Grassmann, who showed in the 1860s that many facts in arithmetic could be derived from more basic facts about the successor operation and induction. In 1881, Charles Sanders Peirce provided an axiomatization of natural-number arithmetic.[1] In 1888, Richard Dedekind proposed another axiomatization of natural-number arithmetic, and in 1889, Peano published a simplified version of them as a collection of axioms in his book, The principles of arithmetic presented by a new method (Latin: Arithmetices principia, nova methodo exposita).
The nine Peano axioms contain three types of statements. The first axiom asserts the existence of at least one member of the set of natural numbers. The next four are general statements about equality; in modern treatments these are often not taken as part of the Peano axioms, but rather as axioms of the "underlying logic". The next three axioms are first-order statements about natural numbers expressing the fundamental properties of the successor operation. The ninth, final axiom is a second-order statement of the principle of mathematical induction over the natural numbers. A weaker first-order system called Peano arithmetic is obtained by explicitly adding the addition and multiplication operation symbols and replacing the second-order induction axiom with a first-order axiom schema.
## Formulation
When Peano formulated his axioms, the language of mathematical logic was in its infancy. The system of logical notation he created to present the axioms did not prove to be popular, although it was the genesis of the modern notation for set membership (∈, which comes from Peano's ε) and implication (⊃, which comes from Peano's reversed 'C'.) Peano maintained a clear distinction between mathematical and logical symbols, which was not yet common in mathematics; such a separation had first been introduced in the Begriffsschrift by Gottlob Frege, published in 1879. Peano was unaware of Frege's work and independently recreated his logical apparatus based on the work of Boole and Schröder.
The Peano axioms define the arithmetical properties of natural numbers, usually represented as a set N or $\mathbb{N}$.
The non-logical symbols for the axioms consist of a constant symbol 0 and a unary function symbol S.
The first axiom states that the constant 0 is a natural number:
1. 0 is a natural number.
The next four axioms describe the equality relation. Since they are logically valid in first-order logic with equality, they are not considered to be part of "the Peano axioms" in modern treatments:
2. For every natural number x, x = x (equality is reflexive).
3. For all natural numbers x and y, if x = y, then y = x (equality is symmetric).
4. For all natural numbers x, y and z, if x = y and y = z, then x = z (equality is transitive).
5. For all a and b, if b is a natural number and a = b, then a is also a natural number (the natural numbers are closed under equality).
The remaining axioms define the arithmetical properties of the natural numbers. The naturals are assumed to be closed under a single-valued "successor" function S:
6. For every natural number n, S(n) is a natural number.
7. For all natural numbers m and n, m = n if and only if S(m) = S(n) (S is an injection).
8. For every natural number n, S(n) = 0 is false (no natural number has 0 as its successor).
Peano's original formulation of the axioms used 1 instead of 0 as the "first" natural number. However, because 0 is the additive identity in arithmetic, most modern formulations of the Peano axioms start from 0.
Axioms 1, 6, 7, 8 define a unary representation of the intuitive notion of natural numbers: the number 1 can be defined as S(0), 2 as S(S(0)), etc. However, considering the notion of natural numbers as being defined by these axioms, axioms 1, 6, 7, 8 do not imply that the successor function generates all the natural numbers different from 0. Put differently, they do not guarantee that every natural number other than zero must succeed some other natural number.
The intuitive notion that each natural number can be obtained by applying successor sufficiently often to zero requires an additional axiom, which is sometimes called the axiom of induction.
The induction axiom is sometimes stated in the following form:
9. If K is a set such that 0 is in K, and for every natural number n, n being in K implies that S(n) is in K, then K contains every natural number.
In Peano's original formulation, the induction axiom is a second-order axiom. It is now common to replace this second-order principle with a weaker first-order induction scheme. There are important differences between the second-order and first-order formulations, as discussed in the section below.
## Arithmetic
The Peano axioms can be augmented with the operations of addition and multiplication and the usual total (linear) ordering on N. The respective functions and relations are constructed in set theory or second-order logic, and can be shown to be unique using the Peano axioms.
### Addition
Addition is a function that maps two natural numbers (two elements of N) to another one. It is defined recursively as:
$$\begin{aligned} a+0 &= a, && \text{(1)}\\ a+S(b) &= S(a+b). && \text{(2)} \end{aligned}$$
For example:
$$\begin{aligned} a+1 &= a+S(0) && \text{by definition}\\ &= S(a+0) && \text{using (2)}\\ &= S(a), && \text{using (1)}\\ a+2 &= a+S(1) && \text{by definition}\\ &= S(a+1) && \text{using (2)}\\ &= S(S(a)), && \text{using } a+1=S(a)\\ a+3 &= a+S(2) && \text{by definition}\\ &= S(a+2) && \text{using (2)}\\ &= S(S(S(a))), && \text{using } a+2=S(S(a))\\ &\text{etc.} \end{aligned}$$
The structure $(\mathbb{N}, +)$ is a commutative monoid with identity element 0. $(\mathbb{N}, +)$ is also a cancellative magma, and thus embeddable in a group. The smallest group embedding N is the integers.
### Multiplication
Similarly, multiplication is a function mapping two natural numbers to another one. Given addition, it is defined recursively as:
$$\begin{aligned} a \cdot 0 &= 0,\\ a \cdot S(b) &= a + (a \cdot b). \end{aligned}$$
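These two recursions run directly as code. The following is a minimal Python sketch under an assumed unary encoding of numerals; all names here are illustrative, not from the article:
def zero():
    return ("0",)
def succ(n):
    return ("S", n)
def add(a, b):
    # a + 0 = a;  a + S(b) = S(a + b)
    if b == zero():
        return a
    return succ(add(a, b[1]))
def mul(a, b):
    # a * 0 = 0;  a * S(b) = a + (a * b)
    if b == zero():
        return zero()
    return add(a, mul(a, b[1]))
def to_int(n):
    # Convert a unary numeral back to a Python int for inspection.
    count = 0
    while n != zero():
        count, n = count + 1, n[1]
    return count
two = succ(succ(zero()))
three = succ(two)
print(to_int(add(two, three)), to_int(mul(two, three)))  # 5 6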
It is easy to see that $S(0)$ (or "1", in the familiar language of decimal representation) is the multiplicative right identity:
$$a \cdot S(0) = a + (a \cdot 0) = a + 0 = a.$$
To show that $S(0)$ is also the multiplicative left identity requires the induction axiom due to the way multiplication is defined:
• $S(0)$ is the left identity of 0: $S(0) \cdot 0 = 0$.
• If $S(0)$ is the left identity of $a$ (that is, $S(0) \cdot a = a$), then $S(0)$ is also the left identity of $S(a)$: $S(0) \cdot S(a) = S(0) + S(0) \cdot a = S(0) + a = a + S(0) = S(a+0) = S(a)$.
Therefore, by the induction axiom, $S(0)$ is the multiplicative left identity of all natural numbers. Moreover, it can be shown that multiplication is commutative and distributes over addition:
$$a \cdot (b+c) = (a \cdot b) + (a \cdot c).$$
Thus, $(\mathbb{N}, +, 0, \cdot, S(0))$ is a commutative semiring.
### Inequalities
The usual total order relation ≤ on natural numbers can be defined as follows, assuming 0 is a natural number:
For all $a, b \in \mathbb{N}$, $a \le b$ if and only if there exists some $c \in \mathbb{N}$ such that $a + c = b$.
This relation is stable under addition and multiplication: for $a, b, c \in \mathbb{N}$, if $a \le b$, then:
• $a + c \le b + c$, and
• $a \cdot c \le b \cdot c$.
Thus, the structure $(\mathbb{N}, +, \cdot, 1, 0, \le)$ is an ordered semiring; because there is no natural number between 0 and 1, it is a discrete ordered semiring.
The axiom of induction is sometimes stated in the following form that uses a stronger hypothesis, making use of the order relation "≤":
For any predicate φ, if
• φ(0) is true, and
• for every $n \in \mathbb{N}$, if φ(k) is true for every $k \le n$, then φ(S(n)) is true,
then for every $n \in \mathbb{N}$, φ(n) is true.
This form of the induction axiom, called strong induction, is a consequence of the standard formulation, but is often better suited for reasoning about the ≤ order. For example, to show that the naturals are well-ordered (every nonempty subset of N has a least element) one can reason as follows. Let a nonempty $X \subseteq \mathbb{N}$ be given and assume X has no least element.
• Because 0 is the least element of N, it must be that $0 \notin X$.
• For any $n \in \mathbb{N}$, suppose for every $k \le n$, $k \notin X$. Then $S(n) \notin X$, for otherwise it would be the least element of X.
Thus, by the strong induction principle, for every $n \in \mathbb{N}$, $n \notin X$. Thus, $X \cap \mathbb{N} = \emptyset$, which contradicts X being a nonempty subset of N. Thus X has a least element.
## First-order theory of arithmetic
All of the Peano axioms except the ninth axiom (the induction axiom) are statements in first-order logic. The arithmetical operations of addition and multiplication and the order relation can also be defined using first-order axioms. The axiom of induction is second-order, since it quantifies over predicates (equivalently, sets of natural numbers rather than natural numbers), but it can be transformed into a first-order axiom schema of induction. Such a schema includes one axiom per predicate definable in the first-order language of Peano arithmetic, making it weaker than the second-order axiom. The reason that it is weaker is that the number of predicates in first-order language is countable, whereas the number of sets of natural numbers is uncountable. Thus, there exist sets that cannot be described in first-order language (in fact, most sets have this property).
First-order axiomatizations of Peano arithmetic have another technical limitation. In second-order logic, it is possible to define the addition and multiplication operations from the successor operation, but this cannot be done in the more restrictive setting of first-order logic. Therefore, the addition and multiplication operations are directly included in the signature of Peano arithmetic, and axioms are included that relate the three operations to each other.
The following list of axioms (along with the usual axioms of equality), which contains six of the seven axioms of Robinson arithmetic, is sufficient for this purpose:
1. $\forall x\; (0 \ne S(x))$
2. $\forall x, y\; (S(x) = S(y) \Rightarrow x = y)$
3. $\forall x\; (x + 0 = x)$
4. $\forall x, y\; (x + S(y) = S(x + y))$
5. $\forall x\; (x \cdot 0 = 0)$
6. $\forall x, y\; (x \cdot S(y) = x \cdot y + x)$
In addition to this list of numerical axioms, Peano arithmetic contains the induction schema, which consists of a recursively enumerable set of axioms. For each formula in the language of Peano arithmetic, the first-order induction axiom for φ is the sentence
$$\forall \bar{y}\; \Big( \big( \varphi(0,\bar{y}) \land \forall x\, \big( \varphi(x,\bar{y}) \Rightarrow \varphi(S(x),\bar{y}) \big) \big) \Rightarrow \forall x\, \varphi(x,\bar{y}) \Big)$$
where $\bar{y}$ is an abbreviation for $y_1, \ldots, y_k$. The first-order induction schema includes every instance of the first-order induction axiom, that is, it includes the induction axiom for every formula φ.
### Equivalent axiomatizations
There are many different, but equivalent, axiomatizations of Peano arithmetic. While some axiomatizations, such as the one just described, use a signature that only has symbols for 0 and the successor, addition, and multiplications operations, other axiomatizations use the language of ordered semirings, including an additional order relation symbol. One such axiomatization begins with the following axioms that describe a discrete ordered semiring.
1. $\forall x, y, z\; ((x + y) + z = x + (y + z))$, i.e., addition is associative.
2. $\forall x, y\; (x + y = y + x)$, i.e., addition is commutative.
3. $\forall x, y, z\; ((x \cdot y) \cdot z = x \cdot (y \cdot z))$, i.e., multiplication is associative.
4. $\forall x, y\; (x \cdot y = y \cdot x)$, i.e., multiplication is commutative.
5. $\forall x, y, z\; (x \cdot (y + z) = (x \cdot y) + (x \cdot z))$, i.e., multiplication distributes over addition.
6. $\forall x\; (x + 0 = x \land x \cdot 0 = 0)$, i.e., zero is an identity for addition, and an absorbing element for multiplication (actually superfluous).
7. $\forall x\; (x \cdot 1 = x)$, i.e., one is an identity for multiplication.
8. $\forall x, y, z\; (x < y \land y < z \Rightarrow x < z)$, i.e., the '<' operator is transitive.
9. $\forall x\; (\neg (x < x))$, i.e., the '<' operator is irreflexive.
10. $\forall x, y\; (x < y \lor x = y \lor y < x)$, i.e., the ordering satisfies trichotomy.
11. $\forall x, y, z\; (x < y \Rightarrow x + z < y + z)$, i.e., the ordering is preserved under addition of the same element.
12. $\forall x, y, z\; (0 < z \land x < y \Rightarrow x \cdot z < y \cdot z)$, i.e., the ordering is preserved under multiplication by the same positive element.
13. $\forall x, y\; (x < y \Rightarrow \exists z\; (x + z = y))$, i.e., given any two distinct elements, the larger is the smaller plus another element.
14. $0 < 1 \land \forall x\; (x > 0 \Rightarrow x \ge 1)$, i.e., zero and one are distinct and there is no element between them. In other words, 0 is covered by 1, which suggests that natural numbers are discrete.
15. $\forall x\; (x \ge 0)$, i.e., zero is the minimum element.
The theory defined by these axioms is known as PA⁻; the theory PA is obtained by adding the first-order induction schema. An important property of PA⁻ is that any structure $M$ satisfying this theory has an initial segment (ordered by $\le$) isomorphic to $\mathbb{N}$. Elements in that segment are called standard elements, while other elements are called nonstandard elements.
## Models
A model of the Peano axioms is a triple $(N, 0, S)$, where N is a (necessarily infinite) set, $0 \in N$, and $S \colon N \to N$ satisfies the axioms above. Dedekind proved in his 1888 book, The Nature and Meaning of Numbers (German: Was sind und was sollen die Zahlen?, i.e., "What are the numbers and what are they good for?") that any two models of the Peano axioms (including the second-order induction axiom) are isomorphic. In particular, given two models $(N_A, 0_A, S_A)$ and $(N_B, 0_B, S_B)$ of the Peano axioms, there is a unique homomorphism $f \colon N_A \to N_B$ satisfying
$$\begin{aligned} f(0_A) &= 0_B,\\ f(S_A(n)) &= S_B(f(n)), \end{aligned}$$
and it is a bijection. This means that the second-order Peano axioms are categorical. This is not the case with any first-order reformulation of the Peano axioms, however.
### Set-theoretic models
See main article: Set-theoretic definition of natural numbers. The Peano axioms can be derived from set theoretic constructions of the natural numbers and axioms of set theory such as ZF.[2] The standard construction of the naturals, due to John von Neumann, starts from a definition of 0 as the empty set, ∅, and an operator s on sets defined as:
$$s(a) = a \cup \{a\}.$$
The set of natural numbers N is defined as the intersection of all sets closed under s that contain the empty set. Each natural number is equal (as a set) to the set of natural numbers less than it:
$$\begin{aligned} 0 &= \emptyset,\\ 1 &= s(0) = s(\emptyset) = \emptyset \cup \{\emptyset\} = \{\emptyset\} = \{0\},\\ 2 &= s(1) = s(\{0\}) = \{0\} \cup \{\{0\}\} = \{0, \{0\}\} = \{0, 1\},\\ 3 &= s(2) = s(\{0, 1\}) = \{0, 1\} \cup \{\{0, 1\}\} = \{0, 1, \{0, 1\}\} = \{0, 1, 2\}, \end{aligned}$$
and so on. The set N together with 0 and the successor function satisfies the Peano axioms.
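The construction is concrete enough to execute. Here is an illustrative Python sketch using frozensets; the encoding is an assumption made for demonstration, not part of the article:
def s(a):
    # von Neumann successor: s(a) = a ∪ {a}
    return frozenset(a | {a})
zero = frozenset()
one = s(zero)   # {0}
two = s(one)    # {0, 1}
three = s(two)  # {0, 1, 2}
print(len(three))                 # 3: each numeral has as many elements as its value
print(two in three, two < three)  # True True: membership and strict-subset order agree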
Peano arithmetic is equiconsistent with several weak systems of set theory. One such system is ZFC with the axiom of infinity replaced by its negation. Another such system consists of general set theory (extensionality, existence of the empty set, and the axiom of adjunction), augmented by an axiom schema stating that a property that holds for the empty set and holds of an adjunction whenever it holds of the adjunct must hold for all sets.
### Interpretation in category theory
The Peano axioms can also be understood using category theory. Let C be a category with terminal object 1C, and define the category of pointed unary systems, US1(C) as follows:
• The objects of US1(C) are triples $(X, 0_X, S_X)$ where X is an object of C, and $0_X \colon 1_C \to X$ and $S_X \colon X \to X$ are C-morphisms.
• A morphism φ : (X, 0X, SX) → (Y, 0Y, SY) is a C-morphism $\varphi \colon X \to Y$ with $\varphi \circ 0_X = 0_Y$ and $\varphi \circ S_X = S_Y \circ \varphi$.
Then C is said to satisfy the Dedekind–Peano axioms if US1(C) has an initial object; this initial object is known as a natural number object in C. If $(N, 0, S)$ is this initial object, and $(X, 0_X, S_X)$ is any other object, then the unique map $u \colon (N, 0, S) \to (X, 0_X, S_X)$ is such that
$$\begin{aligned} u(0) &= 0_X,\\ u(S(x)) &= S_X(u(x)). \end{aligned}$$
This is precisely the recursive definition of 0X and SX.
## Nonstandard models
Although the usual natural numbers satisfy the axioms of PA, there are other models as well (called "non-standard models"); the compactness theorem implies that the existence of nonstandard elements cannot be excluded in first-order logic. The upward Löwenheim–Skolem theorem shows that there are nonstandard models of PA of all infinite cardinalities. This is not the case for the original (second-order) Peano axioms, which have only one model, up to isomorphism. This illustrates one way the first-order system PA is weaker than the second-order Peano axioms.
When interpreted as a proof within a first-order set theory, such as ZFC, Dedekind's categoricity proof for PA shows that each model of set theory has a unique model of the Peano axioms, up to isomorphism, that embeds as an initial segment of all other models of PA contained within that model of set theory. In the standard model of set theory, this smallest model of PA is the standard model of PA; however, in a nonstandard model of set theory, it may be a nonstandard model of PA. This situation cannot be avoided with any first-order formalization of set theory.
It is natural to ask whether a countable nonstandard model can be explicitly constructed. The answer is affirmative as Skolem in 1933 provided an explicit construction of such a nonstandard model. On the other hand, Tennenbaum's theorem, proved in 1959, shows that there is no countable nonstandard model of PA in which either the addition or multiplication operation is computable. This result shows it is difficult to be completely explicit in describing the addition and multiplication operations of a countable nonstandard model of PA. There is only one possible order type of a countable nonstandard model. Letting ω be the order type of the natural numbers, ζ be the order type of the integers, and η be the order type of the rationals, the order type of any countable nonstandard model of PA is $\omega + \zeta \cdot \eta$, which can be visualized as a copy of the natural numbers followed by a dense linear ordering of copies of the integers.
### Overspill
A cut in a nonstandard model M is a nonempty subset C of M so that C is downward closed (x < y and yCxC) and C is closed under successor. A proper cut is a cut that is a proper subset of M. Each nonstandard model has many proper cuts, including one that corresponds to the standard natural numbers. However, the induction scheme in Peano arithmetic prevents any proper cut from being definable. The overspill lemma, first proved by Abraham Robinson, formalizes this fact.
## Consistency
When the Peano axioms were first proposed, Bertrand Russell and others agreed that these axioms implicitly defined what we mean by a "natural number".[3] Henri Poincaré was more cautious, saying they only defined natural numbers if they were consistent; if there is a proof that starts from just these axioms and derives a contradiction such as 0 = 1, then the axioms are inconsistent, and don't define anything.[4] In 1900, David Hilbert posed the problem of proving their consistency using only finitistic methods as the second of his twenty-three problems. In 1931, Kurt Gödel proved his second incompleteness theorem, which shows that such a consistency proof cannot be formalized within Peano arithmetic itself.
Although it is widely claimed that Gödel's theorem rules out the possibility of a finitistic consistency proof for Peano arithmetic, this depends on exactly what one means by a finitistic proof. Gödel himself pointed out the possibility of giving a finitistic consistency proof of Peano arithmetic or stronger systems by using finitistic methods that are not formalizable in Peano arithmetic, and in 1958, Gödel published a method for proving the consistency of arithmetic using type theory. In 1936, Gerhard Gentzen gave a proof of the consistency of Peano's axioms, using transfinite induction up to an ordinal called ε0. Gentzen explained: "The aim of the present paper is to prove the consistency of elementary number theory or, rather, to reduce the question of consistency to certain fundamental principles". Gentzen's proof is arguably finitistic, since the transfinite ordinal ε0 can be encoded in terms of finite objects (for example, as a Turing machine describing a suitable order on the integers, or more abstractly as consisting of the finite trees, suitably linearly ordered). Whether or not Gentzen's proof meets the requirements Hilbert envisioned is unclear: there is no generally accepted definition of exactly what is meant by a finitistic proof, and Hilbert himself never gave a precise definition.
The vast majority of contemporary mathematicians believe that Peano's axioms are consistent, relying either on intuition or on the acceptance of a consistency proof such as Gentzen's. A small number of philosophers and mathematicians, some of whom also advocate ultrafinitism, reject Peano's axioms because accepting them amounts to accepting the infinite collection of natural numbers. In particular, addition (including the successor function) and multiplication are assumed to be total. Curiously, there are self-verifying theories that are similar to PA but have subtraction and division instead of addition and multiplication, axiomatized in such a way as to avoid proving sentences that correspond to the totality of addition and multiplication, but which are still able to prove all true Π1 theorems of PA, and yet can be extended to a consistent theory that proves its own consistency (stated as the non-existence of a Hilbert-style proof of "0 = 1").
## References
### Sources
• Davis, Martin (1974). Computability. Notes by Barry Jacobs.
• Dedekind, Richard (1888). Was sind und was sollen die Zahlen? [What are and what should the numbers be?]. Vieweg.
• Fritz (1952). Bertrand Russell's Construction of the External World.
• Gentzen, Gerhard (1936). "Die Widerspruchsfreiheit der reinen Zahlentheorie". Mathematische Annalen. 112: 132–213. doi:10.1007/bf01565428. Reprinted in English translation in his 1969 Collected Works, M. E. Szabo, ed.
• Gödel, Kurt (1931). "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I". Monatshefte für Mathematik. 38: 173–198. doi:10.1007/bf01700692. See On Formally Undecidable Propositions of Principia Mathematica and Related Systems for details on English translations. Archived copy: https://web.archive.org/web/20180411113347/http://www.w-k-essler.de/pdfs/goedel.pdf
• Gödel, Kurt (1958). "Über eine bisher noch nicht benützte Erweiterung des finiten Standpunktes". Dialectica. 12: 280–287. doi:10.1111/j.1746-8361.1958.tb01464.x. Reprinted in English translation in Gödel's Collected Works, Vol. II (1990), Solomon Feferman et al., eds., Oxford University Press.
• Grassmann, Hermann (1861). Lehrbuch der Arithmetik [A tutorial in arithmetic]. Enslin.
• Gray, Jeremy (2013). "The Essayist". Henri Poincaré: A Scientific Biography. Princeton University Press. p. 133. ISBN 978-0-691-15271-4. https://books.google.com/books?id=w2Tya9gOKqEC&pg=PA133
• Harsanyi, John C. (1983). "Mathematics, the empirical facts and logical necessity". Erkenntnis. 19: 167–192. doi:10.1007/978-94-015-7676-5_8.
• Hatcher, William S. (2014) [1982]. The Logical Foundations of Mathematics. Elsevier. ISBN 978-1-4831-8963-5. Derives the Peano axioms (called S) from several axiomatic set theories and from category theory.
• Hermes, Hans (1973). Introduction to Mathematical Logic. Hochschultext. Springer. ISBN 3540058192. ISSN 1431-4657.
• Hilbert, David (1902). "Mathematische Probleme" [Mathematical problems]. Translated by Maby Winton. Bulletin of the American Mathematical Society. 8: 437–479. doi:10.1090/s0002-9904-1902-00923-3.
• Kaye, Richard (1991). Models of Peano Arithmetic. Oxford University Press. ISBN 0-19-853213-X.
• Landau, Edmund (1965). Grundlagen der Analysis. AMS Chelsea Publishing. ISBN 978-0-8284-0141-8. Derives the basic number systems from the Peano axioms; English/German vocabulary included.
• Mendelson, Elliott (2009). Introduction to Mathematical Logic (5th ed.). Taylor & Francis. ISBN 9781584888765.
• Partee, Barbara; ter Meulen, Alice; Wall, Robert (2012). Mathematical Methods in Linguistics. Springer. ISBN 978-94-009-2213-6.
• Peirce, C. S. (1881). "On the Logic of Number". American Journal of Mathematics. 4 (1): 85–95. doi:10.2307/2369151. JSTOR 2369151.
• Shields, Paul (1997). "Peirce's Axiomatization of Arithmetic". In Houser, Nathan; Roberts, Don D.; Van Evra, James (eds.). Studies in the Logic of Charles Sanders Peirce. Indiana University Press. pp. 43–52. ISBN 0-253-33020-3. https://books.google.com/books?id=pWjOg-zbtMAC&pg=PA43
• Suppes, Patrick (1960). Axiomatic Set Theory. Dover. ISBN 0-486-61630-4. Derives the Peano axioms from ZFC.
• Tarski, Alfred; Givant, Steven (1987). A Formalization of Set Theory without Variables. AMS Colloquium Publications, vol. 41. American Mathematical Society. ISBN 978-0-8218-1041-5.
• van Heijenoort, Jean (1967). From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931. Harvard University Press. ISBN 9780674324497. Contains translations of the following two papers, with valuable commentary:
  • Dedekind, Richard (1890). "Letter to Keferstein". pp. 98–103. On p. 100, he restates and defends his axioms of 1888.
  • Peano, Giuseppe (1889). Arithmetices principia, nova methodo exposita [The principles of arithmetic, presented by a new method]. pp. 83–97. An excerpt of the treatise where Peano first presented his axioms and recursively defined arithmetical operations.
• Willard, Dan E. (2001). "Self-verifying axiom systems, the incompleteness theorem and related reflection principles". The Journal of Symbolic Logic. 66 (2): 536–596. doi:10.2307/2695030. JSTOR 2695030.
## Introduction
Antimicrobial resistance has re-emerged as one of the major challenges to public health worldwide, especially due to the spread of multidrug-resistant (MDR) or even pan-resistant Gram-negative pathogenic bacteria1. The intrinsic drug resistance shown by these bacteria can be largely attributed to the primary barrier imposed by their membranes, endowed with chromosomally encoded molecular filters (porins) and drug efflux pumps2. Among these, MDR efflux pumps transport a wide range of structurally dissimilar substrates, including antibiotics from various classes, posing a major concern in clinical therapy3, 4. In particular, the Resistance-Nodulation-cell Division (RND) superfamily members are notorious for their extremely wide substrate specificity3, 5,6,7,8 and are considered to be involved in both intrinsic and acquired MDR. The RND pump complexes span the entire periplasmic space from the inner membrane (IM) to the outer membrane (OM) by forming tripartite systems9,10,11,12,13 comprising an RND transporter protein embedded in the IM, an adaptor protein (a.k.a. membrane fusion protein, MFP) located in the periplasmic space, and an outer-membrane protein (OMP) constituting a long alpha-helical and beta-barrel tunnel (Fig. 1).
AcrB is the best characterized RND transporter8, and its structure has been solved by several labs both without and with bound substrates and inhibitors14,15,16,17,18. Structurally, AcrB is an asymmetric trimer resembling a jellyfish, with each protomer comprising a total of 3 domains8 (Fig. 1): (i) a trans-membrane domain consisting of 12 α-helices embedded in the IM, where energy conversion takes place via proton coupling; (ii) a pore (porter) domain located in the periplasm, where substrate recruitment and transport occur; and (iii) a periplasmic funnel domain, which connects the RND transporter to the OMP via the assembly of MFPs19 in the constituted pump. It has been proposed that substrate transport in these proteins follows a "functional rotation mechanism" (Fig. 2) in which concerted, but not necessarily synchronous, cycling of the protomers occurs through the asymmetric states identified so far: Loose (L) (a.k.a. Access), in which substrates bind to a peripheral site named the access pocket (AP); Tight (T) (a.k.a. Binding), in which substrates bind to a deeper pocket (DP); and Open (O) (a.k.a. Extrusion), in which the substrate is released into the central funnel leading towards the OMP14, 20, 21. The two pockets (Fig. 2) were previously identified in AcrB as the binding sites responsible for the recognition and selectivity of different molecules15, 22,23,24. Namely, the AP and DP have been hypothesized to be responsible for the recognition of high-molecular-mass and low-molecular-mass compounds, respectively15. They are separated by a G-rich (a.k.a. switch) loop whose flexibility has been shown to be important for the transport of high-molecular-mass molecules15, 16.
AcrD is a close homolog of AcrB in Escherichia coli, with an overall sequence identity (similarity) of nearly 66% (80%) (Supplementary Figs S1–S2). These moderate differences nevertheless affect their substrate specificity patterns, which overlap only partially (Table 1 and Supplementary Fig. S3). While certain substrates, like most of the beta-lactam antibiotics, are common, macrolides and tetracyclines are transported by AcrB but not by AcrD, which instead exports aminoglycosides, in turn not recognized by AcrB. Categorizing the typical substrates of the two transporters on the basis of their physicochemical properties (Table 2) highlights that they are essentially hydrophobic for AcrB and hydrophilic for AcrD, while both transporters might shuttle out amphiphilic compounds.
However, such a simplistic classification helps neither to improve our basic knowledge of RND transporters nor drug design efforts. A deeper level of information would be highly desirable, and a first step towards this goal consists in mapping the differences in substrate specificities between these two proteins onto defined structural, chemical and dynamic features of their putative substrate-binding pockets, a link that has not been traced yet. From a domain-wise perspective, two previous studies attempted to identify substrate recognition site(s) in these RND pumps by using chimeric analysis22, 25. The importance of periplasmic loop regions in RND pumps was pointed out by Elkins and Nikaido25, Mao et al.26, Eda et al.27 and Kobayashi et al.22. In particular, Kobayashi et al.22 identified a few residues in the AP as potential determinants of specificity towards negatively charged beta-lactams (aztreonam, carbenicillin, and sulbenicillin). Namely, by replacing three residues in AcrB with the corresponding ones in AcrD (Q569R, I626R, and E673G), the authors were able to confer to the former transporter the ability to recognize anionic beta-lactams, typical substrates of the latter protein.
However, these findings concerning the overall location of the sites responsible for substrate recognition were restricted to a subclass of compounds, and no comprehensive molecular-level rationale for the different specificities of AcrB and AcrD has been proposed yet. This void of knowledge traces back mainly to the lack of experimental structures of AcrD and of co-crystal structures of any RND transporter with compounds belonging to the beta-lactam, fluoroquinolone or aminoglycoside classes. On the other hand, computational modeling, in particular all-atom MD simulation, has already proven to be insightful in addressing the molecular mechanisms of RND transporters8, 28,29,30,31,32,33,34,35,36,37,38. Moreover, given the overall good sequence identity and similarity between AcrB and AcrD of E. coli, reliable computational modeling of AcrD and related structure-based studies are possible.
Prompted by this consideration, and with the aim of explaining the substrate specificities of AcrB and AcrD in a deeper and more informative way — in terms of how the properties of the pockets match those of their (substrate) binders — we performed a systematic comparison of the physicochemical nature of the main putative substrate binding sites (AP and DP) (Fig. 2) between AcrB and AcrD. Importantly, besides crystal structures and homology models, we included for both transporters conformations extracted from extensive multi-copy μs-long molecular dynamics (MD) simulations. This robust computational setup allowed us to extend structure–function analysis to account for the subtle interplay among solvent behavior, charge distribution, and the structural changes associated with the time evolution of the system under physiological-like conditions. We characterized and compared molecular properties (pocket descriptors) of the binding pockets such as their flexibility, accessible binding volume, lipophilic index, electrostatic potential, hydration and multi-functional sites. In particular, we identified dynamic features not inferable from simple sequence analysis, such as the positional flexibility of a loop lining the base of the AP and likely playing a key role in regulating access and transport of substrates in these RND transporters. We also pinpointed specific differences in the lipophilic and electrostatic potentials in the binding pockets of these transporters, which complement the physicochemical properties of the known substrates of these pumps and persist when the dynamics of the pockets is accounted for. In particular, in AcrB an electrostatic funnel with a negative gradient leading from the periplasm to the centre of the AP shows up in our configurations, whilst it is absent in AcrD. Additionally, the DP of AcrB features a more lipophilic character compared to AcrD, pointing to the involvement of this pocket as a lipophilicity-based selectivity filter.
The correlation of the different specificity patterns of these two transporters to the dynamic physicochemical and topographical properties of their multi-functional recognition sites could be highly informative for drug design attempts8.
## Results
### Sequence Assessment
Bacteria have an inherent ability to change their genetic makeup and adapt in response to adverse environmental stress. In order to extend the results of our study to Acr proteins from E. coli strains other than the specific one used here, we determined the presence and distribution of conserved regions in the AP and DP of all the AcrB and AcrD sequences available in UniProtKB (October 2016) (http://www.uniprot.org/blast/). Shannon entropy39 was computed for both Acr proteins and expressed in terms of the H factor. This descriptor is commonly used to quantify sequence conservation, considering the probability of occurrence of an amino acid at each site in a sequence alignment. Multiple sequence alignment and Shannon entropy analysis together pointed out an overall high sequence conservation of these proteins in all E. coli strains. In particular, all the H factors associated with the Shannon entropy were lower than 1, indicating a high degree of conservation.
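As an illustration, per-column Shannon entropy of a multiple sequence alignment can be computed along the following lines (a minimal sketch; the toy sequences and the choice to skip gaps are assumptions for illustration, not details from this study):

```python
import math
from collections import Counter

def column_entropy(column):
    """Shannon entropy H = -sum(p * log2 p) of one alignment column.

    Gaps are skipped here; whether to count them is an analysis choice.
    """
    residues = [c for c in column if c != "-"]
    counts = Counter(residues)
    n = len(residues)
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

# 'sequences' would hold aligned, equal-length strings read from a
# hypothetical alignment file, e.g. of AcrB orthologs.
sequences = [
    "MAN-FSRF",
    "MAN-FARF",
    "MSNTFSRF",
]
entropies = [column_entropy(col) for col in zip(*sequences)]
# A low H (< 1) at a position indicates strong conservation there.
print([round(h, 2) for h in entropies])
```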
### Molecular dynamics simulations of AcrB and AcrD
MD simulations of AcrB were performed using the high-resolution asymmetric crystal structure (PDB ID: 4DX516), with the protomers in the Loose, Tight, and Open conformations, respectively. Since the structure of AcrD has not been experimentally resolved yet, we generated a homology model of AcrD based on the same AcrB crystal structure (4DX5) as its template using Modeller 9.1340 (see Methods section). According to RMSD analyses of the complete trimeric protein backbone and of each protomer with respect to the initial structure (Supplementary Fig. S4), we determined an equilibration time of ~0.5 μs to be most suitable for both AcrB and AcrD. The cluster representatives (see Supplementary Figures S5 and S6 for the sampling of AP and DP clusters in AcrB and AcrD, respectively, along the MD simulations) extracted from the equilibrated trajectories of AcrB and AcrD were used to characterize the distribution of accessible binding volume, molecular lipophilicity, electrostatic potential and multi-functional sites (MFS). Hydration analyses were performed on the equilibrium trajectories. Although the level of confidence in homology models cannot be as high as that in experimental structures, we thoroughly validated the AcrD structures using state-of-the-art bioinformatic tools (details are reported in Methods and in the Supplementary Information; see in particular Supplementary Table S1). The stability of the AcrD model, as well as its suitability for the subsequent analyses, was validated in two independent μs-long MD simulations. This multiple validation of the AcrD model offers fairly good confidence that the clusters extracted from the MD trajectories are representative of the configurations explored by the system. In the following, we present the results of these analyses on the AP and DP.
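For readers wanting to reproduce this kind of equilibration check, the backbone RMSD relative to the starting structure can be computed, for example, with MDTraj (a sketch under assumed file names; the actual trajectories and the tooling used by the authors are not specified here):

```python
import mdtraj as md

# Hypothetical inputs: a production trajectory and its topology.
traj = md.load("acrb_production.xtc", top="acrb_start.pdb")
backbone = traj.top.select("backbone")

# RMSD of every frame against frame 0, using backbone atoms only.
# md.rmsd optimally superposes each frame onto the reference internally.
rmsd_nm = md.rmsd(traj, traj, frame=0, atom_indices=backbone)

# A simple equilibration heuristic: discard frames before the RMSD
# plateaus (here, ~0.5 us into the run for both transporters).
for t, r in zip(traj.time[:5], rmsd_nm[:5]):
    print(f"t = {t:8.1f} ps  RMSD = {r * 10:.2f} A")
```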
### Access Pocket of the Loose protomer
The AP is a proximally located pocket, close to the periplasm, in the putative substrate transport pathway of RND pumps (Fig. 2), and is also likely the site of recognition for high-molecular-mass compounds15. In order to identify whether any of the physicochemical properties could be used to differentiate between the APs, and in addition to determine which substrates could likely be recognized by these APs, we calculated the following descriptors on the AP of the Loose protomer in both AcrB and AcrD.
#### Pocket Volume and Shape
The extent and shape of the three-dimensional space that a ligand is allowed to explore to find its optimal binding pose in any putative binding pocket are governed by multiple factors, the primary one being the accessible binding volume. Especially with promiscuous proteins like the RND transporters, one would expect that a large binding site with a reasonable degree of plasticity will facilitate binding of molecules with a wide range of sizes. The pocket volumes (Fig. 3a and c and Supplementary Table S2) of the pre-MD structures (PDB code 4DX516 for AcrB and, for AcrD, the final optimized model used as the starting configuration for MD simulations) and of the clusters extracted from MD did not, per se, show any relevant differences. Moreover, both the volumes and the minimal projection areas of the AP in AcrB and AcrD are much larger than those of the largest substrates transported by these pumps41 (Supplementary Table S2). However, principal component analysis performed on the equilibrium MD trajectories revealed a slightly different flexibility of the AP in the two proteins: the pocket in AcrB showed larger rearrangements in the loop residues 675 to 678 lining the base of the AP (Fig. 3b), whereas in AcrD (Fig. 3d) this region displayed lower flexibility, as did the entire AP.
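A porcupine-style analysis like the one behind Fig. 3b,d boils down to PCA on the aligned pocket coordinates. A minimal sketch follows; the selection string, file names and residue range are placeholders, and scikit-learn stands in for whatever PCA tool was actually used:

```python
import mdtraj as md
import numpy as np
from sklearn.decomposition import PCA

traj = md.load("acrb_equilibrated.xtc", top="acrb_start.pdb")

# Hypothetical selection of AP-lining residues (placeholder indices).
pocket = traj.top.select("resid 660 to 690 and name CA")
traj.superpose(traj, frame=0, atom_indices=pocket)

# Flatten (n_frames, n_atoms, 3) -> (n_frames, 3 * n_atoms) for PCA.
coords = traj.xyz[:, pocket, :].reshape(traj.n_frames, -1)
pca = PCA(n_components=1).fit(coords)

# First eigenvector reshaped to one 3D arrow per atom: long arrows on
# residues 675-678 would reproduce the flexible "bottom-loop" signal.
arrows = pca.components_[0].reshape(-1, 3)
amplitude = np.linalg.norm(arrows, axis=1)
print("explained variance:", pca.explained_variance_ratio_[0])
print("most mobile pocket atom index:", pocket[np.argmax(amplitude)])
```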
#### Molecular Lipophilicity Potential (MLP) and Lipophilic Index (LI)
The calculation of the LIs of the AP showed this pocket to be more lipophilic in AcrB than in AcrD (Table 3 and Supplementary Fig. S7). This is compatible with the (at least partially) hydrophobic character required of AcrB substrates. However, the specific chemical environment of the AP is neither entirely hydrophobic nor entirely polar in either protein (Fig. 4a,b). Interestingly, different conformations of the AP displayed similar values of the LIs, so that the relatively higher lipophilicity of AcrB with respect to AcrD turned out to be a feature robust to the flexibility of the AP (Table 3). According to the MLP calculated for the representatives of the most populated structural clusters, regions of relatively high lipophilicity for AcrB were located close to the hydrophobic trap (HP-trap) lined by residues F136, F178, F610, F615 and F62817, 29, and in a region at the border with the putative entrance known as the Vestibule42. In contrast, no predominant spots were recognizable for AcrD (Fig. 4a,b).
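The MLP/LI machinery used in the paper is a dedicated tool, but the qualitative idea can be illustrated with a much cruder proxy: a hydropathy average over pocket-lining residues (standard Kyte–Doolittle values). This is an illustration of the concept only, not the authors' method, and the residue lists are hypothetical:

```python
# Kyte-Doolittle hydropathy values (standard scale).
KD = {
    "ILE": 4.5, "VAL": 4.2, "LEU": 3.8, "PHE": 2.8, "CYS": 2.5,
    "MET": 1.9, "ALA": 1.8, "GLY": -0.4, "THR": -0.7, "SER": -0.8,
    "TRP": -0.9, "TYR": -1.3, "PRO": -1.6, "HIS": -3.2, "GLU": -3.5,
    "GLN": -3.5, "ASP": -3.5, "ASN": -3.5, "LYS": -3.9, "ARG": -4.5,
}

def pocket_hydropathy(residues):
    """Crude lipophilicity proxy: mean hydropathy of pocket residues.

    A higher value suggests a more lipophilic pocket environment.
    """
    return sum(KD[r] for r in residues) / len(residues)

# Hypothetical pocket-lining residue lists for the two transporters;
# AcrB's includes the HP-trap phenylalanines mentioned in the text.
acrb_pocket = ["PHE", "PHE", "PHE", "PHE", "PHE", "VAL", "ILE", "SER"]
acrd_pocket = ["PHE", "PHE", "ARG", "SER", "THR", "GLN", "ILE", "ASN"]
print(f"AcrB pocket proxy: {pocket_hydropathy(acrb_pocket):+.2f}")
print(f"AcrD pocket proxy: {pocket_hydropathy(acrd_pocket):+.2f}")
```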
#### Electrostatic Potential
Their long range makes electrostatic interactions a vital component of molecular recognition. The electrostatic potentials calculated on the molecular surfaces of the AP of AcrB and AcrD are shown in Fig. 4c,d. The left and right panels collect the results for the pre-MD structures and for the most populated cluster of each system, respectively. Concerning the pre-MD structures (Fig. 4c), positively charged patches were predominant within the AP of AcrD, while the same region of AcrB featured a more even distribution of positive vs. negative charges. Importantly, the partial closure of the pockets seen in the MD simulations of the apo proteins did not influence these main findings (Supplementary Fig. S8).
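Production-quality surface potentials of this kind come from Poisson–Boltzmann solvers such as APBS. Purely as a concept sketch, the bare Coulomb potential evaluated on a grid from a handful of point charges looks like this (toy charges and positions, uniform dielectric, no solvent screening — all assumptions, not the paper's protocol):

```python
import numpy as np

# Toy point charges (elementary charge units) and positions (nm),
# standing in for charged pocket residues; purely illustrative.
charges = np.array([+1.0, -1.0, +1.0])           # e.g. Arg, Glu, Lys
positions = np.array([[0.0, 0.0, 0.0],
                      [0.5, 0.0, 0.0],
                      [0.0, 0.5, 0.3]])

def coulomb_potential(point, eps_r=4.0):
    """Unscreened Coulomb potential at 'point' (arbitrary units).

    eps_r is a uniform dielectric; a real calculation would solve the
    Poisson-Boltzmann equation with distinct protein/solvent regions.
    """
    r = np.linalg.norm(positions - point, axis=1)
    return np.sum(charges / (eps_r * r))

# Evaluate on a small grid above the charges to map +/- patches.
axis = np.linspace(-1.0, 1.0, 5)
grid = [(x, y, 0.6) for x in axis for y in axis]
phi = np.array([coulomb_potential(np.array(p)) for p in grid])
print(phi.reshape(5, 5).round(2))
```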
#### Hydration Analysis
Characterizing the hydration profiles around the binding pockets of these proteins helps to understand, in a dynamic manner, the molecular mechanism of interaction of the water molecules penetrating the pocket. The radial distribution function (RDF) profiles around the AP residues of AcrB and AcrD were rather similar, with only a minor difference in the intensity of hydration (Fig. 5a). The first solvation shell was observed around 1.9 Å in both proteins, with a slightly reduced probability in AcrB. The spatial distribution function (SDF) calculated on the trajectory of the most populated cluster extracted from the MD simulations, however, featured no water density spots near the hydrophobic residues in the AP of AcrB but showed a higher number of dense regions in AcrD at identical density isovalues (Fig. 5b).
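An RDF of water oxygens around pocket residues can be obtained, for instance, with MDTraj (selection strings and file names are again placeholders, not the authors' setup):

```python
import mdtraj as md

traj = md.load("acrb_equilibrated.xtc", top="acrb_start.pdb")

# Pairs between heavy atoms of (hypothetical) AP residues and water O.
pairs = traj.top.select_pairs("resid 660 to 690 and not element H",
                              "water and name O")

# g(r) from 0 to 1 nm; a first solvation peak near 0.19 nm (~1.9 A)
# would correspond to the first hydration shell discussed in the text.
r, g_r = md.compute_rdf(traj, pairs, r_range=(0.0, 1.0), bin_width=0.005)
peak = r[g_r.argmax()]
print(f"first-shell peak at ~{peak * 10:.1f} A")
```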
### Deep Pocket of the Tight protomer
The DP is a more deeply located cavity within the putative substrate transport pathway of RND pumps (Fig. 2), and is likely the recognition site for low-molecular-mass compounds15. According to the crystal structures, this pocket exists in a collapsed state in the Loose and Open protomers but is wide open in the Tight protomer; therefore, all the analyses concerning this site were performed on the Tight protomers of AcrB and AcrD. Based on primary sequence analysis, most of the hydrophobic residues in the DP of AcrB are replaced by polar/charged amino acids in AcrD (Supplementary Figs S1–S2). The ensuing effects on the physicochemical environment of the DP were thus characterized by the aforementioned pocket descriptors.
#### Pocket Volume and Shape
Like the AP, the DP showed a partial closure during the dynamics, yet its volumes and minimal projection areas (Fig. 6a and c and Supplementary Table S3) remained large enough to accommodate its ligands. The cluster distribution was, similarly, slightly more extended for AcrD than for AcrB. Principal component analysis showed that the essential dynamics of the DP in AcrD is spread throughout the pocket, unlike the less dynamic and more localized motions of the DP in AcrB (Fig. 6b and d).
#### MLP and LI
The DP featured larger differences than the AP in the values of the LIs calculated for AcrB and AcrD, despite a reduction in the absolute values when the weighted averages extracted from the MD clusters are compared to the pre-MD structures (Table 4). The LI values indicated a much more pronounced hydrophobic character of this pocket in AcrB than in AcrD. Most clusters of AcrB were associated with LI values larger than 10, the highest values occurring for clusters 1, 3, 5, and 8 (LI of 14.4, 17.1, 17.0, and 17.2, respectively; see Supplementary Fig. S9). Together, the first three clusters covered 80% of the conformations sampled by AcrB. In AcrD, the most populated clusters (1 to 5) had LIs ranging from 0.5 to 2.8 (Supplementary Fig. S9). Therefore, as already seen for the AP, the MLP proved to be a robust feature of the pockets, conserved across the different conformations assumed during the dynamics. The lipophilic potential surfaces of the pre-MD structures and the most populated clusters are reported in Fig. 7a,b, highlighting the presence of pronounced lipophilic regions in AcrB in comparison to three less extended spots in AcrD.
#### Electrostatic Potential
The electrostatic potential projected on the surfaces of the DP indicated a relatively denser positive environment in AcrD than in AcrB (blue areas in Fig. 7c,d). Notably, the difference was more pronounced when the electrostatic potential surfaces were compared for the representatives of the most populated clusters. In AcrB, an extended surface area of negative potential appeared, while in AcrD the distribution of areas of negative and positive potentials did not change much, the latter still presenting a greater positive component with dispersed negative patches. As in the case of the AP, the partial closure of the pockets seen in the MD simulations of the apo proteins did not influence these main findings (Supplementary Fig. S10).
#### Hydration Analysis
The RDF profile of water around the DP showed a minor difference in the intensity of the peak between AcrB and AcrD, essentially related to the different hydration of the HP-trap region17 (Fig. 8a). This is also consistent with the replacement, in AcrD, of three of the five phenylalanine residues present in the HP-trap of AcrB (only F610 and F628 are conserved). Indeed, the SDF clearly displayed a very low probability of hydration near this region in AcrB (Fig. 8b).
### Fragment-Based Binding Site Characterization
In addition to the global physicochemical pocket descriptors discussed above, the ligand-binding properties of a protein are governed by the number, strength and spatial distribution of binding energy hot spots43. A fragment-based binding site analysis was thus performed, employing probes with different physicochemical features (Supplementary Fig. S11) to identify hotspots responsible for specificity by mapping chemical functionalities onto the internal surface of the two proteins.
Several multi-functional sites (MFSs) were identified within the binding pockets of AcrB and AcrD. While the AP of both proteins showed a consistently higher number of MFSs in comparison to the DP (Table 5), the level of promiscuity became distinct when comparing the MFSs in the latter pocket. The DP in AcrB showed an extended MFS (Fig. 9) in the pre-MD structure, where substrates like minocycline14,15,16 and doxorubicin14, 16 and inhibitors like P9D17 and MBX293144 were crystallographically resolved. Although closure of the DP during the simulations resulted in the loss of the large extended MFS found pre-MD, it created other MFSs, thereby preserving the promiscuity of the DP in AcrB, as seen in the representatives of the most populated clusters (Fig. 9 and Supplementary Fig. S12). In AcrD, the DP and the interface/G-loop showed only a few consensus sites (CSs) and lacked a true MFS both in the pre-MD structure and in the clusters sampled during MD. An interesting feature was that the interface between the pockets, including the G-loop, almost always hosted an MFS in AcrB.
## Discussion
AcrB and AcrD are the major RND transporters of E. coli. They feature an overall good level of sequence (and likely fold) identity and similarity, and indeed show partly overlapping substrate specificities. However, they have distinct abilities to expel some classes of compounds; for instance, only AcrD recognizes aminoglycosides. The peculiarities of each transporter are likely related to the specific physicochemical features of the main recognition pockets, i.e. the AP and DP. Indeed, these two sites feature a lower degree of sequence conservation compared to the entire protein (see Supplementary Figures S1–S2). In particular, the AP is better conserved than the DP, with nearly 60% vs. 40% identical residues, respectively. An inspection of the mismatched residues between AcrB and AcrD showed that the binding pockets of the latter protein are populated with more polar/charged residues than those of the former, likely facilitating the recognition and transport of more hydrophilic molecules by AcrD. However, this hypothesis requires validation through the rationalization of the different substrate susceptibilities in terms of molecular descriptors of the binding pockets. Moreover, the impact of the dynamic nature of these transporters on the pocket environment should also be considered for gaining a more realistic understanding of their differential recognition and transport events as seen in vitro or in vivo.
For this reason, in this work we compared several molecular descriptors calculated on the two main putative binding sites (AP in the Loose protomer and DP in the Tight protomer) within the periplasmic domains of AcrB and AcrD. In addition to experimental structures and homology models, which represent static snapshots of these biologically dynamic systems, we performed our analyses on a set of structures extracted from extensive MD simulations of the apo-proteins for assessing the influence of pocket dynamics. We recall that the MD simulations of AcrD were started from a homology model built using the AcrB structure as template. Clearly, the structures of cluster representatives for the former protein could feature a lower level of confidence than those of the latter. However, the AcrD model was found to be as good as experimental structures using state-of-the-art bioinformatic validation protocols (see Methods). In addition, the MD simulations of AcrD were as stable as those of AcrB, further pointing to the reliability of our findings.
The first descriptor we considered was the pocket volume and shape. However, a purely steric filter for substrates is quite unlikely because of the large volumes of all the pockets considered in the present analysis, which are at least twice as voluminous as the largest compounds transported by AcrB and AcrD. Moreover, the average values of the volumes and minimal projection areas for the AP and the DP of both proteins are, within errors, very similar; therefore, differences in the substrate specificities of AcrB and AcrD cannot be traced back to the size of such large pockets. Interestingly, in both transporters the two pockets partly collapsed during the MD simulations with respect to the conformations seen in the X-ray crystal structures and in the homology models of AcrB and AcrD, respectively. The reduction amounted to 30% in both the AP and DP of AcrB, while it was 15% in the AP and 28% in the DP of AcrD with respect to the pre-MD structures. This behavior is consistent with the findings of Fischer and Kandt33, who noticed a closure of the DP in the Tight protomer of AcrB in the absence of substrate during shorter MD simulations than those reported here. Consistent with this hypothesis, we also found the DP volume to be at least 1000 Å3 larger in substrate-bound complexes (Supplementary Table S4). Moreover, the population distributions over the clusters extracted from the MD simulations offer interesting insights into the different behavior of the two transporters. First, for both the AP and DP, the first three clusters identified for AcrB cover roughly 90% and 80% of the trajectories, respectively, whilst the distribution is wider for AcrD, especially for the AP. Straightforwardly attributing this diversity to dissimilar flexibility might not be completely correct and could hide interesting features associated with the dynamics of the regions considered. For instance, as visualized in the porcupine plots of the first principal component (Fig. 3b and d), the entire AP of AcrD exhibits an almost coherent motion with similar magnitudes of the eigenvector components (depicted by the length of the arrows), whereas in AcrB the loop residues 675 to 678 lining the base of the AP show larger rearrangements. The dynamics of this loop (the Thr676-loop, hereafter referred to as the "bottom-loop") represents a peculiar feature of the AP of AcrB that is not shared with AcrD. The structures of the cluster representatives of AcrB can be partitioned into two groups featuring "up" and "down" conformations of the bottom-loop (Fig. 10). The most populated cluster is characterized by an "up" conformation, while the crystal structures exhibited only the "down" configuration, as does the second most populated cluster representative (Supplementary Fig. S13). A similar flip is not observed in the AP of AcrD, where the analog of the bottom-loop always remains close to the pre-MD arrangement. Given such major conformational shifts of the bottom-loop in AcrB, it is very likely that this loop contributes to induced fit and minimizes the steric hindrance for the large substrates of AcrB, a hypothesis compatible with the larger size of some AcrB substrates that are not transported by AcrD. The importance of this bottom-loop in regulating access to the porter domain, and its possible active role in substrate transport by pushing compounds towards the hydrophobic binding pocket, was already suggested by Fischer and Kandt, who however sampled only "down" conformations in their MD simulations33.
Moreover, according to Kobayashi and co-workers, the mutation E673G (located close to the bottom-loop) in AcrB, in combination with Q569R and I626R, conferred on this protein the ability to recognize anionic beta-lactams, which are typical substrates of AcrD. Thus, the analysis of the volumes, although not much enlightening per se, allowed us to identify specific structural features more directly involved in the entrance into and transport to a pocket, which might be relevant in determining substrate specificity.
Next to the volume analysis, we calculated the LIs for the AP and the DP of both transporters in order to quantify how the different distribution of hydrophobic residues could affect substrate recognition, and how this property is tuned by the dynamics of the protein. Both pockets of AcrB are consistently characterized by higher LI values than those of AcrD, independently of the rather important structural changes occurring in these pockets during the MD simulations of the apo-proteins (Tables 3 and 4). The higher lipophilicity of the AP and DP in AcrB, compared to the same pockets in AcrD, provides the favorable environment that AcrB requires for its hydrophobic substrates to bind. However, the specific chemical environment of the AP is neither entirely hydrophobic nor entirely polar in either protein. Such a dispersed character allows ligands with different physicochemical properties to bind through weak polar and hydrophobic interactions45, while facilitating easy transport by preventing strong interactions of the substrates with residues of the pocket. Note that the values of the LIs for the AP and DP of AcrD are essentially identical, whilst in AcrB there is a marked difference between the two sites, the DP being the more lipophilic. This could be an indication that the DP is the site where substrates are differentiated between AcrB and AcrD in terms of their lipophilicity. In other words, the DP could function as a lipophilicity-based selectivity filter for low-molecular-mass compounds. This proposal agrees with previous suggestions based on the experimental results of Yamaguchi et al.46. The difference between the DP of AcrB and AcrD became even more prominent on comparing their molecular lipophilic surfaces (Fig. 7a,b). The MLP isosurfaces are significantly wider in AcrB than in AcrD, which correlates well with the nature of the reported substrates transported by the former protein. Interestingly, the presence of phenylalanines in the G-loop of AcrB, but not of AcrD, creates a large hydrophobic bridge between the DP and the AP, which would facilitate the anchoring of aromatic compounds in the AP and their subsequent transport to the DP. The presence of polar/charged residues in the DP of AcrD results in its increased hydration compared to the DP of AcrB, and the nature of water dynamics in this region would further influence the binding behavior of potential substrate molecules.
The local stereochemistry and distribution of functional groups in a region govern both the ordering of water molecules and their biologically important interactions in that region. The structure and dynamics of the first water hydration shells around a putative binding pocket are of primary importance, given the relevance of water displacement for the free energy balance of the recognition event47. The plots of the SDF around the AP and the DP of both transporters highlight how their different hydrophobic potentials influence hydration profiles. In particular, a lower degree of hydration was seen near the hydrophobic part of the AP in AcrB than in AcrD (Fig. 5b). Also for the DP, our analysis provides clear evidence of the contribution of the HP-trap in determining the lower hydration of this domain compared to that of AcrD. While hydration spots are homogeneously distributed in the SDF calculated for AcrD, the DP of AcrB features several zones without hydration, especially around the HP-trap (Fig. 8b).
Electrostatic complementarity between the pocket and substrate molecules is essential for initial substrate recruitment and for increasing their association rate48. This analysis is therefore of particular interest for the AP, which is more peripheral than the DP. For AcrB, the distribution of positively and negatively charged patches on the molecular surface of this site is fairly homogeneous (Fig. 4c,d). Interestingly, an electrostatic funnel with a negative gradient leading from the periplasm to the centre of the AP of AcrB is recognizable in Fig. 11; this funnel could assist the long-range recognition of positively charged compounds, and is compatible with the monocationic character of several substrates of AcrB. The electrostatic surface of this site in AcrD instead reveals a marked positive patch on the upper part, due in part to the presence of residues like Arg568 and Arg625, which have recently been reported as key residues for the specificity of AcrD towards negatively charged molecules like the anionic beta-lactams22.
Concerning the DP, the large negative electrostatic environment presented to the incoming substrate in AcrB would favor the recognition and binding of cationic compounds. However, this becomes unfavorable for hydrophilic polycationic aminoglycosides because of the high lipophilicity of this pocket in AcrB, which likely functions as a lipophilicity-based selectivity filter as discussed above. In AcrD, a denser positive environment than in AcrB is identifiable, originating from the electrostatic contributions of amino acids like Arg and Lys that replace their less polar counterparts in AcrB (Supplementary Figs S1–S2). In conjunction with the low lipophilicity of this pocket in AcrD, the observed mosaic-like electrostatic patches provide a favorable binding site for anionic beta-lactams as well as for polycationic aminoglycosides. On the other hand, the poor electrostatic and hydrophilic complementarity provided by the DP of AcrB permits the binding of charged molecules like anionic beta-lactams, but with far lower affinity than in AcrD.
Multidrug transporters are known to have large, flexible, overlapping substrate binding pockets rich in polar and aromatic residues, allowing them to bind substrate molecules at different locations and in different orientations. In other words, these proteins show polyspecificity, with no inherent ligand specificity that could otherwise stem from the binding site geometry49. Our results suggest that features such as the shape, lipophilicity, electrostatic potential and hydration of the AP and DP are distinctive traits separating AcrB from AcrD. This agrees with findings on other multidrug transporters, where nonpolar and aromatic side chains impose specific prerequisites on drug size and shape. To probe this further, we performed a fragment-based binding site characterization using FTMap, whose philosophy mirrors experimental high-throughput and fragment screening methods. Also, since its algorithm does not rely on surrogate measures of ligand-binding propensity such as pocket volume, cavity depth or the ability to bind non-polar spheres, the results obtained here complement those from the physicochemical pocket descriptors discussed above.
As evident from the overall distribution of MFSs (Table 5), both AcrB and AcrD provide multiple binding possibilities with different functionalities, as expected from such promiscuous transporter proteins. This is distinct from the limited number of MFSs one would detect in the restrained substrate binding sites of ordinary substrate-receptor systems restricted to a specific class of substrates45. In particular, AcrB, with its numerous and widespread MFSs, offers a greater level of promiscuity for diverse substrate types than AcrD, which, with its smaller localized MFSs, imposes certain prerequisites on the substrates being recognized. While the AP in both proteins showed comparable numbers of MFSs dispersed within the pocket, a clear distinction is noticeable in the DP (as also observed with the pocket descriptors discussed above), where a true, widespread MFS is seen only in AcrB. The DP of this transporter is clearly a multi-functional (or multidrug binding) site with a higher preference for hydrophobic and aromatic fragments alongside hydrogen-bond donor and acceptor fragments. The non-selective character of weakly polar (Q176, G616) and weakly hydrophobic (F178, Y327, F615, F617, F628) interactions is a predominant contributor to the promiscuous binding behavior of the AcrB DP30. The DP of AcrD does not reach the level of multi-functionality seen in AcrB, and shows a greater preference for hydrogen-bond donors/acceptors (R44, D134, N136, T139, Y178, Y327, S614, G615), together with a very limited preference for hydrophobic fragments (Y178, Y277, F627). The MFSs identified here are in good agreement with the data reported for AcrB by Imai et al.45, and were also found close to the residues identified as crucial for the recognition of anionic beta-lactams by Kobayashi et al.22, thereby strengthening the reliability of our findings.
As seen from the distribution of MFSs in the various MD clusters (Supplementary Fig. S12), their position is not constant, and this dynamicity (attributed to spatial changes in internal cavities caused by peristaltic motions50) is most likely important to prevent the substrate from being trapped in a single site and to facilitate its efflux by multisite-drug-oscillation46. Additional studies involving substrate-bound complexes can provide information on the interaction profiles of these homologous Acr pumps with their corresponding substrates. However, the redundancy of residue types in the binding pocket, which lets substrates easily adapt their binding orientation in the presence of mutations, as identified by Bohnert et al. in AcrB51, makes these pumps very challenging for studies based on simple molecular docking30.
## Conclusions and Perspectives
In this study, we performed a comparative analysis of physicochemical properties such as pocket volume and shape, lipophilicity, electrostatic potential, hydration and multi-functional sites of AcrB and AcrD to rationalize their differential substrate specificities. Importantly, these analyses were performed not only on static structures but also on conformations extracted from extensive MD simulations, thereby accounting for the impact of protein dynamics. Our results reveal several features in which both the AP and the DP differ considerably between these two transporters. First, the calculated lipophilic potential turned out to be significantly different between the AP and DP of AcrB and between the corresponding pockets of AcrB and AcrD, even when the dynamics of the pockets are taken into account. In particular, the DP of AcrB is more lipophilic than all other sites, suggesting a possible role for this pocket as a lipophilicity-based selectivity filter. Second, we observed specific differences in the electrostatic environment within the pockets. In particular, the presence of an electrostatic funnel leading into the AP of AcrB could be important for the recognition of monocationic compounds by this transporter. These two properties thus likely play a central role in governing substrate recognition by, and the specificity of, AcrB and AcrD. Meanwhile, the cavity volume, which remains large enough to accommodate all potential substrate molecules, possibly has an indirect effect on the lipophilic and electrostatic environment and on the distribution of MFSs, which, together with the ensuing hydration within the pocket, govern the recognition and transport of substrates by these pumps. In addition, specific features such as the flipping conformations of the bottom-loop and the lipophilic bridge created by Phe617 of the G-loop, both present in AcrB but not in AcrD and neither identifiable from simple sequence analysis, are expected to play a key role in the recognition and transport functions of these pumps.
More exhaustive studies, including molecular docking and molecular dynamics simulations of selected substrates in the binding pockets of AcrB and AcrD, are being considered to further characterize these putative binding sites on the basis of substrate-protein interaction patterns.
## Methods
### Homology modeling of AcrD
A reliable structure of the system of interest is the starting point and main ingredient of any structure-based computational study. Since the structure of AcrD has not yet been resolved experimentally, we built it by template-based homology modeling. The amino acid sequence of the full-length AcrD transporter protein from E. coli was retrieved from the UniProt database52 (UNIPROT ID: P24177) and searched for the best available template structures bearing a homologous relationship to the query sequence using the NCBI-BLAST tool53 against the Protein Data Bank (PDB) (www.rcsb.org). The AcrB sequence showed the highest identity (~66%; similarity of ~80%) with the fewest gaps over the largest sequence coverage; therefore, its high-resolution (1.9 Å) crystal structure (PDB ID: 4DX516) was chosen as the template for modeling AcrD. The two protein sequences were optimally aligned with ClustalOmega54, and the results were visually inspected to ensure the absence of gaps in important secondary structure regions. Modeller 9.1340 was used to generate a total of 100 asymmetric models of AcrD based on the AcrB template, using an optimization method combining slow MD with a very thorough variable target function method over 300 iterations; this whole cycle was repeated twice unless the objective function MOLPDF exceeded 10⁶. The resulting models were ranked by their discrete optimized protein energy (DOPE)55 scores, and the top 5 models (with the lowest DOPE scores) were selected for individual structure quality checks. Each model was further subjected to loop refinement using Modeller, and to overall structure relaxation by energy minimization using AMBER1456. The most reliable model was then selected based on various geometric and stereochemical quality factors evaluated for backbone angles, side chain flips, rotamers, steric clashes, etc. using the PROCHECK57, ERRAT58, ProSA59 and Verify3D60 programs available through MolProbity61 and the Structure Analysis and Verification Server (http://services.mbi.ucla.edu/SAVES/).
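For orientation, the core of this protocol maps onto Modeller's standard automodel interface roughly as follows. This is a minimal sketch under stated assumptions, not the actual script used in this work: the alignment file name ('acrd_acrb.ali') and the alignment codes are placeholders.

```python
# Minimal Modeller sketch of the modeling step described above.
# 'acrd_acrb.ali', '4DX5' and 'AcrD' are placeholder alignment names.
from modeller import *
from modeller.automodel import *

env = environ()
mdl = automodel(env,
                alnfile='acrd_acrb.ali',      # ClustalOmega alignment (PIR format)
                knowns='4DX5',                # AcrB template structure
                sequence='AcrD',              # target sequence
                assess_methods=(assess.DOPE,))
mdl.starting_model = 1
mdl.ending_model = 100                        # 100 candidate models, as in the text
mdl.md_level = refine.slow                    # slow-MD refinement of each model
mdl.make()
# Candidates are then ranked by DOPE score and checked with PROCHECK,
# ERRAT, ProSA and Verify3D, as described above.
```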
We also performed comparative structural analyses by superimposing the modeled AcrD structure onto the experimentally determined X-ray crystal structure of AcrB used as the template. All the above quality checks were also applied to the crystal structure of AcrB for use as a reference. Visual inspections were performed with VMD 1.9.162 and PyMOL63.
### Molecular dynamics simulations of AcrB and AcrD
MD simulations of the crystal structure of AcrB (PDB ID: 4DX5) and of the most reliable homology model of AcrD (see Supplementary Table S1) were carried out using the AMBER14 molecular modeling software56. Protomer-specific protonation states18 were adopted, with E346 (E346) and D924 (D922) protonated in both the Loose and Tight protomers and deprotonated in the Open protomer of AcrB (AcrD). The residues D407 (D407), D408 (D408) and D566 were protonated only in the Open protomer of AcrB (AcrD); the charge state of residue L565 of AcrD, corresponding to D566 in AcrB, naturally required no modification. The topology and initial coordinate files for these apo-protein structures were created using the LEaP module of AmberTools14. The proteins were then embedded in 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphoethanolamine (POPE) bilayer patches, solvated with the explicit TIP3P water model, and neutralized with the required number of randomly placed K+ ions28, 29, 64. The ion count was then adjusted to give an osmolarity of 0.15 M KCl. Embedding of the protein into a pre-equilibrated POPE bilayer patch was performed using the PPM server65 and subsequently the CharmmGUI tool66. The lipid residue nomenclature was converted from the CHARMM to the AMBER format using the charmmlipid2amber.py python script provided with AmberTools. The central pore lipids were then added after calculating the number of lipids required in each leaflet, obtained by dividing the approximate area of the central pore by the standard area per lipid of POPE molecules67. Periodic boundary conditions were used, and the distance between the protein and the edge of the box was set to be at least 30 Å in each direction.
Multi-step energy minimization with a combination of steepest descent and conjugate gradient methods was carried out using the pmemd program implemented in AMBER14 to relax internal strain in the systems while gradually releasing positional restraints. Following this, the systems were heated from 0 to 310 K by 1 ns of constant-volume (NVT) heating (0–100 K) followed by 5 ns of constant-pressure (NPT) heating (100–310 K), with the phosphorus head groups of the lipids restrained along the z-axis to allow membrane merging and to bring the pressure of the system to 1 bar. A Langevin thermostat (collision frequency of 1 ps−1) was used to maintain a constant temperature, and multiple short equilibration steps of 500 ps under anisotropic pressure scaling (Berendsen barostat) in NPT conditions were performed to equilibrate the box dimensions. A time step of 2 fs was used during all these runs, while post-equilibrium MD simulations were carried out with a time step of 4 fs under constant-volume conditions after hydrogen mass repartitioning68. The particle-mesh Ewald (PME) algorithm was used to evaluate long-range electrostatic forces with a non-bonded cutoff of 9 Å. During the MD simulations, the lengths of all bonds involving hydrogen were constrained with the SHAKE algorithm. Coordinates were saved every 100 ps. The ff14SB69 version of the all-atom Amber force field was used to represent the protein systems, while lipid1467 parameters were used for the POPE bilayer. After equilibration, multi-copy µs-long MD simulations were performed for each system, namely two ~3 μs-long production runs for each transporter (for a total simulation time of ~12 μs). Trajectory analysis was done using the cpptraj module of AmberTools14 and VMD 1.9.1, and graphs were plotted using the xmgrace tool.
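For reference, the production-phase settings above translate into an AMBER input file of roughly the following shape. This is a hedged sketch, not the actual input used here: the step count (nstlim) and output intervals are illustrative placeholders consistent with the 4 fs time step and 100 ps save frequency stated in the text.

```
production MD (illustrative sketch)
 &cntrl
   imin=0, ntx=5, irest=1,             ! restart from equilibrated coordinates
   nstlim=2500000, dt=0.004,           ! 4 fs step (hydrogen mass repartitioning)
   ntb=1,                              ! constant volume
   ntt=3, gamma_ln=1.0, temp0=310.0,   ! Langevin thermostat at 310 K
   cut=9.0,                            ! 9 Angstrom PME direct-space cutoff
   ntc=2, ntf=2,                       ! SHAKE on bonds involving hydrogen
   ntwx=25000, ntpr=25000,             ! write every 100 ps (25000 x 4 fs)
 /
```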
### Principal component analysis
To characterize and highlight possible similarities and differences in the collective motions of the binding pockets, we calculated covariance matrices from the equilibrium trajectories and performed a principal component analysis70, 71. As customary in principal component analysis, the covariance matrix was constructed from the three-dimensional positional fluctuations of the Cα atoms about their ensemble-average positions (after least-squares fitting to remove rotational and translational motion). Diagonalization of the covariance matrix yields a set of eigenvectors and corresponding eigenvalues, which represent the direction and amplitude of the motion, respectively. The eigenvectors are ranked in decreasing order of their associated eigenvalues, such that the first eigenvector represents the largest contribution to the total fluctuation of the system. To visualize the motions represented by the eigenvectors, the structures from the trajectories can be projected onto each eigenvector of interest [principal component (PC)] and transformed back into Cartesian coordinates. The two extreme projections along each eigenvector can then be interpolated to create an animation, or compared to understand which parts of the protein move along that specific eigenvector and to what extent. Usually, a combination of the first few principal components is able to represent most of the collective motions (the "essential dynamics"70) occurring in an MD simulation among the different regions of a protein.
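Since the procedure is compact, it can be sketched directly in numpy. The snippet below is illustrative only; `coords` is assumed to hold the least-squares-fitted Cα coordinates, flattened to shape (n_frames, 3N):

```python
import numpy as np

def principal_components(coords):
    """PCA of positional fluctuations; coords has shape (n_frames, 3N)."""
    fluct = coords - coords.mean(axis=0)    # fluctuations about the mean
    cov = np.cov(fluct, rowvar=False)       # 3N x 3N covariance matrix
    evals, evecs = np.linalg.eigh(cov)      # symmetric matrix -> eigh
    order = np.argsort(evals)[::-1]         # rank by decreasing eigenvalue
    return evals[order], evecs[:, order]

# Projecting each frame onto the first eigenvector gives the time series
# of the dominant collective motion (the "essential dynamics" subspace):
#   pc1 = (coords - coords.mean(axis=0)) @ evecs[:, 0]
```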
### Clustering of MD trajectories
A cluster analysis of the MD trajectories was performed using the average-linkage hierarchical agglomerative clustering method implemented in the cpptraj module of AMBER. Such clustering helps reduce the number of structures to analyze while retaining the large conformational space sampled during the MD runs. In this approach, we clustered the trajectory in two separate instances, based on the root mean square deviation (RMSD) (cutoff set to 3 Å) of the AP in the Loose protomer and of the DP in the Tight protomer. For each protein, the representative structures from the 10 top clusters generated in each of the two cases considered (AP in Loose, DP in Tight) were used for the quantitative analyses, in order to account for the dynamical behavior. Except for the hydration analyses, all non-protein molecules were stripped from the trajectory during post-processing to reduce memory usage and to speed up file processing.
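A hypothetical cpptraj input reflecting this protocol could look as follows; the topology, trajectory and residue-mask names are placeholders (the mask would select the AP or DP residues of the relevant protomer), not the exact commands used in this work:

```
parm acrb.prmtop
trajin production_1.nc
trajin production_2.nc
strip :WAT,K+,POPE
rms first @CA
cluster hieragglo epsilon 3.0 averagelinkage rms :565-880@CA out cnumvtime.dat summary clust_summary.dat repout rep repfmt pdb
run
```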
### Pocket descriptors
The list of the pocket descriptors identified for the present study includes: i) cavity volume and shape; ii) molecular lipophilicity potential; iii) electrostatic potential; iv) site hydration; v) fragment-based binding site characterization. The various pocket descriptors used to characterize the binding site were calculated using specific programs after validating their applicability to RND systems by assessing results against available crystal structures and experimental data, as well as previous computational reports29, 30, 33, 35, 45, 64.
#### Cavity volume and shape
The evolution of the size and shape of the AP and DP during the MD simulations was examined using the two-probe-sphere method of the rbcavity program bundled in the rDock suite72, which provides detailed information on the pocket volume and the plasticity of the site. In this method, the binding site volume was identified by a fast grid-based cavity detection algorithm73 within a sphere of radius 14 Å centred over the pockets, using large and small probe radii of 6.0 Å and 1.5 Å, respectively. These radii were found to be optimal for our case after evaluating different combinations and visually checking their accuracy in predicting the volume of the pocket space, while keeping the inclusion of regions extending outside the pocket of interest to a minimum. rDock also gives information about the approximate shape of the pocket; we could thus provide approximate values for the minimal cross-sectional area associated with each cavity.
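In practice this corresponds to a receptor parameter file and an rbcavity call along the following lines. This is a hypothetical sketch: the mol2 file, pocket center and title are placeholders, and only the mapper section is shown.

```
RBT_PARAMETER_FILE_V1.00
TITLE AcrB_AP_cavity
RECEPTOR_FILE acrb_loose.mol2
SECTION MAPPER
    SITE_MAPPER RbtSphereSiteMapper
    CENTER (10.0,20.0,30.0)
    RADIUS 14.0
    SMALL_SPHERE 1.5
    LARGE_SPHERE 6.0
END_SECTION
```

The cavity grid and volume statistics would then be generated with a call such as `rbcavity -was -d -r acrb_ap.prm`.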
#### Molecular lipophilicity potential
The three-dimensional distribution of lipophilicity in space or on a molecular surface can be described using Molecular Lipophilicity Potential (MLP), which represents the influence of all lipophilic fragmental contributions of a molecule on its environment. The MLP value of a point in space (k) is generated as the result of intermolecular interactions between all fragments in the molecule and the solvent system, at that given point. Thus, MLP can be calculated from the fragmental system of logP and a distance function as shown in the following equation74:
$$MLP_{k}=\sum_{i=1}^{N}F_{i}\cdot f(d_{ik})$$
(1)
where N is the number of fragments, $F_{i}$ is the lipophilic contribution of fragment i of the molecule, and $f(d_{ik})$ is a function of the distance from the measured point in space k to fragment i.
In this way, summing all positive and all negative MLP values associated with each point on the binding pocket yields the lipophilic index (LI):
$$LI=\frac{\Sigma MLP^{+}}{\Sigma MLP^{+}+|\Sigma MLP^{-}|}\times 100$$
(2)
The lipophilicity of the AP in the Loose protomer and of the DP in the Tight protomer was estimated qualitatively and quantitatively in this way using the MLP Tools75 plugin available for PyMOL.
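Equation (2) is straightforward to illustrate; the toy Python snippet below computes the LI from invented per-point MLP values (the array is purely illustrative):

```python
import numpy as np

def lipophilic_index(mlp):
    """Eq. (2): percentage of lipophilic character of a set of MLP values."""
    pos = mlp[mlp > 0].sum()
    neg = abs(mlp[mlp < 0].sum())
    return 100.0 * pos / (pos + neg)

# Invented per-point MLP values sampled over a pocket surface:
mlp = np.array([0.8, -0.2, 1.1, -0.5, 0.3])
print(round(lipophilic_index(mlp), 1))  # 75.9 -> predominantly lipophilic
```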
#### Electrostatic potential
The electrostatic potential surface maps were computed with APBS76, after preprocessing the structures of AcrB and AcrD to assign charges and atomic radii using the PDB2PQR server77. All electrostatic potential calculations were performed at a physiological salt concentration of 0.15 M, with a solvent probe radius of 1.4 Å, a solvent dielectric constant of 78.5, a biomolecular dielectric constant of 2.0, a temperature of 310 K, a minimum grid spacing of 0.5 Å, and the other Poisson-Boltzmann parameters at their defaults.
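Operationally this is a two-step workflow, sketched below with placeholder file names; the APBS input file would carry the solver parameters listed above:

```bash
# Assign charges and radii, then solve the Poisson-Boltzmann equation.
pdb2pqr --ff=AMBER acrb.pdb acrb.pqr
apbs acrb_apbs.in   # 0.15 M salt, eps_solv=78.5, eps_mol=2.0, 310 K, 0.5 A grid
```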
#### Hydration analysis
The radial distribution function (RDF) indicates the probability of finding water molecules at a certain distance from a region or residue of interest, and is commonly used to analyse solution structure revealed by either experimental or computer simulation data.
The RDF analysis of water oxygen atoms was performed using the cpptraj module of AMBER14, in which the RDF is computed from a histogram of the number of solvent particles found as a function of the distance R from an atom (or an ensemble of atoms), normalized by the expected number of solvent particles at that distance in bulk. The normalization is estimated from:
$$Density\times \left[\frac{4\pi }{3}{(R+dR)}^{3}-\frac{4\pi }{3}{R}^{3}\right]$$
(3)
where dR is the bin spacing; the default density value of 0.033456 molecules Å−3 corresponds to a water density of approximately 1.0 g mL−1. A bin spacing of 0.1 Å and a maximum distance of 4.0 Å were used in this case to calculate the RDF of all water oxygen atoms around each atom of the AP in the Loose protomer and of the DP in the Tight protomer over the entire length of the simulation.
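The shell normalization of Eq. (3) can be sketched in a few lines of Python; the counts array below is stand-in data, not a result from this work:

```python
import numpy as np

BULK_DENSITY = 0.033456  # water molecules per cubic angstrom (~1.0 g/mL)

def rdf_from_counts(counts, dr, density=BULK_DENSITY):
    """Normalize per-shell solvent counts by the expected bulk count (Eq. 3)."""
    edges = np.arange(len(counts) + 1) * dr
    shell_volumes = 4.0 * np.pi / 3.0 * (edges[1:]**3 - edges[:-1]**3)
    return counts / (density * shell_volumes)

# e.g. a histogram of water-oxygen distances with 0.1 A bins out to 4.0 A:
counts = np.random.poisson(5, size=40).astype(float)  # stand-in data
g_r = rdf_from_counts(counts, dr=0.1)
```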
Though the RDF clearly shows a difference in the water distribution around the desired regions, it cannot convey the spatial positions of these differences. Hence, the spatial distribution function (SDF) of water around the whole protein was calculated using the Gromacs utility g_spatial78 on trajectory frames grouped into the most populated conformational clusters extracted from the MD simulations. The SDF allows one to determine the three-dimensional density distribution of the aqueous solution around the binding pockets of the transporters. The density isovalue gives information about relative number densities with respect to the average number density of solvent molecules in bulk. Together, the RDF and SDF highlight the hydration around the binding pockets of these proteins, which can be used to understand the molecular mechanism of interaction of water molecules penetrating the pocket in a dynamic manner.
#### Fragment-Based Binding Site Characterization
The FTMap server79, implementing the FTSite algorithm, is a helpful tool for identifying binding sites and the fragments that could serve as starting points for structure- and fragment-based drug design efforts. The main aim of such a fragment-based binding site analysis is to obtain a measure of the ability of the protein (and in particular of the pockets under study) to bind a drug-like molecule.
FTMap identifies the important hot spots based on the consensus clusters of 16 standard probes, which include molecules varying in size, shape and polarity (Supplementary Fig. S11). Such a diverse library of probes is useful to capture a range of interaction types, including hydrophilic, hydrophobic, hydrogen-bonding and aromatic interactions. Regions where clusters of probes of the same type overlap are marked as consensus sites (CS), and regions where clusters of different probe types overlap are marked as multi-functional sites (MFS); both are ranked based on the number of their clusters. Clusters in close proximity to a top-ranked cluster are merged with it, and the protein residues within this region become the top-ranked putative ligand binding site.
# Model Building in R
In which we explore the basics of modeling as an exploratory tool through recording and graphing predictions and residuals, variable interactions, and transformations.
February 28, 2020 · 11 minute read
This post provides an introduction to modeling in R without going into statistical details.1 We’ll go over some examples of fitting models to data, and then examine the list-column data structure. The following libraries are used: tidyverse, modelr, broom, and gapminder.
## Model Basics
Let’s go through an example exploratory workflow of fitting a model to a continuous variable using simulated dataset sim1:
##### 2. Fit a simple linear regression to the data.
Note: To model an interaction between x-variables, use *.
##### 3. Create a new data frame with model prediction data.
Note: add_predictions() adds a single new column with model predictions; spread_predictions() adds one column for each model; and gather_predictions() adds two columns, model and prediction, repeating the input rows for each model.
## Transformations
Transformations can be performed inside the model formula, but if the transformation involves +, -, *, or ^, wrap it in I() so it doesn’t become part of the model specification. Let’s go through an example workflow again, this time with a polynomial transformation.
##### 3. Put predictions into a data frame.
Note: seq_range() provides a specified number of values between the minimum and maximum of a variable, which can be useful for graphing.
## Many Models
If you have a complex dataset, it may be possible to unpack the data using many simple models. For example, the gapminder dataset contains data on the life expectancy, among other variables, in many countries over the course of 50 years. With this data, let’s explore how life expectancy changes over time for each country, and dig into which countries deviate significantly from the rest of the world.
##### 9. Pull out the problem countries into a separate table. Then, plot the life expectancies of those countries over time by joining the table with the original data.
Note: History provides an explanation for where the data breaks down: this graph reveals the devastating effects of the Rwandan genocide and of HIV/AIDS in African countries in the 1990s.
## Data Structures: List-Columns
The life expectancy example above made use of list-column data structures. In general, an effective list-column pipeline will take the following form:
1. Create the list-column.
2. Create other intermediate list-columns by transforming existing list columns.
3. Simplify the list-column back down to a data frame or atomic vector.
### Creating List-Columns
• nest() converts a grouped data frame into a nested data frame with a list-column of data frames.
• mutate() applied with vectorized functions that return a list will create list-columns.
• summarize() applied with summary functions that return multiple results will create list-columns.
### Simplifying List-Columns
In order to manipulate and visualize the data, you will need to simplify list-columns.
• If you want a single value from the list-column, use mutate() with map_lgl(), map_int(), map_dbl(), map_chr() to create an atomic vector.
• If you want many values from the list-column, use unnest() to convert list columns back to regular columns, repeating the rows as many times as necessary.
### Turning Models into Tidy Data
The following three functions help turn models into tidy data, and often make use of list-columns.
• glance() returns a row for each model, where each column gives a model summary.
• tidy() returns a row for each coefficient in the model, where each column has info about estimate/variability.
• augment() returns a row for each row in data, adding extra values like residuals and influence stats.
1. This post is meant for a person who is looking for a refresher on basic modeling in R. The content in this post is based on chapters twenty-two through twenty-five of R for Data Science by Hadley Wickham & Garrett Grolemund.
Circular Motion without Physics
Geometry Level 5
Let $$ABC$$ be a triangle. Let $$I$$ be its incenter. Let $$L, M, N$$ be the circumcenters of triangles $$BIC, AIC, AIB$$, respectively. What is the sum of the powers of $$L, M, N$$ with respect to the circumcircle of $$\triangle ABC$$?
Note: The power of a point $$P$$ with respect to a circle $$\omega$$ with radius $$r$$ and center $$O$$ is $$OP^2 - r^2$$.
# [Python-ideas] Fwd: [RFC] draft PEP: Dedicated infix operators for matrix multiplication and matrix power
Nathaniel Smith njs at pobox.com
Fri Mar 14 02:59:14 CET 2014
Hi all,
In the process of cleaning out old design... peculiarities... in
numpy, I happened to look into the history of attempts to add syntax
for matrix multiplication to Python, since the lack of this is (as
you'll see) at the root of various intractable problems we have. I was
pretty surprised; it turns out that even though numerical folks have
been whinging about missing this operator for ~15 years, the only two
proposals that ever made it to PEP form were PEP 211, aka
"maybe we can sneak matrix multiply past Guido in some sort of...
large, wooden rabbit..."
and
PEP 225, aka "let's add 12 new operators and figure out what to do
with them later"
I'd have rejected these too! So I thought, maybe we should try the
radical tactic of writing down what we actually want, carefully
explaining why we want it, and then asking for it. And at least this
way, if it gets rejected, we'll know that it was rejected for the
right reasons...
You'll notice that this draft is rather more developed than the
average first-round PEP posting, because it's already been the rounds
of all the various numerical package mailing lists to build consensus;
no point in asking for the wrong thing. Don't let that slow you down,
though. I think what we have here is fairly convincing and covers a
lot of the design space (at least it convinced me, which I wasn't sure
of at the start), but I'm still totally open to changing anything here
based on comments and feedback. AFAICT the numerical community would
walk over hot coals if there were an infix matrix multiplication
operator on the other side. (BTW, since this is python-ideas -- have
you considered adding hot coals to python 3? It might do wonders for
uptake.) ...Anyway, the point is, I'm sure I can wrangle them into
accepting any useful suggestions or other changes deemed necessary by
this list.
-n
--- [begin draft PEP -- monospace font recommended] ---
PEP: XXXX
Title: Dedicated infix operators for matrix multiplication and matrix power
Version: $Revision$
Last-Modified: $Date$
Author: Nathaniel J. Smith <njs at pobox.com>
Status: Draft
Type: Standards Track
Python-Version: 3.5
Content-Type: text/x-rst
Created: 20-Feb-2014
Post-History:
Abstract
========
This PEP proposes two new binary operators dedicated to matrix
multiplication and matrix power, spelled @ and @@
respectively. (Mnemonic: @ is * for mATrices.)
Specification
=============
Two new binary operators are added to the Python language, together
with corresponding in-place versions:
======= ========================= ===============================
Op Precedence/associativity Methods
======= ========================= ===============================
@ Same as * __matmul__, __rmatmul__
@@ Same as ** __matpow__, __rmatpow__
@= n/a __imatmul__
@@= n/a __imatpow__
======= ========================= ===============================
No implementations of these methods are added to the builtin or
standard library types. However, a number of projects have reached
consensus on the recommended semantics for these operations; see
Intended usage details_ below.
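For concreteness, here is a toy example of how a library might implement
the proposed hooks. (This is an editorial illustration, not normative
text; the Mat class and its list-of-rows layout are invented, and only
the @ methods are shown.)::

    class Mat:
        def __init__(self, rows):
            self.rows = rows

        def __matmul__(self, other):        # called for: self @ other
            cols = list(zip(*other.rows))
            return Mat([[sum(a * b for a, b in zip(row, col))
                         for col in cols]
                        for row in self.rows])

        def __rmatmul__(self, other):       # called for: other @ self
            return NotImplemented           # defer to the left operand

    a = Mat([[1, 2], [3, 4]])
    b = Mat([[11, 12], [13, 14]])
    (a @ b).rows                            # [[37, 40], [85, 92]]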
Motivation
==========
Executive summary
-----------------
In numerical code, there are two important operations which compete
for use of Python's * operator: elementwise multiplication, and
matrix multiplication. In the nearly twenty years since the Numeric
library was first proposed, there have been many attempts to resolve
this tension [#hugunin]_; none have been really satisfactory.
Currently, most numerical Python code uses * for elementwise
multiplication, and function/method syntax for matrix multiplication;
however, this produces ugly and unreadable code in common
circumstances. The problem is bad enough that significant amounts of
code continue to use the opposite convention (which has the virtue of
producing ugly and unreadable code in *different* circumstances), and
this API fragmentation across codebases then creates yet more
problems. There does not seem to be any *good* solution to the
problem of designing a numerical API within current Python syntax --
only a landscape of options that are bad in different ways. The
minimal change to Python syntax which is sufficient to resolve these
problems is the addition of a single new infix operator for matrix
multiplication.
Matrix multiplication has a singular combination of features which
distinguish it from other binary operations, which together provide a
uniquely compelling case for the addition of a dedicated infix
operator:
* Just as for the existing numerical operators, there exists a vast
body of prior art supporting the use of infix notation for matrix
multiplication across all fields of mathematics, science, and
engineering; @ harmoniously fills a hole in Python's existing
operator system.
* @ greatly clarifies real-world code.
* @ provides a smoother onramp for less experienced users, who are
particularly harmed by hard-to-read code and API fragmentation.
* @ benefits a substantial and growing portion of the Python user
community.
* @ will be used frequently -- in fact, evidence suggests it may
be used more frequently than // or the bitwise operators.
* @ allows the Python numerical community to reduce fragmentation,
and finally standardize on a single consensus duck type for all
numerical array objects.
And, given the existence of @, it makes more sense than not to
have @@, @=, and @@=, so they are added as well.
Background: What's wrong with the status quo?
---------------------------------------------
When we crunch numbers on a computer, we usually have lots and lots of
numbers to deal with. Trying to deal with them one at a time is
cumbersome and slow -- especially when using an interpreted language.
Instead, we want the ability to write down simple operations that
apply to large collections of numbers all at once. The *n-dimensional
array* is the basic object that all popular numeric computing
environments use to make this possible. Python has several libraries
that provide such arrays, with numpy being at present the most
prominent.
When working with n-dimensional arrays, there are two different ways
we might want to define multiplication. One is elementwise
multiplication::
  [[1, 2],     [[11, 12],     [[1 * 11, 2 * 12],
   [3, 4]]  x   [13, 14]]  =   [3 * 13, 4 * 14]]
and the other is matrix multiplication_:
.. _matrix multiplication: https://en.wikipedia.org/wiki/Matrix_multiplication
::
  [[1, 2],     [[11, 12],     [[1 * 11 + 2 * 13, 1 * 12 + 2 * 14],
   [3, 4]]  x   [13, 14]]  =   [3 * 11 + 4 * 13, 3 * 12 + 4 * 14]]
Elementwise multiplication is useful because it lets us easily and
quickly perform many multiplications on a large collection of values,
without writing a slow and cumbersome for loop. And this works as
part of a very general schema: when using the array objects provided
by numpy or other numerical libraries, all Python operators work
elementwise on arrays of all dimensionalities. The result is that one
can write functions using straightforward code like a * b + c / d,
treating the variables as if they were simple values, but then
immediately use this function to efficiently perform this calculation
on large collections of values, while keeping them organized using
whatever arbitrarily complex array layout works best for the problem
at hand.
Matrix multiplication is more of a special case. It's only defined on
2d arrays (also known as "matrices"), and multiplication is the only
operation that has a meaningful "matrix" version -- "matrix addition"
is the same as elementwise addition; there is no such thing as "matrix
bitwise-or" or "matrix floordiv"; "matrix division" can be defined but
is not very useful, etc. However, matrix multiplication is still used
very heavily across all numerical application areas; mathematically,
it's one of the most fundamental operations there is.
Because Python syntax currently allows for only a single
multiplication operator *, libraries providing array-like objects
must decide: either use * for elementwise multiplication, or use
* for matrix multiplication. And, unfortunately, it turns out
that when doing general-purpose number crunching, both operations are
used frequently, and there are major advantages to using infix rather
than function call syntax in both cases. Thus it is not at all clear
which convention is optimal, or even acceptable; often it varies on a
case-by-case basis.
Nonetheless, network effects mean that it is very important that we
pick *just one* convention. In numpy, for example, it is technically
possible to switch between the conventions, because numpy provides two
different types with different __mul__ methods. For
numpy.ndarray objects, * performs elementwise multiplication,
and matrix multiplication must use a function call (numpy.dot).
For numpy.matrix objects, * performs matrix multiplication,
and elementwise multiplication requires function syntax. Writing code
using numpy.ndarray works fine. Writing code using
numpy.matrix also works fine. But trouble begins as soon as we
try to integrate these two pieces of code together. Code that expects
an ndarray and gets a matrix, or vice-versa, may crash or
return incorrect results. Keeping track of which functions expect
which types as inputs, and return which types as outputs, and then
converting back and forth all the time, is incredibly cumbersome and
impossible to get right at any scale. Functions that defensively try
to handle both types as input and DTRT, find themselves floundering
into a swamp of isinstance and if statements.
PEP 238 split / into two operators: / and //. Imagine the
chaos that would have resulted if it had instead split int into
two types: classic_int, whose __div__ implemented floor
division, and new_int, whose __div__ implemented true
division. This, in a more limited way, is the situation that Python
number-crunchers currently find themselves in.
In practice, the vast majority of projects have settled on the
convention of using * for elementwise multiplication, and function
call syntax for matrix multiplication (e.g., using numpy.ndarray
instead of numpy.matrix). This reduces the problems caused by API
fragmentation, but it doesn't eliminate them. The strong desire to
use infix notation for matrix multiplication has caused a number of
specialized array libraries to continue to use the opposing convention
(e.g., scipy.sparse, pyoperators, pyviennacl) despite the problems
this causes, and numpy.matrix itself still gets used in
introductory programming courses, often appears in StackOverflow
answers, and so forth. Well-written libraries thus must continue to
be prepared to deal with both types of objects, and, of course, are
also stuck using unpleasant funcall syntax for matrix multiplication.
After nearly two decades of trying, the numerical community has still
not found any way to resolve these problems within the constraints of
current Python syntax (see Rejected alternatives to adding a new
operator_ below).
This PEP proposes the minimum effective change to Python syntax that
will allow us to drain this swamp. It splits * into two
operators, just as was done for /: * for elementwise
multiplication, and @ for matrix multiplication. (Why not the
reverse? Because this way is compatible with the existing consensus,
and because it gives us a consistent rule that all the built-in
numeric operators also apply in an elementwise manner to arrays; the
reverse convention would lead to more special cases.)
So that's why matrix multiplication doesn't and can't just use *.
Now, in the rest of this section, we'll explain why it nonetheless
meets the high bar for adding a new operator.
Why should matrix multiplication be infix?
------------------------------------------
Right now, most numerical code in Python uses syntax like
numpy.dot(a, b) or a.dot(b) to perform matrix multiplication.
This obviously works, so why do people make such a fuss about it, even
to the point of creating API fragmentation and compatibility swamps?
Matrix multiplication shares two features with ordinary arithmetic
operations like addition and multiplication on numbers: (a) it is used
very heavily in numerical programs -- often multiple times per line of
code -- and (b) it has an ancient and universally adopted tradition of
being written using infix syntax. This is because, for typical
formulas, this notation is dramatically more readable than any
function call syntax. Here's an example to demonstrate:
One of the most useful tools for testing a statistical hypothesis is
the linear hypothesis test for OLS regression models. It doesn't
really matter what all those words I just said mean; if we find
ourselves having to implement this thing, what we'll do is look up
some textbook or paper on it, and encounter many mathematical formulas
that look like:
.. math::
S = (H \beta - r)^T (H V H^T)^{-1} (H \beta - r)
Here the various variables are all vectors or matrices (details for
the curious: [#lht]_).
Now we need to write code to perform this calculation. In current
numpy, matrix multiplication can be performed using either the
function or method call syntax. Neither provides a particularly
readable translation of the formula::
import numpy as np
from numpy.linalg import inv, solve
# Using dot function:
S = np.dot((np.dot(H, beta) - r).T,
np.dot(inv(np.dot(np.dot(H, V), H.T)), np.dot(H, beta) - r))
# Using dot method:
S = (H.dot(beta) - r).T.dot(inv(H.dot(V).dot(H.T))).dot(H.dot(beta) - r)
With the @ operator, the direct translation of the above formula
becomes::
S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)
Notice that there is now a transparent, 1-to-1 mapping between the
symbols in the original formula and the code that implements it.
Of course, an experienced programmer will probably notice that this is
not the best way to compute this expression. The repeated computation
of :math:`H \beta - r` should perhaps be factored out; and,
expressions of the form dot(inv(A), B) should almost always be
replaced by the more numerically stable solve(A, B). When using
@, performing these two refactorings gives us::
# Version 1 (as above)
S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)
# Version 2
trans_coef = H @ beta - r
S = trans_coef.T @ inv(H @ V @ H.T) @ trans_coef
# Version 3
S = trans_coef.T @ solve(H @ V @ H.T, trans_coef)
Notice that when comparing between each pair of steps, it's very easy
to see exactly what was changed. If we apply the equivalent
transformations to the code using the .dot method, then the changes
are much harder to read out or verify for correctness::
# Version 1 (as above)
S = (H.dot(beta) - r).T.dot(inv(H.dot(V).dot(H.T))).dot(H.dot(beta) - r)
# Version 2
trans_coef = H.dot(beta) - r
S = trans_coef.T.dot(inv(H.dot(V).dot(H.T))).dot(trans_coef)
# Version 3
S = trans_coef.T.dot(solve(H.dot(V).dot(H.T)), trans_coef)
Readability counts! The statements using @ are shorter, contain
more whitespace, can be directly and easily compared both to each
other and to the textbook formula, and contain only meaningful
parentheses. This last point is particularly important for
readability: when using function-call syntax, the required parentheses
on every operation create visual clutter that makes it very difficult
to parse out the overall structure of the formula by eye, even for a
relatively simple formula like this one. Eyes are terrible at parsing
non-regular languages. I made and caught many errors while trying to
write out the 'dot' formulas above. I know they still contain at
least one error, maybe more. (Exercise: find it. Or them.) The
@ examples, by contrast, are not only correct, they're obviously
correct at a glance.
If we are even more sophisticated programmers, and writing code that
we expect to be reused, then considerations of speed or numerical
accuracy might lead us to prefer some particular order of evaluation.
Because @ makes it possible to omit irrelevant parentheses, we can
be certain that if we *do* write something like (H @ V) @ H.T,
then our readers will know that the parentheses must have been added
intentionally to accomplish some meaningful purpose. In the dot
examples, it's impossible to know which nesting decisions are
important, and which are arbitrary.
Infix @ dramatically improves matrix code usability at all stages
of programmer interaction.
Transparent syntax is especially crucial for non-expert programmers
-------------------------------------------------------------------
A large proportion of scientific code is written by people who are
experts in their domain, but are not experts in programming. And
there are many university courses run each year with titles like "Data
analysis for social scientists" which assume no programming
background, and teach some combination of mathematical techniques,
introduction to programming, and the use of programming to implement
these mathematical techniques, all within a 10-15 week period. These
courses are more and more often being taught in Python rather than
special-purpose languages like R or Matlab.
For these kinds of users, whose programming knowledge is fragile, the
existence of a transparent mapping between formulas and code often
means the difference between succeeding and failing to write that code
at all. This is so important that such classes often use the
numpy.matrix type which defines * to mean matrix
multiplication, even though this type is buggy and heavily
disrecommended by the rest of the numpy community for the
fragmentation that it causes. This pedagogical use case is, in fact,
the *only* reason numpy.matrix remains a supported part of numpy.
Adding @ will benefit both beginning and advanced users with
better syntax; and furthermore, it will allow both groups to
standardize on the same notation from the start, providing a smoother
on-ramp to expertise.
But isn't matrix multiplication a pretty niche requirement?
-----------------------------------------------------------
The world is full of continuous data, and computers are increasingly
called upon to work with it in sophisticated ways. Arrays are the
lingua franca of finance, machine learning, 3d graphics, computer
vision, robotics, operations research, econometrics, meteorology,
computational linguistics, recommendation systems, neuroscience,
astronomy, bioinformatics (including genetics, cancer research, drug
discovery, etc.), physics engines, quantum mechanics, geophysics,
network analysis, and many other application areas. In most or all of
these areas, Python is rapidly becoming a dominant player, in large
part because of its ability to elegantly mix traditional discrete data
structures (hash tables, strings, etc.) on an equal footing with
modern numerical data types and algorithms.
We all live in our own little sub-communities, so some Python users
may be surprised to realize the sheer extent to which Python is used
for number crunching -- especially since much of this particular
sub-community's activity occurs outside of traditional Python/FOSS
channels. So, to give some rough idea of just how many numerical
Python programmers are actually out there, here are two numbers: In
2013, there were 7 international conferences organized specifically on
numerical Python [#scipy-conf]_ [#pydata-conf]_. At PyCon 2014, ~20%
of the tutorials appear to involve the use of matrices
[#pycon-tutorials]_.
To quantify this further, we used Github's "search" function to look
at what modules are actually imported across a wide range of
real-world code (i.e., all the code on Github). We checked for
imports of several popular stdlib modules, a variety of numerically
oriented modules, and various other extremely high-profile modules
like django and lxml (the latter of which is the #1 most downloaded
package on PyPI). Starred lines indicate packages which export array-
or matrix-like objects which will adopt @ if this PEP is
approved::
Count of Python source files on Github matching given search terms
(as of 2014-04-10, ~21:00 UTC)
================ ========== =============== ======= ===========
module "import X" "from X import" total total/numpy
================ ========== =============== ======= ===========
sys 2374638 63301 2437939 5.85
os 1971515 37571 2009086 4.82
re 1294651 8358 1303009 3.12
numpy ************** 337916 ********** 79065 * 416981 ******* 1.00
warnings 298195 73150 371345 0.89
subprocess 281290 63644 344934 0.83
django 62795 219302 282097 0.68
math 200084 81903 281987 0.68
pickle+cPickle 215349 22672 238021 0.57
matplotlib 119054 27859 146913 0.35
sqlalchemy 29842 82850 112692 0.27
pylab *************** 36754 ********** 41063 ** 77817 ******* 0.19
scipy *************** 40829 ********** 28263 ** 69092 ******* 0.17
lxml 19026 38061 57087 0.14
zlib 40486 6623 47109 0.11
multiprocessing 25247 19850 45097 0.11
requests 30896 560 31456 0.08
jinja2 8057 24047 32104 0.08
twisted 13858 6404 20262 0.05
gevent 11309 8529 19838 0.05
pandas ************** 14923 *********** 4005 ** 18928 ******* 0.05
sympy 2779 9537 12316 0.03
theano *************** 3654 *********** 1828 *** 5482 ******* 0.01
================ ========== =============== ======= ===========
These numbers should be taken with several grains of salt (see
footnote for discussion: [#github-details]_), but, to the extent they
can be trusted, they suggest that numpy might be the single
most-imported non-stdlib module in the entire Pythonverse; it's even
more-imported than such stdlib stalwarts as subprocess, math,
pickle, and threading. And numpy users represent only a
subset of the broader numerical community that will benefit from the
@ operator. Matrices may once have been a niche data type
restricted to Fortran programs running in university labs and military
clusters, but those days are long gone. Number crunching is a
mainstream part of modern Python usage.
In addition, there is some precedence for adding an infix operator to
handle a more-specialized arithmetic operation: the floor division
operator //, like the bitwise operators, is very useful under
certain circumstances when performing exact calculations on discrete
values. But it seems likely that there are many Python programmers
who have never had reason to use // (or, for that matter, the
bitwise operators). @ is no more niche than //.
So @ is good for matrix formulas, but how common are those really?
----------------------------------------------------------------------
We've seen that @ makes matrix formulas dramatically easier to
work with for both experts and non-experts, that matrix formulas
appear in many important applications, and that numerical libraries
like numpy are used by a substantial proportion of Python's user base.
But numerical libraries aren't just about matrix formulas, and being
important doesn't necessarily mean taking up a lot of code: if matrix
formulas only occurred in one or two places in the average
numerically-oriented project, then it still wouldn't be worth adding a
new operator. So how common is matrix multiplication, really?
When the going gets tough, the tough get empirical. To get a rough
estimate of how useful the @ operator will be, the table below
shows the rate at which different Python operators are actually used
in the stdlib, and also in two high-profile numerical packages -- the
scikit-learn machine learning library, and the nipy neuroimaging
library -- normalized by source lines of code (SLOC). Rows are sorted
by the 'combined' column, which pools all three code bases together.
The combined column is thus strongly weighted towards the stdlib,
which is much larger than both projects put together (stdlib: 411575
SLOC, scikit-learn: 50924 SLOC, nipy: 37078 SLOC). [#sloc-details]_
The dot row (marked ******) counts how common matrix multiply
operations are in each codebase.
::
==== ====== ============ ==== ========
op stdlib scikit-learn nipy combined
==== ====== ============ ==== ========
= 2969 5536 4932 3376 / 10,000 SLOC
- 218 444 496 261
+ 224 201 348 231
== 177 248 334 196
* 156 284 465 192
% 121 114 107 119
** 59 111 118 68
!= 40 56 74 44
/ 18 121 183 41
> 29 70 110 39
+= 34 61 67 39
< 32 62 76 38
>= 19 17 17 18
<= 18 27 12 18
dot ***** 0 ********** 99 ** 74 ****** 16
| 18 1 2 15
& 14 0 6 12
<< 10 1 1 8
// 9 9 1 8
-= 5 21 14 8
*= 2 19 22 5
/= 0 23 16 4
>> 4 0 0 3
^ 3 0 0 3
~ 2 4 5 2
|= 3 0 0 2
&= 1 0 0 1
//= 1 0 0 1
^= 1 0 0 0
**= 0 2 0 0
%= 0 0 0 0
<<= 0 0 0 0
>>= 0 0 0 0
==== ====== ============ ==== ========
These two numerical packages alone contain ~780 uses of matrix
multiplication. Within these packages, matrix multiplication is used
more heavily than most comparison operators (< != <=
>=). Even when we dilute these counts by including the stdlib
into our comparisons, matrix multiplication is still used more often
in total than any of the bitwise operators, and 2x as often as //.
This is true even though the stdlib, which contains a fair amount of
integer arithmetic and no matrix operations, makes up more than 80% of
the combined code base.
By coincidence, the numeric libraries make up approximately the same
proportion of the 'combined' codebase as numeric tutorials make up of
PyCon 2014's tutorial schedule, which suggests that the 'combined'
column may not be *wildly* unrepresentative of new Python code in
general. While it's impossible to know for certain, from this data it
seems entirely possible that across all Python code currently being
written, matrix multiplication is already used more often than //
and the bitwise operations.
But isn't it weird to add an operator with no stdlib uses?
----------------------------------------------------------
It's certainly unusual (though Ellipsis was also added without any
stdlib uses). But the important thing is whether a change will
benefit users, not where the code that uses it happens to live. It's
clear from the above that @ will be used, and used heavily. And
this PEP provides the critical piece that will allow the Python
numerical community to finally reach consensus on a standard duck type
for all array-like objects, which is a necessary precondition to ever
adding a numerical array type to the stdlib.
Matrix power and in-place operators
-----------------------------------
The primary motivation for this PEP is @; the other proposed
operators don't have nearly as much impact. The matrix power operator
@@ is useful and well-defined, but not really necessary. It is
still included, though, for consistency: if we have an @ that is
analogous to *, then it would be weird and surprising to *not*
have an @@ that is analogous to **. Similarly, the in-place
operators @= and @@= provide limited value -- it's more common
to write a = (b @ a) than it is to write a = (a @ b), and
in-place matrix operations still generally have to allocate
substantial temporary storage -- but they are included for
completeness and symmetry.
Compatibility considerations
============================
Currently, the only legal use of the @ token in Python code is at
statement beginning in decorators. The new operators are all infix;
the one place they can never occur is at statement beginning.
Therefore, no existing code will be broken by the addition of these
operators, and there is no possible parsing ambiguity between
decorator-@ and the new operators.
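To make the distinction concrete, here is a minimal sketch (assuming a
Python version in which this PEP is implemented; log is a stand-in
decorator defined just for the example)::

    import numpy as np

    def log(func):
        # trivial decorator, for illustration only
        return func

    @log                 # '@' at statement beginning: decorator syntax
    def scale(x):
        return 2 * x

    a = np.eye(2)
    c = scale(a) @ a     # '@' between expressions: the new infix operator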
Another important kind of compatibility is the mental cost paid by
users to update their understanding of the Python language after this
change, particularly for users who do not work with matrices and thus
do not benefit. Here again, @ has minimal impact: even
comprehensive tutorials and references will only need to add a
sentence or two to fully document this PEP's changes for a
non-numerical audience.
Intended usage details
======================
This section is informative, rather than normative -- it documents the
consensus of a number of libraries that provide array- or matrix-like
objects on how the @ and @@ operators will be implemented.
This section uses the numpy terminology for describing arbitrary
multidimensional arrays of data, because it is a superset of all other
commonly used models. In this model, the *shape* of any array is
represented by a tuple of integers. Because matrices are
two-dimensional, they have len(shape) == 2, while 1d vectors have
len(shape) == 1, and scalars have shape == (), i.e., they are "0
dimensional". Any array contains prod(shape) total entries. Notice
that `prod(()) == 1`_ (for the same reason that sum(()) == 0); scalars
are just an ordinary kind of array, not a special case. Notice also
that we distinguish between a single scalar value (shape == (),
analogous to 1), a vector containing only a single entry (shape ==
(1,), analogous to [1]), a matrix containing only a single entry
(shape == (1, 1), analogous to [[1]]), etc., so the dimensionality
of any array is always well-defined. Other libraries with more
restricted representations (e.g., those that support 2d arrays only)
might implement only a subset of the functionality described here.
.. _prod(()) == 1: https://en.wikipedia.org/wiki/Empty_product
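As a concrete illustration of this model, here is a small sketch using
numpy arrays (illustrative only, not part of the specification)::

    import numpy as np

    a0 = np.array(1.0)      # shape == (): a zero-dimensional scalar
    a1 = np.array([1.0])    # shape == (1,): a vector with a single entry
    a2 = np.array([[1.0]])  # shape == (1, 1): a matrix with a single entry

    # each contains prod(shape) == 1 entries, but dimensionality differs
    for a in (a0, a1, a2):
        print(a.shape, a.ndim, a.size)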
Semantics
---------
The recommended semantics for @ for different inputs are:
* 2d inputs are conventional matrices, and so the semantics are
obvious: we apply conventional matrix multiplication. If we write
arr(2, 3) to represent an arbitrary 2x3 array, then arr(3, 4)
@ arr(4, 5) returns an array with shape (3, 5).
* 1d vector inputs are promoted to 2d by prepending or appending a '1'
to the shape, the operation is performed, and then the added
dimension is removed from the output. The 1 is always added on the
"outside" of the shape: prepended for left arguments, and appended
for right arguments. The result is that matrix @ vector and vector
@ matrix are both legal (assuming compatible shapes), and both
return 1d vectors; vector @ vector returns a scalar. This is
clearer with examples (a runnable sketch also follows at the end of
this list).
* arr(2, 3) @ arr(3, 1) is a regular matrix product, and returns
an array with shape (2, 1), i.e., a column vector.
* arr(2, 3) @ arr(3) performs the same computation as the
previous (i.e., treats the 1d vector as a matrix containing a
single *column*, shape = (3, 1)), but returns the result with
shape (2,), i.e., a 1d vector.
* arr(1, 3) @ arr(3, 2) is a regular matrix product, and returns
an array with shape (1, 2), i.e., a row vector.
* arr(3) @ arr(3, 2) performs the same computation as the
previous (i.e., treats the 1d vector as a matrix containing a
single *row*, shape = (1, 3)), but returns the result with shape
(2,), i.e., a 1d vector.
* arr(1, 3) @ arr(3, 1) is a regular matrix product, and returns
an array with shape (1, 1), i.e., a single value in matrix form.
* arr(3) @ arr(3) performs the same computation as the
previous, but returns the result with shape (), i.e., a single
scalar value, not in matrix form. So this is the standard inner
product on vectors.
An infelicity of this definition for 1d vectors is that it makes
@ non-associative in some cases ((Mat1 @ vec) @ Mat2 !=
Mat1 @ (vec @ Mat2)). But this seems to be a case where
practicality beats purity: non-associativity only arises for strange
expressions that would never be written in practice; if they are
written anyway then there is a consistent rule for understanding
what will happen (Mat1 @ vec @ Mat2 is parsed as (Mat1 @ vec)
@ Mat2, just like a - b - c); and, not supporting 1d vectors
would rule out many important use cases that do arise very commonly
in practice. No-one wants to explain to new users why, to solve the
simplest linear system in the obvious way, they have to type
(inv(A) @ b[:, np.newaxis]).flatten() instead of inv(A) @ b,
or perform an ordinary least-squares regression by typing
solve(X.T @ X, X.T @ y[:, np.newaxis]).flatten() instead of
solve(X.T @ X, X.T @ y). No-one wants to type (a[np.newaxis, :]
@ b[:, np.newaxis])[0, 0] instead of a @ b every time they
compute an inner product, or (a[np.newaxis, :] @ Mat @ b[:,
np.newaxis])[0, 0] for general quadratic forms instead of a @
Mat @ b. In addition, sage and sympy (see below) use these
non-associative semantics with an infix matrix multiplication
operator (they use *), and they report that they haven't
experienced any problems caused by it.
* For inputs with more than 2 dimensions, we treat the last two
dimensions as being the dimensions of the matrices to multiply, and
'broadcast' across the other dimensions. This provides a convenient
way to quickly compute many matrix products in a single operation.
For example, arr(10, 2, 3) @ arr(10, 3, 4) performs 10 separate
matrix multiplies, each of which multiplies a 2x3 and a 3x4 matrix
to produce a 2x4 matrix, and then returns the 10 resulting matrices
together in an array with shape (10, 2, 4). The intuition here is
that we treat these 3d arrays of numbers as if they were 1d arrays
*of matrices*, and then apply matrix multiplication in an
elementwise manner, where now each 'element' is a whole matrix.
Note that broadcasting is not limited to perfectly aligned arrays;
in more complicated cases, it allows several simple but powerful
tricks for controlling how arrays are aligned with each other; see
[#broadcasting]_ for details. (In particular, it turns out that
when broadcasting is taken into account, the standard scalar *
matrix product is a special case of the elementwise multiplication
operator *.)
If one operand is >2d, and another operand is 1d, then the above
rules apply unchanged, with 1d->2d promotion performed before
broadcasting. E.g., arr(10, 2, 3) @ arr(3) first promotes to
arr(10, 2, 3) @ arr(3, 1), then broadcasts the right argument to
create the aligned operation arr(10, 2, 3) @ arr(10, 3, 1),
multiplies to get an array with shape (10, 2, 1), and finally
removes the added dimension, returning an array with shape (10, 2).
Similarly, arr(2) @ arr(10, 2, 3) produces an intermediate array
with shape (10, 1, 3), and a final array with shape (10, 3).
* 0d (scalar) inputs raise an error. Scalar * matrix multiplication
is a mathematically and algorithmically distinct operation from
matrix @ matrix multiplication, and is already covered by the
elementwise * operator. Allowing scalar @ matrix would thus
both require an unnecessary special case, and violate TOOWTDI.
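To make these rules concrete, here is a short executable sketch,
assuming a Python and numpy version in which the semantics proposed
here are implemented::

    import numpy as np

    Mat  = np.ones((3, 4))
    vec4 = np.ones(4)
    vec3 = np.ones(3)

    print((Mat @ vec4).shape)   # (3,): matrix @ vector -> 1d vector
    print((vec3 @ Mat).shape)   # (4,): vector @ matrix -> 1d vector
    print(vec4 @ vec4)          # 4.0: vector @ vector -> scalar inner product

    stack = np.ones((10, 2, 3))
    print((stack @ np.ones((10, 3, 4))).shape)  # (10, 2, 4): broadcast matmul
    print((stack @ np.ones(3)).shape)           # (10, 2): 1d promotion + broadcast

    try:
        np.float64(2.0) @ Mat   # 0d (scalar) operand
    except Exception as e:      # rejected, per the rule above
        print(type(e).__name__)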
The recommended semantics for @@ are::
def __matpow__(self, n):
if not isinstance(n, numbers.Integral):
raise TypeError("@@ not implemented for fractional powers")
if n == 0:
return identity_matrix_with_shape(self.shape)
elif n < 0:
return inverse(self) @ (self @@ (n + 1))
else:
return self @ (self @@ (n - 1))
(Of course we expect that much more efficient implementations will be
used in practice.) Notice that if given an appropriate definition of
identity_matrix_with_shape, then this definition will
automatically handle >2d arrays appropriately. Notice also that with
this definition, vector @@ 2 gives the squared Euclidean length of
the vector, a commonly used value. Also, while it is rarely useful to
explicitly compute inverses or other negative powers in standard
immediate-mode dense matrix code, these computations are natural when
doing symbolic or deferred-mode computations (as in e.g. sympy,
theano, numba, numexpr); therefore, negative powers are fully
supported. Fractional powers, though, bring in a variety of
`mathematical complications`_, so we leave it to individual projects
to decide whether they want to try to define some reasonable semantics
for fractional inputs.
.. _mathematical complications:
https://en.wikipedia.org/wiki/Square_root_of_a_matrix
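Since @@ is so far only proposed syntax, a runnable version of the
recommended semantics has to be spelled as an ordinary function; here
is a minimal, iterative numpy-based sketch for 2d arrays::

    import numbers
    import numpy as np

    def matpow(a, n):
        # a stand-in for the proposed 'a @@ n'; 2d arrays only
        if not isinstance(n, numbers.Integral):
            raise TypeError("@@ not implemented for fractional powers")
        if n < 0:
            a, n = np.linalg.inv(a), -n   # negative powers go via the inverse
        result = np.eye(a.shape[0])       # n == 0 yields the identity
        for _ in range(n):
            result = result @ a
        return result

For example, matpow(R, 2) on a 2d rotation matrix R composes the
rotation with itself, and matpow(a, -1) reduces to the ordinary matrix
inverse.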
Adoption
--------
We group existing Python projects which provide array- or matrix-like
types based on what API they currently use for elementwise and matrix
multiplication.
**Projects which currently use * for *elementwise* multiplication, and
function/method calls for *matrix* multiplication:**
The developers of the following projects have expressed an intention
to implement @ and @@ on their array-like types using the
above semantics:
* numpy
* pandas
* blaze
* theano
The following projects have been alerted to the existence of the PEP,
but it's not yet known what they plan to do if it's accepted. We
don't anticipate that they'll have any objections, though, since
everything proposed here is consistent with how they already do
things:
* pycuda
* panda3d
**Projects which currently use * for *matrix* multiplication, and
function/method calls for *elementwise* multiplication:**
The following projects have expressed an intention, if this PEP is
accepted, to migrate from their current API to the elementwise-*,
matmul-@ convention (i.e., this is a list of projects whose API
fragmentation will probably be eliminated if this PEP is accepted):
* numpy (numpy.matrix)
* scipy.sparse
* pyoperators
* pyviennacl
The following projects have been alerted to the existence of the PEP,
but it's not known what they plan to do if it's accepted (i.e., this
is a list of projects whose API fragmentation may or may not be
eliminated if this PEP is accepted):
* cvxopt
**Projects which currently use * for *matrix* multiplication, and
which do not implement elementwise multiplication at all:**
There are several projects which implement matrix types, but from a
very different perspective than the numerical libraries discussed
above. These projects focus on computational methods for analyzing
matrices in the sense of abstract mathematical objects (i.e., linear
maps over free modules over rings), rather than as big bags full of
numbers that need crunching. And it turns out that from the abstract
math point of view, there isn't much use for elementwise operations in
the first place; as discussed in the Background section above,
elementwise operations are motivated by the bag-of-numbers approach.
The different goals of these projects mean that they don't encounter
the basic problem that this PEP exists to address, making it mostly
irrelevant to them; while they appear superficially similar, they're
actually doing something quite different. They use * for matrix
multiplication (and for group actions, and so forth), and if this PEP
is accepted, their expressed intention is to continue doing so, while
perhaps adding @ and @@ on matrices as aliases for * and
**:
* sympy
* sage
If you know of any actively maintained Python libraries which provide
an interface for working with numerical arrays or matrices, and which
are not listed above, then please let the PEP author know:
njs at pobox.com
Rationale for specification details
===================================
Choice of operator
------------------
Why @ instead of some other punctuation symbol? It doesn't matter
much, and there isn't any consensus across other programming languages
about how this operator should be named [#matmul-other-langs]_, but
@ has a few advantages:
* @ is a friendly character that Pythoneers are already used to
typing in decorators, and its use in email addresses means it is
more likely to be easily accessible across keyboard layouts than
some other characters (e.g. $ or non-ASCII characters).
* The mATrices mnemonic is cute.
* It's round like * and :math:`\cdot`.
* The use of a single-character token makes @@ possible, which is
a nice bonus.
* The swirly shape is reminiscent of the simultaneous sweeps over rows
and columns that define matrix multiplication; its asymmetry is
evocative of its non-commutative nature.
(Non)-Definitions for built-in types
------------------------------------
No __matmul__ or __matpow__ are defined for builtin numeric
types (float, int, etc.) or for the numbers.Number
hierarchy, because these types represent scalars, and the consensus
semantics for @ are that it should raise an error on scalars.
We do not -- for now -- define a __matmul__ method on the standard
memoryview or array.array objects, for several reasons.
First, there is currently no way to create multidimensional memoryview
objects using only the stdlib, and array objects cannot represent
multidimensional data at all, which makes __matmul__ much less
useful. Second, providing a quality implementation of matrix
multiplication is highly non-trivial. Naive nested loop
implementations are very slow and providing one in CPython would just
create a trap for users. But the alternative -- providing a modern,
competitive matrix multiply -- would require that CPython link to a
BLAS library, which brings a set of new complications. In particular,
several popular BLAS libraries (including the one that ships by
default on OS X) currently break the use of multiprocessing
[#blas-fork]_. And finally, we'd have to add quite a bit beyond
__matmul__ before memoryview or array.array would be
useful for numeric work -- like elementwise versions of the other
arithmetic operators, just to start. Put together, these
considerations mean that the cost/benefit of adding __matmul__ to
these types just isn't there, so for now we'll continue to delegate
these problems to numpy and friends, and defer a more systematic
solution to a future proposal.
There are also non-numeric Python builtins which define __mul__
(str, list, ...). We do not define __matmul__ for these
types either, because why would we even do that.
Unresolved issues
-----------------
Associativity of @
''''''''''''''''''''''
It's been suggested that @ should be right-associative, on the
grounds that for expressions like Mat @ Mat @ vec, the two
different evaluation orders produce the same result, but the
right-associative order Mat @ (Mat @ vec) will be faster and use
less memory than the left-associative order (Mat @ Mat) @ vec.
(Matrix-vector multiplication is much cheaper than matrix-matrix
multiplication). It would be a shame if users found themselves
required to use an overabundance of parentheses to achieve acceptable
speed/memory usage in common situations, but, it's not currently clear
whether such cases actually are common enough to override Python's
general rule of left-associativity, or even whether they're more
common than the symmetric cases where left-associativity would be
faster (though this does seem intuitively plausible). The only way to
answer this is probably to do an audit of some real-world uses and
check how often the associativity matters in practice; if this PEP is
accepted in principle, then we should probably do this check before
finalizing it.
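The flavor of the cost difference is easy to check empirically (again
assuming an environment where @ is available)::

    import numpy as np
    from timeit import timeit

    n = 500
    Mat = np.random.rand(n, n)
    vec = np.random.rand(n)

    # (Mat @ Mat) @ vec performs an O(n**3) matrix-matrix product first;
    # Mat @ (Mat @ vec) performs only O(n**2) matrix-vector products.
    print(timeit(lambda: (Mat @ Mat) @ vec, number=20))
    print(timeit(lambda: Mat @ (Mat @ vec), number=20))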
Rejected alternatives to adding a new operator
==============================================
Over the past few decades, the Python numeric community has explored a
variety of ways to resolve the tension between matrix and elementwise
multiplication operations. PEP 211 and PEP 225, both proposed in 2000
and last seriously discussed in 2008 [#threads-2008]_, were early
attempts to add new operators to solve this problem, but suffered from
serious flaws; in particular, at that time the Python numerical
community had not yet reached consensus on the proper API for array
objects, or on what operators might be needed or useful (e.g., PEP 225
proposes 6 new operators with unspecified semantics). Experience
since then has now led to consensus that the best solution, for both
numeric Python and core Python, is to add a single infix operator for
matrix multiply (together with the other new operators this implies
like @=).
We review some of the rejected alternatives here.
**Use a second type that defines __mul__ as matrix multiplication:**
As discussed above (Background: What's wrong with the status quo?_),
this has been tried for many years via the numpy.matrix type
(and its predecessors in Numeric and numarray). The result is a
strong consensus among both numpy developers and developers of
downstream packages that numpy.matrix should essentially never be
used, because of the problems caused by having conflicting duck types
for arrays. (Of course one could then argue we should *only* define
__mul__ to be matrix multiplication, but then we'd have the same
problem with elementwise multiplication.) There have been several
pushes to remove numpy.matrix entirely; the only counter-arguments
have come from educators who find that its problems are outweighed by
the need to provide a simple and clear mapping between mathematical
notation and code for novices (see Transparent syntax is especially
crucial for non-expert programmers_). But, of course, starting out
newbies with a dispreferred syntax and then expecting them to
transition later causes its own problems. The two-type solution is
worse than the disease.
**Add lots of new operators, or add a new generic syntax for defining
infix operators:** In addition to being generally un-Pythonic and
repeatedly rejected by BDFL fiat, this would be using a sledgehammer
to smash a fly. The scientific python community has consensus that
adding one operator for matrix multiplication is enough to fix the one
otherwise unfixable pain point. (In retrospect, we all think PEP 225
was a bad idea too -- or at least far more complex than it needed to
be.)
**Add a new @ (or whatever) operator that has some other meaning in
general Python, and then overload it in numeric code:** This was the
approach taken by PEP 211, which proposed defining @ to be the
equivalent of itertools.product. The problem with this is that
when taken on its own terms, adding an infix operator for
itertools.product is just silly. (During discussions of this PEP,
a similar suggestion was made to define @ as a general purpose
function composition operator, and this suffers from the same problem;
functools.compose isn't even useful enough to exist.) Matrix
multiplication has a uniquely strong rationale for inclusion as an
infix operator. There almost certainly don't exist any other binary
operations that will ever justify adding any other infix operators to
Python.
**Add a .dot method to array types so as to allow "pseudo-infix"
A.dot(B) syntax:** This has been in numpy for some years, and in many
cases it's better than dot(A, B). But it's still much less readable
than real infix notation, and in particular still suffers from an
extreme overabundance of parentheses. See Why should matrix
multiplication be infix?_ above.
**Use a 'with' block to toggle the meaning of * within a single code
block**: E.g., numpy could define a special context object so that
we'd have::
c = a * b # element-wise multiplication
with numpy.mul_as_dot:
c = a * b # matrix multiplication
However, this has two serious problems: first, it requires that every
array-like type's __mul__ method know how to check some global
state (numpy.mul_is_currently_dot or whatever). This is fine if
a and b are numpy objects, but the world contains many
non-numpy array-like objects. So this either requires non-local
coupling -- every numpy competitor library has to import numpy and
then check numpy.mul_is_currently_dot on every operation -- or
else it breaks duck-typing, with the above code doing radically
different things depending on whether a and b are numpy
objects or some other sort of object. Second, and worse, with
blocks are dynamically scoped, not lexically scoped; i.e., any
function that gets called inside the with block will suddenly find
itself executing inside the mul_as_dot world, and crash and burn
horribly -- if you're lucky. So this is a construct that could only
be used safely in rather limited cases (no function calls), and which
would make it very easy to shoot yourself in the foot without warning.
**Use a language preprocessor that adds extra numerically-oriented
operators and perhaps other syntax:** (As per recent BDFL suggestion:
[#preprocessor]_) This suggestion seems based on the idea that
numerical code needs a wide variety of syntax additions. In fact,
given @, most numerical users don't need any other operators or
syntax; it solves the one really painful problem that cannot be solved
by other means, and that causes painful reverberations through the
larger ecosystem. Defining a new language (presumably with its own
parser which would have to be kept in sync with Python's, etc.), just
to support a single binary operator, is neither practical nor
desirable. In the numerical context, Python's competition is
special-purpose numerical languages (Matlab, R, IDL, etc.). Compared
to these, Python's killer feature is exactly that one can mix
specialized numerical code with code for XML parsing, web page
generation, database access, network programming, GUI libraries, and
so forth, and we also gain major benefits from the huge variety of
tutorials, reference material, introductory classes, etc., which use
Python. Fragmenting "numerical Python" from "real Python" would be a
major source of confusion. A major motivation for this PEP is to
*reduce* fragmentation. Having to set up a preprocessor would be an
especially prohibitive complication for unsophisticated users. And we
use Python because we like Python! We don't want
almost-but-not-quite-Python.
**Use overloading hacks to define a "new infix operator" like *dot*,
as in a well-known Python recipe:** (See: [#infix-hack]_) Beautiful is
better than ugly. This is... not beautiful. And not Pythonic. And
especially unfriendly to beginners, who are just trying to wrap their
heads around the idea that there's a coherent underlying system behind
these magic incantations that they're learning, when along comes an
evil hack like this that violates that system, creates bizarre error
messages when accidentally misused, and whose underlying mechanisms
can't be understood without deep knowledge of how object oriented
systems work. We've considered promoting this as a general solution,
and perhaps if the PEP is rejected we'll revisit this option, but so
far the numeric community has mostly elected to leave this one on the
shelf.
References
==========
.. [#preprocessor] From a comment by GvR on a G+ post by GvR; the
comment itself does not seem to be directly linkable:
.. [#infix-hack] http://code.activestate.com/recipes/384122-infix-operators/
http://www.sagemath.org/doc/reference/misc/sage/misc/decorators.html#sage.misc.decorators.infix_operator
.. [#scipy-conf] http://conference.scipy.org/past.html
.. [#pydata-conf] http://pydata.org/events/
.. [#lht] In this formula, :math:\beta is a vector or matrix of
regression coefficients, :math:V is the estimated
variance/covariance matrix for these coefficients, and we want to
test the null hypothesis that :math:H\beta = r; a large :math:S
then indicates that this hypothesis is unlikely to be true. For
example, in an analysis of human height, the vector :math:\beta
might contain one value which was the average height of the
measured men, and another value which was the average height of the
measured women, and then setting :math:H = [1, -1], r = 0 would
let us test whether men and women are the same height on
average. Compare to eq. 2.139 in
http://sfb649.wiwi.hu-berlin.de/fedc_homepage/xplore/tutorials/xegbohtmlnode17.html
https://github.com/rerpy/rerpy/blob/0d274f85e14c3b1625acb22aed1efa85d122ecb7/rerpy/incremental_ls.py#L202
.. [#pycon-tutorials] Out of the 36 tutorials scheduled for PyCon 2014
(https://us.pycon.org/2014/schedule/tutorials/), we guess that the
8 below will almost certainly deal with matrices:
* Dynamics and control with Python
* Exploring machine learning with Scikit-learn
* How to formulate a (science) problem and analyze it using Python
code
* Diving deeper into Machine Learning with Scikit-learn
* Data Wrangling for Kaggle Data Science Competitions – An etude
* Hands-on with Pydata: how to build a minimal recommendation
engine.
* Python for Social Scientists
In addition, the following tutorials could easily involve matrices:
* Introduction to game programming
* mrjob: Snakes on a Hadoop *("We'll introduce some data science
concepts, such as user-user similarity, and show how to calculate
these metrics...")*
* Mining Social Web APIs with IPython Notebook
* Beyond Defaults: Creating Polished Visualizations Using Matplotlib
This gives an estimated range of 8 to 12 / 36 = 22% to 33% of
tutorials dealing with matrices; saying ~20% then gives us some
wiggle room in case our estimates are high.
.. [#sloc-details] SLOCs were defined as physical lines which contain
at least one token that is not a COMMENT, NEWLINE, ENCODING,
INDENT, or DEDENT. Counts were made by using tokenize module
from Python 3.2.3 to examine the tokens in all files ending .py
underneath some directory. Only tokens which occur at least once
in the source trees are included in the table. The counting script
will be available as an auxiliary file once this PEP is submitted;
until then, it can be found here:
https://gist.github.com/njsmith/9157645
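A rough sketch of the token-counting approach (the actual script at
the link above is authoritative; this only conveys the idea)::

    import collections
    import pathlib
    import token
    import tokenize

    def count_op_tokens(root):
        # tally every operator token in all .py files under 'root'
        counts = collections.Counter()
        for path in pathlib.Path(root).rglob("*.py"):
            with open(path, "rb") as f:
                try:
                    for tok in tokenize.tokenize(f.readline):
                        if tok.type == token.OP:
                            counts[tok.string] += 1
                except (tokenize.TokenError, SyntaxError, UnicodeDecodeError):
                    pass  # skip files the tokenizer cannot handle
        return counts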
Matrix multiply counts were estimated by counting how often certain
tokens which are used as matrix multiply function names occurred in
each package. In principle this could create false positives, but
as far as I know the counts are exact; it's unlikely that anyone is
using dot as a variable name when it's also the name of one of
the most widely-used numpy functions.
All counts were made using the latest development version of each
project as of 21 Feb 2014.
'stdlib' is the contents of the Lib/ directory in commit
d6aa3fa646e2 to the cpython hg repository, and treats the following
tokens as indicating matrix multiply: n/a.
'scikit-learn' is the contents of the sklearn/ directory in commit
69b71623273ccfc1181ea83d8fb9e05ae96f57c7 to the scikit-learn
repository (https://github.com/scikit-learn/scikit-learn), and
treats the following tokens as indicating matrix multiply: dot,
fast_dot, safe_sparse_dot.
'nipy' is the contents of the nipy/ directory in commit
(https://github.com/nipy/nipy/), and treats the following tokens as
indicating matrix multiply: dot.
.. [#blas-fork] BLAS libraries have a habit of secretly spawning
threads, even when used from single-threaded programs, and threads
play very poorly with fork(); the usual symptom is that
attempting to perform linear algebra in a child process causes an
immediate deadlock.
.. [#matmul-other-langs]
http://mail.scipy.org/pipermail/scipy-user/2014-February/035499.html
.. [#github-details] Counts were produced by manually entering the
string "import foo" or "from foo import" (with quotes) into
the Github code search page, e.g.:
https://github.com/search?q=%22import+numpy%22&ref=simplesearch&type=Code
on 2014-04-10 at ~21:00 UTC. The reported values are the numbers
given in the "Languages" box on the lower-left corner, next to
"Python". This also causes some undercounting (e.g., leaving out
Cython code, and possibly one should also count HTML docs and so
forth), but these effects are negligible (e.g., only ~1% of numpy
usage appears to occur in Cython code, and probably even less for
the other modules listed). The use of this box is crucial,
however, because these counts appear to be stable, while the
"overall" counts listed at the top of the page ("We've found ___
code results") are highly variable even for a single search --
simply reloading the page can cause this number to vary by a factor
of 2 (!!). (They do seem to settle down if one reloads the page
repeatedly, but nonetheless this is spooky enough that it seemed
better to avoid these numbers.)
These numbers should of course be taken with multiple grains of
salt; it's not clear how representative Github is of Python code in
general, and limitations of the search tool make it impossible to
get precise counts. AFAIK this is the best data set currently
available, but it'd be nice if it were better. In particular:
* Lines like import sys, os will only be counted in the sys
row.
* A file containing both import X and from X import will be
counted twice.
* Imports of the form from X.foo import ... are missed. We
could catch these by instead searching for "from X", but this is
a common phrase in English prose, so we'd end up with false
positives from comments, strings, etc. For many of the modules
considered this shouldn't matter too much -- for example, the
stdlib modules have flat namespaces -- but it might especially
lead to undercounting of django, scipy, and twisted.
Also, it's possible there exist other non-stdlib modules we didn't
think to test that are even more-imported than numpy -- though we
tried quite a few of the obvious suspects. If you find one, let us
know! The modules tested here were chosen based on a combination
of intuition and the top-100 list at pypi-ranking.info.
Fortunately, it doesn't really matter if it turns out that numpy
is, say, merely the *third* most-imported non-stdlib module, since
the point is just that numeric programming is a common and
mainstream activity.
Finally, we should point out the obvious: whether a package is
import**ed** is rather different from whether it's import**ant**.
No-one's claiming numpy is "the most important package" or anything
like that. Certainly more packages depend on distutils, e.g., than
depend on numpy -- and far fewer source files import distutils than
import numpy. But this is fine for our present purposes. Most
source files don't import distutils because most source files don't
care how they're distributed, so long as they are; these source
files thus don't care about details of how distutils' API works.
This PEP is in some sense about changing how numpy's and related
packages' APIs work, so the relevant metric is to look at source
files that are choosing to directly interact with that API, which
is sort of like what we get by looking at import statements.
.. [#hugunin] The first such proposal occurs in Jim Hugunin's very
first email to the matrix SIG in 1995, which lays out the first
draft of what became Numeric. He suggests using * for
elementwise multiplication, and % for matrix multiplication:
https://mail.python.org/pipermail/matrix-sig/1995-August/000002.html
Copyright
=========
This document has been placed in the public domain.
--
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org
https://tamzinouthwaite.net/estevan/how-join-2-type-document-in-one-on-latex.php
## How to insert an image as front cover and back cover using
Latex Math Symbols Universitetet i Bergen. Here are some commonly used math commands in LaTeX. Fractions. Symbol Command just to get $\cbrt{2}$. \end{document} one for the question and one for each of, My First LaTeX Document; Adding a two-column section to a LaTeX document – this tells LaTeX to make each column vertically aligned to the top..
### LaTeX Tutorial 2 Common Math Notation - Part 1/2 - YouTube
Generating a single LaTeX file by merging different LaTeX. Preparation of Papers in Two Column Format amount of text that can be placed on one page. Please do not use LATEX. on page one of this document is a, The most frequently asked questions following line to your document preamble, rather than the one given doc/latex/psnfss. Reading this document is strongly.
### Combining or merging several TEX files into one document?
Merge PDF files using LaTeX Chongyang Ma's Blog. combine – Bundle individual documents into a single /macros/latex/contrib/combine: Documenta Download the contents of this package in one zip, Sample Latex Files: I am distributing sample Latex files for the Report, Slides and Paper in one column and two column format. It will help you in your documentation.
Combine PowerPoint documents SharePoint Stack Exchange. The most frequently asked questions following line to your document preamble, rather than the one given doc/latex/psnfss. Reading this document is strongly We will create our first LaTeX document. and take one of the following two Elements part for all the common features that belong to every type of document..
Preparation of Papers in Two Column Format amount of text that can be placed on one page. Please do not use LATEX. on page one of this document is a You can also get a FREE word processor-type interface for LaTeX, (2) TexShell (3 so you had to use Ghostview if you wanted to view your entire document
You can also get a FREE word processor-type interface for LaTeX, (2) TexShell (3 so you had to use Ghostview if you wanted to view your entire document Is it possible to divide an MS Word page into two separate columns, where entering information into one Merge the top two cells to Then you can type in
## 1. Join one of the three groups 2. 3. 30 min ) ilo.org
Combine PowerPoint documents SharePoint Stack Exchange. Join one of the three groups 2. The following document cites examples of manufacturing produces quality surgical and cleaning latex gloves., LaTeX Math Symbols Prepared by L. Kocbach, on the basis of this document (origin: David Carlisle, Manchester University) File A.tex contains all necessary code.
### Latex Math Symbols Universitetet i Bergen
Including full LaTeX documents within others Stack Overflow. Help Create Join Login. Open Source Software. LaTeX to RTF convertor that handles equations, LaTeX2RTF 2.3.8. I use document class RevTex4-1., In large LaTeX documents one usually has several tex files, one for each chapter or section, and then they are joined together to generate a single output. This helps.
Creating two columns in article, report or book. to be distinguished when creating multiple columns in a Latex document. 2 Bibliography: very basic one. [1] In large LaTeX documents one usually has several tex files, one for each chapter or section, and then they are joined together to generate a single output. This helps
Obviously, you must remember to \usepackage{array} in your document hline 1 & 2 & 3 \\ \hline 4 & 5 & 6 columns to merge; alignment is Help Create Join Login. Open Source Software. LaTeX to RTF convertor that handles equations, LaTeX2RTF 2.3.8. I use document class RevTex4-1.
### Generating a single LaTeX file by merging different LaTeX
Latex Math Symbols Universitetet i Bergen. How can I change the margins in LaTeX? On this in the preamble of your document are both changed because LaTeX has two side margins, one for odd pages, LaTeX/Document Structure. Instructs LaTeX to typeset the document in two columns if you want a report to be in 12pt type on A4, but printed one-sided in.
### LaTeX to RTF converter / [Latex2rtf-users] LaTeX2RTF with
1. Join one of the three groups 2. 3. 30 min ) ilo.org. My First LaTeX Document; Adding a two-column section to a LaTeX document – this tells LaTeX to make each column vertically aligned to the top. Join one of the three groups 2. The following document cites examples of manufacturing produces quality surgical and cleaning latex gloves..
To produce a simple LaTeX document, You can then type latex paper.tex and the typesetting program One style file you can include if you need phonetic Join one of the three groups 2. The following document cites examples of manufacturing produces quality surgical and cleaning latex gloves.
1.2 Type size options ... 2 5.2 One column abstracts 18.1 Draft documents 25/02/2012 · LaTeX Tutorial 2 - Common Math Notation - Part 1/2 Michelle Krummel. LaTeX Tutorial 2 Using LaTeX to type up a HW assignment or Test - Duration:
14/05/2018 · How do I combine two Word documents that or even open the one version of the merged document I did of How to Merge Documents in Microsoft create new file exer3.tex type document class using article \end{document} 2. Display the output as She said "three". I AM LATEX USING ARTICLE DOCUMENT \end
http://clay6.com/qa/6436/in-a-set-a-if-a-1-a-2-in-r-rightarrow-a-2-a-1-in-r-for-a-1-a-2-in-a-then-wh
# In a set $A$, if $(a_1,a_2) \in R \Rightarrow (a_2,a_1)\in R \text{ for } a_1,a_2 \in A$, then what is the relation $R$ called?
ANSWER: A relation $R$ in a set $A$ is called $\mathbf{symmetric}$ if $(a_1,a_2) \in R \Rightarrow (a_2,a_1)\in R \text{ for } a_1,a_2 \in A$.
answered Mar 8, 2013
https://byjus.com/chemistry/toluene/
# Toluene
## What is Toluene?
Toluene is a liquid, which is colorless, water-insoluble and smells like paint thinners. It is a mono-substituted colorless liquid, consisting of a CH3 group that is attached to a phenyl group.
Toluene is widely used as an industrial raw material and a solvent for manufacturing of many commercial products, including paints and glues.
## Chemical Properties
Toluene is more reactive toward electrophiles than benzene, owing to the electron-releasing methyl group, and it undergoes normal aromatic substitution directed preferentially to the ortho and para positions. Sulfonation gives p-toluenesulfonic acid, and chlorination by Cl2 in the presence of FeCl3 gives the ortho and para isomers of chlorotoluene.
## Toluene Uses
Toluene is mainly used as a precursor to benzene via hydrodealkylation:
$C_{6}H_{5}CH_{3}+H_{2}\rightarrow C_{6}H_{6}+CH_{4}$
The second most common application involves its disproportionation to a mixture of benzene and xylene.
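In balanced form, this disproportionation (a transalkylation) is:

$2\,C_{6}H_{5}CH_{3} \rightarrow C_{6}H_{6} + C_{6}H_{4}(CH_{3})_{2}$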
Precursor to other chemicals: Along with the synthesis of benzene and xylene, toluene is used in the manufacture of the following
• Polyurethane foam
• Trinitrotoluene (TNT), an explosive
• Synthetic Drugs.
Solvent: Toluene is a common solvent used for the following:
• Glues
• Paints
• Paint Thinners
• Printing Ink
• Rubber
• Leather Tanners
• Silicone Sealants
• Chemical Reactants
• Lacquers
• Disinfectants.
Fuel: It can be used in internal combustion engines as gasoline fuel.
Niche applications: It is used as a solvent for carbon nanomaterials, nanotubes and fullerenes.
https://zbmath.org/?q=an:06854682
# zbMATH — the first resource for mathematics
Value regions of univalent self-maps with two boundary fixed points. (English) Zbl 1393.30021
Let $$f$$ be a holomorphic self-map of the unit disk $$\mathbb D$$. A point $$\sigma \in \partial \mathbb D$$ is called a boundary regular fixed point if $$f(\sigma) = \sigma$$ and the angular derivative $$f'(\sigma)$$ is finite. It is known that if $$f \in \mathrm{Hol}(\mathbb D, \mathbb D)$$ has no fixed points in $$\mathbb D$$, then it has a unique boundary regular fixed point $$\tau$$ with $$f'(\tau) \leq 1$$. Such a point $$\tau$$ is called (boundary) Denjoy-Wolff point. The authors find the exact value region $${\mathcal V}(z_0, T)$$ of the point evaluation functional $$f \to f(z_0)$$ over the class of all univalent self-maps $$f$$ of $$\mathbb D$$ having a boundary regular fixed point at $$\sigma = -1$$ with $$f'(-1) = e^{T}$$ and the Denjoy-Wolff point at $$\tau = 1$$.
##### MSC:
* 30C75 Extremal problems for conformal and quasiconformal mappings, other methods
* 30D05 Functional equations in the complex plane, iteration and composition of analytic functions of one complex variable
* 30C35 General theory of conformal mappings
* 30C55 General theory of univalent and multivalent functions of one complex variable
* 30C80 Maximum principle, Schwarz’s lemma, Lindelöf principle, analogues and generalizations; subordination
https://www.physicsforums.com/threads/differentiable-manifolds-over-fields-other-than-r-c.978845/
# I Differentiable manifolds over fields other than R, C
#### WWGD
Gold Member
[Moderator's note: Spin-off from another thread.]
Summary: How do you define a derivative on a manifold with no metric?
I was reading about differentiable manifolds on wikipedia, and in the definition it never specifies that the differentiable manifold has a metric on it. I understand that you can set up limits of functions in topological spaces without a metric being defined, but my understanding of derivatives suggests that you need a metric in both the domain and the codomain, in order to come up with a rate of change which you are finding the limit of. Is there a more general definition of the derivative that is being used here?
You need the structure of a topological vector field K with 0 as a limit point of K-{0}. The TVF structure allows the addition and quotient expression to make sense; you need 0 as a limit point to define the limit as h-->0 and the topology to speak of convergence and a limit.
Last edited by a moderator:
#### fresh_42
Mentor
2018 Award
Or to say it short: $\varphi\, : \,M\longrightarrow N \Longrightarrow D\varphi\, : \,TM\longrightarrow TN$ and the differentiation takes place in the tangent bundle, over a real or complex vector space.
#### Math_QED
Homework Helper
For a differentiable manifold you don't need the charts to be differentiable (smooth). You just require that each chart is a homeomorphism of an open subset of your topological space onto an open subset of an Euclidean space $\mathbb{R}^n$. The differentiability kicks in when we talk about compatibility of charts. If $\phi, \psi$ are two chart, we want that $\phi \circ \psi^{-1}$ is a smooth map on their common domain.
Or maybe I misunderstand the question and you ask about what a differentiable map between two (smooth) manifolds is? What do you mean with a derivative on a manifold? You can only take the derivative of a map.
#### WWGD
Gold Member
Or to say it short: $\varphi\, : \,M\longrightarrow N \Longrightarrow D\varphi\, : \,TM\longrightarrow TN$ and the differentiation takes place in the tangent bundle, over a real or complex vector space.
I think you may be able to do it over the p-adics, but I have no idea how/what that would be like.
#### fresh_42
Mentor
2018 Award
I think you may be able to do it over the p-adics, but I have no idea how/what that would be like.
A non-Archimedean metric? Would be strange, but I have a bit of the suspicion that nobody actually has an idea of the p-adic world. On the other hand they play a role in my book about quadratic forms, and manifolds are not far away, at least orthogonal groups. I recently asked about the possibility to vary the field in a GUT instead of the symmetry groups. I only received a big no-no as answers, but this might have more to do with mainstream than with necessity. Why should it be impossible to build the standard model on orthogonal groups over p-adic vector spaces, possibly with a large p? But as long as nobody proves a version of Noether over p-adic fields, it is unlikely that anybody considers such an approach. They are too deeply caught in the superworld and real eigenvalues as observables.
#### WWGD
Gold Member
By metric do you mean a distance metric or metric tensor? I don't see the need to use a distance metric to define the limit quotient; just need to define addition, quotient ( multiplicative inverse) and limits, and having {0} as a limit point of the complement.
#### fresh_42
Mentor
2018 Award
There is more to a manifold than differentiability. But even differentiability needs to have nearby defined! Sure, this can be done by open neighborhoods and thus by a p-adic topology. I only assume that a non-Archimedean distance will have strange consequences for the analysis on such a manifold.
If you think about it, then the entire (physical) world is build on the concept of invariance of a quadratic form. In GR it is $(-1,1,1,1)$, and in SM it are the unitary groups. Isn't that strange? You only need to be able to distinguish left and right, forward and backward, i.e. $2\neq 0$. The rest is orthogonality.
#### WWGD
Gold Member
Ah, yes, most defs of manifold imply metrizability. I think normal + 2nd countable does it, e.g., by Urysohn. I was thinking of differentiability as a stand-alone.
#### WWGD
Gold Member
There is more to a manifold than differentiability. But even differentiability needs to have nearby defined! Sure, this can be done by open neighborhoods and thus by a p-adic topology. I only assume that a non-Archimedean distance will have strange consequences for the analysis on such a manifold.
If you think about it, then the entire (physical) world is build on the concept of invariance of a quadratic form. In GR it is $(-1,1,1,1)$, and in SM it are the unitary groups. Isn't that strange? You only need to be able to distinguish left and right, forward and backward, i.e. $2\neq 0$. The rest is orthogonality.
Edit: But these are not standard metrics since they are not Real- valued. In the non-standard Reals,e.g., if a is a non-standard number, d(a,0)=a is non-Real.
#### fresh_42
Mentor
2018 Award
But these are not standard metrics since they are not Real- valued. In the non-standard Reals,e.g., if a is a non-standard number, d(a,0)=a is non-Real.
But this is another issue and has nothing to do with p-adic completions. The most abstract concept to do analysis is probably measure theory. However, I have no idea how measure theory works over the p-adics.
The Minkowski metric or the unitarity of symmetry groups are all special versions of orthogonality, or better invariances of a quadratic form. My book about quadratic forms (O'Meara) has not only p-adics as examples, it also deals with Clifford algebras, spinors, and as mentioned, orthogonal groups - all terms which are important in physics. If I were younger, I would attack Noether over p-adics. Who knows, it might be a new unknown way to describe motion, and therewith physics. It isn't crazier than to search for bosinos. The main obstacle is probably not the analysis or at least differentiation, it is likely to develop an intuition for those fields.
#### WWGD
Gold Member
I always found quadratic forms with Q(a,a)=0 for nonzero a intriguing. Positive-definite ones and inner- products somehow feel more reasonable.
#### fresh_42
Mentor
2018 Award
$$f\, : \,\mathbb{Q}_p \longrightarrow \mathbb{Q}_p \; , \;x \longmapsto \left(\dfrac{1}{|x|_p}\right)^2 \text{ and } f(0)=0$$
is differentiable on $\mathbb{Q}_p$ with $f\,'(x)\equiv 0$, but it isn't even locally constant in $x=0$.
This means a differentiable p-adic manifold is fundamentally different than a real one.
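For anyone who wants to poke at this example numerically, here is a quick sketch with exact rationals (vp and abs_p are ad-hoc helpers, not library functions):

Code:
from fractions import Fraction

def vp(x, p):
    # p-adic valuation of a nonzero rational x
    n, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        n += 1
    while den % p == 0:
        den //= p
        n -= 1
    return n

def abs_p(x, p):
    # p-adic absolute value |x|_p = p**(-vp(x))
    x = Fraction(x)
    return Fraction(0) if x == 0 else Fraction(p) ** (-vp(x, p))

p = 5
for k in range(1, 6):
    x = Fraction(p) ** k                  # x -> 0 p-adically as k grows
    fx = (1 / abs_p(x, p)) ** 2           # f(x) = (1/|x|_p)**2 = p**(2k)
    print(k, abs_p(fx, p), abs_p(fx / x, p))   # both shrink toward 0

The difference quotient f(x)/x goes to 0 p-adically, so f'(0) = 0, yet f(x) is nonzero arbitrarily close to 0, so f is not locally constant at 0.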
#### fresh_42
Mentor
2018 Award
I think you may be able to do it over the p-adics, but I have no idea how/what that would be like.
Btw., do you have any imagination how an open ball in a p-adic topology looks like?
#### WWGD
Gold Member
Btw., do you have any imagination how an open ball in a p-adic topology looks like?
Not now, let me think it through.
"Differentiable manifolds over fields other than R, C"
### Physics Forums Values
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving
https://www.physicsforums.com/threads/amplitude-of-multislit-grating.108864/
# Homework Help: Amplitude of multislit grating
1. Jan 31, 2006
### UrbanXrisis
The formula for the amplitude for a multislit grating is:
$$A(Y,t)=\frac{M}{R}cos \left(\frac{2 \pi R}{\lambda}-2 \pi f t \right) \frac{sin^2 \left[(2N+1) \frac{x}{2}\right]}{sin^2 \left[\frac{x}{2}\right]}$$
A spectrum would look something like this (the attached image is not preserved here).
I am trying to find a formula not for the amplitude of the giant maximas but of the height of the maxima right after the giant maxima.
So I know maxima occur when the sine in the denominator is much smaller than the sine in the numerator. But how would I calculate the height of the maximum right after a giant maximum?
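Not a closed form, but the height is easy to check numerically. A quick numpy sketch of the interference factor from the formula above (the choice N = 5 and the sampling range are arbitrary):

Code:
import numpy as np

N = 5
M = 2 * N + 1                                  # the (2N+1) factor
x = np.linspace(1e-6, 6 * np.pi / M, 200000)   # start just past x = 0
I = np.sin(M * x / 2) ** 2 / np.sin(x / 2) ** 2

principal = M ** 2                             # limit of I as x -> 0
side = I[x > 2 * np.pi / M].max()              # first maximum past the first zero
print(principal, side, side / principal)

For large N the first secondary maximum sits near x = 3*pi/M, where the denominator gives a height of about M^2 * (2/(3*pi))^2, i.e. roughly 4.5% of the principal maximum.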
https://brilliant.org/discussions/thread/a-request-for-a-search-bar/
# A Request for a Search Bar
I would just like to ask if Brilliant could design some sort of search bar enabling people to be better connected and search for their friends.
Note by Alex Benfield
3 years, 11 months ago
Yeah, even I feel the need to have one. A lot of my friends are on Brilliant, but I can't see their profiles! · 3 years, 11 months ago
Hi Alex,
A search bar to find members on Brilliant is definitely in store for the future. Staff · 3 years, 11 months ago
yohhooooo!! thank you.. · 3 years, 10 months ago
Of course, then we can keep track of others' standings. · 3 years, 11 months ago
There is a search bar of that kind in the discussion area! · 3 years, 11 months ago
No there isn't?! · 3 years, 11 months ago
https://www.codingame.com/training/expert/when-pigs-fly
## Goal
Given a set of universal truths, determine whether All, Some, or No pigs can fly.
Each of N lines contains a logical statement S in the following general form:
OBJECTA (are OBJECTB | have TRAIT | can ABILITY)
or
TRAITA are TRAITB
where parentheses contain options separated by pipes ( | ). Furthermore, OBJECTS can be expanded like so:
OBJECT [with TRAITA [and TRAITB ...]] [that can ABILITYA [and ABILITYB ...]]
where brackets [ ] denote optional text.
Below are sample statements.
(1) MICE are RODENTS
(2) MICE with WINGS are BATS
(3) MICE that can FLY are ANIMALS with SUPERPOWERS
(4) BATS are RODENTS
(5) RODENTS with FEET and NOSES that can EAT are POPSICLES
To clarify, statement (1) means that all MICE are RODENTS, but only some RODENTS are MICE. Furthermore, it cannot be assumed from statements (1) and (4) that some MICE are BATS.
The task is to determine what can be concluded about pigs flying: must it be true for all pigs, some pigs, or none?
Input
Line 1: An integer N representing the number of statements.
Next N lines: A logical statement S written as described in the prompt.
Output
A string stating what can be concluded from the input about pigs flying:
(1) All pigs can fly
(2) Some pigs can fly
(3) No pigs can fly
Constraints
2 ≤ N ≤ 15
1 ≤ Length of S ≤ 256
PIGS appears in at least one statement
FLY appears in at least one statement
Statements are composed of letters and spaces
OBJECTS, TRAITS, and ABILITIES are written in uppercase, and everything else is in lowercase.
Example
Input
3
PIGS are BACONS
BACONS are GODS
GODS can FLY
Output
All pigs can fly
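As an illustration only (not an official solution): the simple statement forms reduce to reachability in a directed graph, where `A are B` and `A can C` each add an edge, and "All pigs can fly" follows when FLY is reachable from PIGS. A minimal Python sketch under that assumption - the `with`/`that can` expansions and the distinction between the Some and No answers need real parsing work:

```python
from collections import defaultdict, deque

def pigs_fly(statements):
    # Sketch: only handles three-word statements "A are B" / "A can C".
    graph = defaultdict(set)
    for s in statements:
        words = s.split()
        if len(words) == 3 and words[1] in ("are", "can"):
            graph[words[0]].add(words[2])
    # Breadth-first search from PIGS.
    seen, queue = {"PIGS"}, deque(["PIGS"])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return "All pigs can fly" if "FLY" in seen else "No pigs can fly"

print(pigs_fly(["PIGS are BACONS", "BACONS are GODS", "GODS can FLY"]))
# -> All pigs can fly
```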
https://brilliant.org/problems/is-it-a-factor/
# Is it a factor?
Algebra Level 2
If $f(x) = x^3 + 3x^2 - 5x + 2$, is $x - 2$ a factor of $f(x)$?
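(Not part of the original page, but the factor theorem settles it in one line: $x-2$ is a factor of $f(x)$ exactly when $f(2)=0$, and here $f(2)=8+12-10+2=12\neq 0$, so it is not a factor. A one-line Python check:)

```python
# Factor theorem: (x - 2) divides f(x) exactly when f(2) == 0.
f = lambda x: x**3 + 3*x**2 - 5*x + 2
print(f(2))  # 12, which is nonzero, so (x - 2) is not a factor
```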
https://www.richardvasques.com/publications/2862-p7-anisotropic-diffusion-in-model-2-d-pebble-bed-reactor-cores
# Richard Vasques
Assistant Professor of Nuclear Engineering
# [P7] Anisotropic diffusion in model 2-D pebble-bed reactor cores
### Conference paper
Richard Vasques, Edward W. Larsen
Proceedings of International Conference on Advances in Mathematics, Computational Methods, and Reactor Physics, Saratoga Springs, NY, 2009 May
ABSTRACT: We describe an analysis of neutron transport in a modeled 2-D (transport in a plane) pebble-bed reactor (PBR) core consisting of fuel discs stochastically piled up in a square box. Specifically, we consider the question of whether the force of gravity, which plays a role in this piling, affects the neutron transport within the system. Monte Carlo codes were developed for (i) deriving realizations of the 2-D core, and (ii) performing 2-D neutron transport inside the heterogeneous core; results from these simulations are presented. In addition to numerical results, we present preliminary findings from a new theory that generalizes the atomic mix approximation for PBR problems. This theory utilizes a non-classical form of the Boltzmann equation in which the locations of the scattering centers in the system are correlated and the distance to collision is not exponentially distributed. We take the diffusion limit of this equation and derive an anisotropic diffusion equation. (The diffusion is anisotropic because the mean and mean square distances between collisions in the horizontal and vertical directions are slightly different.) We show that the results predicted using the new theory more closely agree with experiment than the atomic mix results. We conclude by discussing plans to extend the present work to 3-D problems, in which we expect the anisotropic diffusion to be more pronounced.
https://community.wolfram.com/groups/-/m/t/601091?p_p_auth=L3zdl1ga
# Remove strange line in animation
Posted 7 years ago
Hello, I am trying to run some animations from the intothecontinuum blog, and in two of them I get a little problem. Here is an example:

```mathematica
SquareLattice[t_] := Graphics[{
   Table[Rectangle[{i + t, j + t}], {i, -2, 42, 2}, {j, -2, 42, 2}],
   Table[Rectangle[{i + 1 + t, j + 1 + t}], {i, -2, 42, 2}, {j, -2, 42, 2}]},
  PlotRange -> {{0, 40}, {0, 40}}, ImageSize -> 500]
f[x_, y_] := {Log[Sqrt[x^2 + y^2]], ArcTan[x, y]}
ListAnimate[Table[
  ImageTransformation[SquareLattice[t], f[#[[1]], #[[2]]] &,
   DataRange -> {{-Pi, Pi}, {-Pi, Pi}}], {t, 0, .9, .1}]]
```

The gif in the blog seems to be fine, but if I evaluate the code, I get a thin line on the left (I'll try to attach a picture). I have already noticed this phenomenon in another animation... but why is it there? Whatever I try, I am not able to get rid of it. Does anyone have a little hint for me on how to remove it? Many thanks already!
Posted 7 years ago
Why does that line exist? Look at the plot of ArcTan[x, y], which defines how the y coordinate is obtained:

```mathematica
Plot3D[ArcTan[x, y], {x, -Pi, Pi}, {y, -Pi, Pi}]
```

There is a discontinuity where that line appears in your image. I imagine it has some numerical difficulty there.
Posted 7 years ago
I think the real answer here is to do it differently, to avoid this. Why not make parts of annuli yourself? I spent a few minutes to get this:

```mathematica
ClearAll[DashedCircle]
DashedCircle[rs : {ri_, ro_}, n_Integer, \[Theta]_: 0] :=
  Annulus[{0, 0}, rs, # + \[Theta]] & /@ Partition[Subdivide[0, 2 \[Pi], 2 n], 2]
L = {0.1, 100, 40};
M = 20;
\[Lambda] = PowerRange[#1, #2, (#2/#1)^(1/#3)] & @@ L;
\[Lambda] = Partition[\[Lambda], 2, 1];
Graphics[MapThread[DashedCircle[#1, M, #2 \[Pi]/M] &,
  {\[Lambda], Mod[Range[Length[\[Lambda]]], 2, 0]}]]
```

Giving: (image omitted). Adding the motion should not be too hard; I will leave that as an exercise... Perhaps you could just set the PlotRange slightly differently each time to zoom in...
Posted 7 years ago
Related Mathematica.SE thread: "How can this type of optical illusion be created in Mathematica?" This thread contains some other methods for creating such figures.
Posted 7 years ago
Very interesting, many thanks... but if I try, for example,

```mathematica
tile := Module[{KeyHole},
  KeyHole[base_] := Sequence[Disk[{0, 1/3} + base, 1/10],
    Rectangle[{-1/30, 1/15} + base, {1/30, 1/3} + base]];
  Image@Rasterize@Graphics[{
     Orange, Rectangle[{0, 0}, {1, 1}],
     Blue, Rectangle[{0, 0}, {1/2, 1/2}], Rectangle[{1/2, 1/2}, {1, 1}],
     Black, KeyHole[{0, 0}], KeyHole[{1/2, 1/2}], KeyHole[{1, 0}],
     White, KeyHole[{0, 1/2}], KeyHole[{1/2, 0}], KeyHole[{1, 1/2}]},
    PlotRange -> {{0, 1}, {0, 1}}]]
floortex := ImagePad[ImageRotate[#, Right], 5 First@ImageDimensions[#], "Periodic"] &[tile]
LogPolar[{x_, y_}] := {Log[Sqrt[x^2 + y^2]], ArcTan[x, y]};
ImageTransformation[floortex, LogPolar, PlotRange -> {{-1, 1}, {-1, 1}},
 DataRange -> {{-2 \[Pi], 0}, {-\[Pi], \[Pi]}}, Padding -> White]
```

I get the same problem: a visible white line on the left... Seems I need to try one of the other ways shown... (but as I read there, they're up to 9 times slower... and the Raspberry already is very slow!).
Posted 7 years ago
Many thanks, I'll play around with it when the other animation I am waiting for is ready... and post it here when I have had success, and how (but I think this will take a few days, because the current one stopped today due to insufficient memory... I started it again with a change some hours ago... the Raspberry isn't the fastest :) )
Posted 7 years ago
Perhaps set $HistoryLength = 2 so it doesn't save all the intermediate steps. In Mathematica the default is Infinity, so it saves all the output, which you can access via Out[1], Out[2]... or %, %%, %%% et cetera.
Posted 7 years ago
Thanks, but does it make a difference if I only load a small code snippet and evaluate it? Does it affect the evaluation at all? It looks to me like it is only relevant for the notebook, and useless for me in terms of memory usage during an evaluation.
Posted 7 years ago
Unless you use %, %%, %%% or Out[...] a lot, it makes no difference; it just saves only the last n outputs instead of everything!
Posted 7 years ago
This

```mathematica
ClearAll[DashedCircle]
DashedCircle[rs : {ri_, ro_}, n_Integer, \[Theta]_: 0] :=
  Annulus[{0, 0}, rs, # + \[Theta]] & /@ Partition[Subdivide[0, 2 \[Pi], 2 n], 2]
L = {0.1, 100, 40};
M = 20;
\[Lambda] = PowerRange[#1, #2, (#2/#1)^(1/#3)] & @@ L;
\[Lambda] = Partition[\[Lambda], 2, 1];
Graphics[MapThread[DashedCircle[#1, M, #2 \[Pi]/M] &,
  {\[Lambda], Mod[Range[Length[\[Lambda]]], 2, 0]}]]
```

results in lots of "Subdivide is not a Graphics primitive or directive" errors here... Regarding my first post and the reply about the discontinuity of ArcTan[x, y]: I found out that Plot3D[ArcTan[x, y], {x, -Pi, Pi}, {y, -Pi, Pi}] is not meant to be part of the actual code. What I have not found is a solution... The finished animation included in the .CDF file available there (link) seems to be fine - but if I evaluate it in Mathematica myself, I get that line...
Posted 7 years ago
Regarding the error: what version of Mathematica do you have? This needs 10.2 or higher for the Annulus command.
Posted 7 years ago
I am using 10 (or 10.1?) - the latest available for the Raspberry. Minutes ago I tried the next one from that blog, with the same result:

```mathematica
VHStripes[t_] := Graphics[{Thickness[.01],
   Line[Table[{{j + t, 22 + t}, {j + t, -2 + t}}, {j, -2, 22, 1}]],
   Line[Table[{{22 + t, i + t}, {-2 + t, i + t}}, {i, -2, 22, 1}]]},
  PlotRange -> {{-.5, 20.5}, {-.5, 20.5}}, ImageSize -> 500]
f[x_, y_] := {Log[Sqrt[x^2 + y^2]], ArcTan[x, y]}
ListAnimate[Table[
  ImageTransformation[VHStripes[t], f[#[[1]], #[[2]]] &,
   DataRange -> {{-Pi, Pi}, {-Pi, Pi}}], {t, 0, .9, .3}]]
```

Here f[x_, y_] := {Log[Sqrt[x^2 + y^2]], ArcTan[x, y]} seems to be necessary - so I assume the general problem can only be in the DataRange -> {{-Pi, Pi}, {-Pi, Pi}}, or am I missing something? The really strange thing for me is that these two animations run fine in the blog and in the .cdf file... Some other animations on the same blog show that problem too - but not these two. Have the creators faked them? (Edit: and one example in Alexey Popkov's link a few posts up results in the same problem here.) Strange stuff for a non-mathematician... However, I'll try one of the other ways shown. But it would be great if someone could explain this problem a bit more... I did not verify it, but I do not think the person who made these animations posted the answer in the link above either. And two people posted something which presumably worked for them - but not here?
Posted 7 years ago
It is because it does resampling close to the x-line where there is a jump in the plane. Using ImageTransformation it will subsample and average and all kinds of things. To avoid this, you could set:

```mathematica
VHStripes[t_] := Graphics[Style[{Thickness[.01],
    Line[Table[{{j + t, 22 + t}, {j + t, -2 + t}}, {j, -2, 22, 1}]],
    Line[Table[{{22 + t, i + t}, {-2 + t, i + t}}, {i, -2, 22, 1}]]},
   Antialiasing -> False],
  PlotRange -> {{-.5, 20.5}, {-.5, 20.5}}, ImageSize -> 500]
f[x_, y_] := {Log[Sqrt[x^2 + y^2]], ArcTan[x, y]}
ListAnimate[Table[
  ImageTransformation[Rasterize[VHStripes[t], "Image"], f[#[[1]], #[[2]]] &,
   DataRange -> {{-Pi, Pi}, {-Pi, Pi}}, Resampling -> "NearestLeft"], {t, 0, .9, .3}]]
```

which works for me.
Posted 7 years ago
Solved, that's it, you're awesome! I tried around without Rasterize and various resampling options, but it only seems to work with "NearestLeft" - with bad quality, though. But when I do it with Rasterize, I am able to set better resampling options too - without that annoying line! Very good, now I can do further experiments :)
https://acsatprep.org/answer/
### Strategy #1: Using the PIA (Plug In Answers) strategy:
• (A) is not the right answer, because when you plug in $-80$ for $x^{2}$ (which doesn't actually make any sense, since $x^{2}$ can't equal a negative number), you get $-80+y^{2}=160$. After adding 80 to both sides, you have $y^{2}=240$, so $y\approx 15.5$. If you plug that back into the second equation, you get $15.5=-3x$, which gives you $x\approx -5.17$. If $x^{2}=-80$, as we originally presumed, $x$ would not exist in the real number system. Even if you mistakenly thought you could take the square root of a negative number, you might get $x\approx \pm 8.9$ when you tried to take the square root of both sides, which is not equal to $-5.17$.
• (B) is not the right answer, because when you plug $4$ in for $x^{2}$ in the first equation, you get $4+y^{2}=160$, which gives you $y^{2}=156$. So after you take the square root of both sides, $y\approx \pm 12.5$. Looking at the second equation, if $x^{2}=4$, then after you take the square root of both sides you get $x=\pm 2$. Plugging $\pm 2$ in for $x$ in the second equation gives you $y=\mp 6$. This does not match the $y$ value we got from the first equation.
• (C) is the correct answer!!! When you plug $16$ in for $x^{2}$ in the first equation, you end up with $16+y^{2}=160$. After subtracting 16 from both sides, you are left with $y^{2}=144$. After taking the square root of both sides, $y=\pm 12$. Looking at the second equation, if $x^{2}=16$ then $x=\pm 4$. So we get $y=-3(\pm 4)$, which means $y=\mp 12$. These are the same values we got when we plugged into the first equation! Now we know (C) is the answer!
• (D) is not the right answer, because when you plug $144$ in for $x^{2}$ in the first equation, you get $144+y^{2}=160$, which gives you $y^{2}=16$. So after you take the square root of both sides, $y=\pm 4$. Looking at the second equation, if $x^{2}=144$, then after you take the square root of both sides you get $x=\pm 12$. Plugging $\pm 12$ in for $x$ in the second equation gives you $y=\mp 36$. This does not match the value we got from the first equation.
### Strategy #2: Using the substitution method
You need to find a solution that works for both equations. Since $y$ is isolated in the second equation, you can substitute it into the first equation.
• (A) is not the right answer. You might have gotten this answer if you forgot to square the $-3$ when you substituted $-3x$ in for $y$. If this was your mistake, you may have gotten $x^{2}-3x^{2}=160$. Upon combining like terms, you may have ended up with $-2x^{2}=160$. After dividing by $-2$ you would get $x^{2}=-80$.
• (B) is not the right answer. You might have gotten this answer if you solved for $x$, instead of $x^{2}$.
• (D) is not the right answer. You might have gotten this answer if you solved for $y^{2}$, instead of $x^{2}$.
• (C) IS the correct answer!! If you substituted correctly by plugging in $-3x$ for $y$ like this: $x^{2}+(-3x)^2=160$, and then correctly simplified the left-hand side like this: $x^{2}+9x^{2}=160$, leading to $10x^{2}=160$, you could then divide both sides by $10$, leaving you with the answer: $x^{2}=16$.
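For readers who want to verify the substitution symbolically, here is a tiny sketch (not part of the original answer key; it uses sympy, which is an assumption of this write-up, not something the page itself mentions):

```python
import sympy as sp

# Substitute y = -3x into x^2 + y^2 = 160 and solve for x.
x = sp.symbols('x')
solutions = sp.solve(sp.Eq(x**2 + (-3*x)**2, 160), x)   # [-4, 4]
print(solutions, [s**2 for s in solutions])             # x^2 = 16 either way
```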
https://www.gradesaver.com/textbooks/math/applied-mathematics/elementary-technical-mathematics/chapter-1-section-1-8-multiplication-and-division-of-fractions-exercise-page-49/69
## Elementary Technical Mathematics
Given: a $\frac{1}{20}$-acre plot produces 448 lb of shelled corn. We need to find the yield in bushels per acre. First convert the weight into bushels, at 56 lb of shelled corn per bushel: 448 lb $\div$ 56 lb/bu = 8 bu. Now the yield = 8 bu $\div$ $\frac{1}{20}$ acre = 8 bu $\times$ 20 = 160 bu/acre.
http://gtu-mcq.com/BE/Mechanical-Engineering/Semester-6/2161901/VYyuc2QYnF5fZ0cSVGPFVw/MCQs
# Dynamics of Machinery (2161901) MCQ
### MCQs of Balancing of Rotating Masses
MCQ No - 1
#### The balancing of rotating and reciprocating parts of an engine is necessary when it runs at
(A) slow speed
(B) medium speed
(C) high speed
(D) None of the above
C
MCQ No - 2
#### The static balancing is satisfactory for low speed rotors but with increasing speeds, dynamic balancing becomes necessary. This is because, the
(A) unbalanced couples are caused only at higher speeds
(B) unbalanced forces are not dangerous at higher speeds
(C) effects of unbalances are proportional to the square of the speed
(D) effects of unbalances are directly proportional to the speed
C
MCQ No - 3
#### A system in dynamic balance implies that
(A) the system is critically damped
(B) there is no critical speed in the system
(C) the system is also statically balanced
(D) there will be absolutely no wear of the bearings
C
MCQ No - 4
#### A disturbing mass $m_1$ attached to a rotating shaft may be balanced by a single mass $m_2$ attached in the same plane of rotation as that of $m_1$ such that
(A) $m_1 \cdot r_2 = m_2 \cdot r_1$
(B) $m_1 \cdot r_1 = m_2 \cdot r_2$
(C) $m_1 \cdot m_2 = r_1 \cdot r_2$
B
MCQ No - 5
#### For static balancing of a shaft
(A) the net dynamic force acting on the shaft is equal to zero
(B) the net couple due to the dynamic forces acting on the shaft is equal to zero
(C) both A. and B
(D) none of the above
A
http://edvinfo.com/mean-square/minimum-mean-square-error-estimation-example.html
# Minimum Mean Square Error Estimation Example
Thus Bayesian estimation provides yet another alternative to the MVUE. As an important special case, an easy-to-use recursive expression can be derived when at each m-th time instant the underlying linear observation process yields a scalar observation. A naive application of previous formulas would have us discard an old estimate and recompute a new estimate as fresh data is made available. One possibility is to abandon the full optimality requirements and seek a technique minimizing the MSE within a particular class of estimators, such as the class of linear estimators. This can happen when $y$ is a wide-sense stationary process.
https://en.wikipedia.org/wiki/Minimum_mean_square_error
Computation: standard methods like Gauss elimination can be used to solve the matrix equation for $W$. Since the matrix $C_Y$ is a symmetric positive definite matrix, $W$ can be solved twice as fast with the Cholesky decomposition. Had the random variable $x$ also been Gaussian, then the estimator would have been optimal.
The number of observations $m$ (the dimension of $y$) need not be at least as large as the number of unknowns $n$ (the dimension of $x$). Another feature of this estimate is that for $m < n$, there need be no measurement error. Note that MSE can equivalently be defined in other ways, since $\operatorname{tr}\{E\{ee^T\}\} = E\{\operatorname{tr}\{ee^T\}\}$. Lastly, this technique can handle cases where the noise is correlated.
Example 2: consider a vector $y$ formed by taking $N$ observations of a fixed but unknown scalar parameter $x$ disturbed by white Gaussian noise. When $x$ is a scalar variable, the MSE expression simplifies to $E\{(\hat{x}-x)^2\}$.
For random vectors, since the MSE for estimation of a random vector is the sum of the MSEs of the coordinates, finding the MMSE estimator of a random vector decomposes into finding the MMSE estimators of its coordinates separately. Two basic numerical approaches to obtain the MMSE estimate depend on either finding the conditional expectation $E\{x \mid y\}$ or finding the minimum of the MSE. In particular, when $C_X^{-1}=0$, corresponding to infinite variance of the a priori information concerning $x$, the result is the weighted least-squares estimator $W=(A^TC_Z^{-1}A)^{-1}A^TC_Z^{-1}$.
Here the required mean and covariance matrices are $E\{y\}=A\bar{x}$ and $C_Y=AC_XA^T+C_Z$. The initial values of $\hat{x}$ and $C_e$ are taken to be the mean and covariance of the a priori probability density function of $x$. Notice that the form of the estimator will remain unchanged, regardless of the a priori distribution of $x$, so long as the mean and covariance of these distributions are the same.
Implicit in these discussions is the assumption that the statistical properties of $x$ do not change with time; instead, the observations are made in a sequence. The expressions can be more compactly written as $K_2 = C_{e_1}A^T(AC_{e_1}A^T+C_Z)^{-1}$. Let a linear combination of observed scalar random variables $z_1$, $z_2$ and $z_3$ be used to estimate another scalar random variable. After the $(m+1)$-th observation, the direct use of the above recursive equations gives the expression for the estimate $\hat{x}_{m+1}$.
In the Bayesian setting, the term MMSE more specifically refers to estimation with a quadratic cost function. In the Bayesian approach, such prior information is captured by the prior probability density function of the parameters; and based directly on Bayes' theorem, it allows us to make better posterior estimates. Thus, the MMSE estimator is asymptotically efficient.
But then we lose all information provided by the old observation. Another approach to estimation from sequential observations is to simply update an old estimate as additional data becomes available, leading to finer estimates. It is required that the MMSE estimator be unbiased.
The form of the linear estimator does not depend on the type of the assumed underlying distribution. The matrix equation can be solved by well-known methods such as the Gauss elimination method. Thus, we can combine the two sounds as $y=w_1y_1+w_2y_2$. An alternative form of expression can be obtained by using the matrix identity $C_XA^T(AC_XA^T+C_Z)^{-1}=(C_X^{-1}+A^TC_Z^{-1}A)^{-1}A^TC_Z^{-1}$.
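To make the linear MMSE machinery above concrete, here is a minimal numpy sketch (not from the article; the model sizes and covariances are made-up assumptions) of the estimator $W = C_XA^T(AC_XA^T+C_Z)^{-1}$ for the linear observation model $y = Ax + z$, with a Monte Carlo comparison against plain least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed linear observation model y = A x + z with zero-mean x and noise z.
n, m = 2, 4                                # unknowns, observations
A = rng.standard_normal((m, n))
C_X = np.eye(n)                            # prior covariance of x
C_Z = 0.1 * np.eye(m)                      # noise covariance

# Linear MMSE estimator x_hat = W y with W = C_X A^T (A C_X A^T + C_Z)^(-1).
W = C_X @ A.T @ np.linalg.inv(A @ C_X @ A.T + C_Z)

# Monte Carlo check: the MMSE estimate should beat ordinary least squares.
x = rng.multivariate_normal(np.zeros(n), C_X, size=20000)
z = rng.multivariate_normal(np.zeros(m), C_Z, size=20000)
y = x @ A.T + z
mse_mmse = np.mean((y @ W.T - x) ** 2)
mse_ls = np.mean((y @ np.linalg.pinv(A).T - x) ** 2)
print(mse_mmse, mse_ls)                    # mse_mmse <= mse_ls
```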
https://www.jobilize.com/online/course/0-2-derivation-of-the-equations-for-a-basic-fdm-tdm-transmux-by-openst?qcr=www.quizover.com&page=5
# 0.2 Derivation of the equations for a basic fdm-tdm transmux (Page 6/10)
The computational efficiency of the transmultiplexer can therefore be traced to two key items:
1. Separation of the tuning computation into two segments, one of which (the $\left\{v\left(r,p\right)\right\}$ ) need be computed only once
2. The use of the FFT algorithm to compute the inverse DFT
The first accrues from strategic choices of the sampling and tuning frequencies, while the second depends on N being chosen to be a highly composite integer.
## The transmux as a dft-based filter bank
We have just developed an FDM-TDM transmultiplexer by first writing the equations for a single, decimated digital tuner. The equations for a bank of tuners come from then assuming that (1) they all use the same filter pulse response and (2) their center frequencies are all integer multiples of some basic frequency step. In this section, we develop an alternate view, which happens to yield the same equations. It produces a different set of insights, however, making its presentation worthwhile.
## Using the dft as a filter bank
Instead of building a bank of tuners and then constraining their tuning frequencies to be regularly spaced, suppose we start with a structure known to provide equally-spaced spectral measurements and then manipulate it to obtain the desired performance.
Consider the structure shown in [link]. The sampled input signal $x(k)$ enters a tapped delay line of length N. At every sampling instant, all N current and delayed samples are weighted by constant coefficients $w(i)$ (where $w(i)$ scales $x(k-i)$, for i between 0 and $N-1$), and then applied to an inverse discrete Fourier transform. (Whether or not it is implemented with an FFT is irrelevant at this point. Also, we happen to use the inverse DFT to produce a result consistent with that found in the preceding subsection, but the forward DFT could also be used.) The complete N-point DFT is computed for every value of k and produces N outputs. The output sample stream from the m-th bin of the DFT is denoted as $X_m(k)$.
Since DFTs are often associated with spectrum analysis, it may seem counterintuitive to consider the output bins as time samples. It is strictly legal from an analytical point of view, however, since the DFT is merely an N-input, N-output, memoryless, linear transformation. Even so, the relationship of this scheme and digital spectrum analysis will be commented upon later. We continue by first examining the path from the input to a specific output bin, the m-th one, say. For every input sample $x\left(k\right)$ there is an output sample ${X}_{m}\left(k\right)$ . By inspection we can write an equation relating the input and chosen output:
$X_m(k)=\sum_{p=0}^{N-1}x(k-p)\,w(p)\,e^{j2\pi mp/N},$
the m-th bin of an N-point DFT of the weighted, delayed data. We can look at this equation another way by defining ${\overline{w}}_{m}\left(p\right)$ by the expression
${\overline{w}}_{m}(p)\equiv w(p)\cdot e^{j2\pi mp/N}$
and observing that [link] can be written as
$X_m(k)=\sum_{p=0}^{N-1}x(k-p)\cdot{\overline{w}}_{m}(p).$
From this equation it is clear ${X}_{m}\left(k\right)$ is the output of the FIR digital filter that has $x\left(k\right)$ as its input and ${\overline{w}}_{m}\left(p\right)$ as its pulse response. Since the pulse response does not depend on the time index k , the filtering is linear and shift-invariant. For such a filter we can compute its transfer function, using the expression
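As a numerical sanity check of this equivalence (a sketch, not from the original text; the Hann window is an arbitrary assumed choice for $w(p)$), the m-th bin of the inverse DFT of the weighted delay line coincides, at every sample instant, with the output of an FIR filter whose taps are the modulated window $\overline{w}_m(p)$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 16, 3
w = np.hanning(N)                                # assumed weights w(p)
x = rng.standard_normal(200)                     # arbitrary input signal

p = np.arange(N)
w_bar = w * np.exp(1j * 2 * np.pi * m * p / N)   # modulated taps w_bar_m(p)

for k in range(N - 1, 60):
    v = x[k - p] * w                             # weighted delay line x(k-p) w(p)
    bin_m = N * np.fft.ifft(v)[m]                # m-th bin of the (scaled) inverse DFT
    fir_m = np.sum(x[k - p] * w_bar)             # FIR filter with pulse response w_bar_m
    assert np.isclose(bin_m, fir_m)
print("DFT-bank bin equals modulated-FIR output at every instant tested")
```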
https://hearthisidea.com/episodes/neel/
# Neel Nanda on Effective Planning and Building Habits that Stick
## March 12, 2020
Neel Nanda is a final year maths undergraduate at the University of Cambridge. He topped his year twice, and is a double gold medalist in the International Mathematical Olympiad. Neel also teaches regularly – from revision lectures and notes to a recent ‘public rationality’ workshop series featuring sessions on habits and planning. Neel is also an active member in rationalist and effective altruism communities.
In the first half of our conversation, Neel discusses how to build and think about useful habits, and introduces the ‘trigger action pattern’ framework. In the second half, we learn about ‘effective planning’: anticipating systematic points of failure, and adjusting course in advance – “the art of looking at things you want to happen and both making sure they happen and making sure they happen well.” Why, for instance, are we normally so overoptimistic in forecasting the time to complete a project? We also consider illustrations from failed construction projects, experimental psychology, and insect behaviour.
We hope this episode and write-up will be useful for anyone enthusiastic about improving their productivity and ‘learning how to learn’ over the long-term. These are not (unfortunately) instant, effort-free ‘hacks’, but evidence-based methods which pay off in the long-run. We haven’t done a self-improvement episode before, but we hope you find it useful!
For those interested in implementing these ideas right away, Neel has written some teaching notes for planning and habit-building with a practical focus. Also, Neel has created an anonymous feedback form for this episode, and he would love to hear any of your thoughts!
We began by asking Neel what it means to actually reflect on what our goals are, and how best to achieve them. Although the word ‘productivity’ is often used in the context of work and studying, the concepts gleaned from learning about productivity can be put to use in the rest of our lives: “basically everything we ever do is some form of achieving goals.” For Neel, productivity is just one example of a goal, and he applies an attitude of continual learning whenever he has a goal that he wants to achieve effectively:
“I have things that I value. Every day I’m taking actions to bring me closer or further from these. If I can find ways of taking the actions that get me more of what I want, I’m kind of an idiot for not doing that... I definitely feel much, much happier with where my life is than a few years ago, in large part because of this kind of thinking.”
Even when it comes to work or studying, most of us are biased against exploration. We have our preferred routines, methods, and tools; and too rarely experiment with alternatives. After all, experimenting in this way carries obvious costs: it takes time and effort to switch to a new way of doing things, and there is always a risk or even a likelihood that this new way turns out to be no improvement on your familiar way. Yet, this kind of experimentation is often dramatically undervalued:
“If you’re going to spend 1000 hours over your [university] career learning, finding a better system that can get you 1% better is worth 10 hours of additional productivity. And so it would easily be worth 5 hours of trying out a new system.”
Therefore, even if trying out a new work routine or learning a new study tool carries a chance of failure, it is very often totally worth trying. In the field of computer science, this is known as the explore-exploit tradeoff: the perennial dilemma between sticking with familiar, ‘tried and tested’ options on the one hand; and gambling with your time or resources for the chance to learn something even better. So, let’s look at two ways you can begin to improve in this way.
## Building Good Habits
In order to understand what is meant by ‘habit’, consider the Sphex wasp. Some species of this genus prepare nests for their eggs, which they stock with insect prey for the hatched baby wasps to feed on. The wasp will drop a paralysed insect near the opening of the nest, and will then enter the nest to inspect it. Normally, it will emerge from the nest after the inspection and carry the insect inside. But during this inspection, an experimenter can move the insect a few inches away from the opening. When the wasp sees this, it locates the insect and carries it back to the opening. Then something strange happens: the wasp inspects the nest again, before re-emerging to carry the insect inside. On some accounts, this can be repeated endlessly. Although the wasp behaves as if it were thinking through its behaviour, their vulnerability to being ‘short-circuited’ tells us that this behaviour is really habitual in the strongest sense: it is the product of a near-enough hard-coded if/then pattern. You can watch a video of this here. Yet, absent meddling researchers, this simple algorithm is fantastically useful for the wasps. In that sense, it might be mechanically simple – but it is not dumb or stupid. Indeed, by combining enough of these basic reflexes, intelligent behaviour really does begin to emerge.
Although humans are rarely so easily blindsided by such simple tricks, our minds do nonetheless also exhibit very similar kinds of if/then pattern. For instance, how many of us have absent-mindedly browsed our go-to social media app, closed the app out of boredom, and then reflexively re-opened the app to read the exact same content; before catching ourselves a few seconds later? Or instinctively pull out our phone when we feel a notification buzz, even if we know what the notification was? Or reply to the waiter’s “enjoy your meal” with the instantly regrettable “you too”?
Habits, then, can be characterised by a few features. They are triggered by some concrete event – arriving at your nest with an insect, or feeling your phone buzz. The trigger is immediately followed by an action: the delay between trigger and action rarely takes longer than a few seconds. And habitual actions are reflexive – they require minimal (or no) conscious deliberation. They are not effortful, and in fact often take effort to override. They are also neither inherently good nor inherently bad: at any one time, we can name bad habits we’re trying to kick, and desirable habits we’re trying to learn. Yet changing habits is often far more difficult than we expect, so bad habits linger and good habits go unlearned after a few days or weeks of effort. No wonder – almost all your habits were learned accidentally. Lastly, habits in this sense are distinct from ‘routines’ or ‘systems’ like going to the gym once a week – which are neither frequent nor specific enough to become genuinely reflexive and automatic.
### Trigger Action Patterns
However, we can utilise this understanding to more effectively change our habits – by inventing our own trigger-action patterns, or ‘TAPs’. Neel suggests an algorithm for breaking this process down into four steps:
1. Choose the habit you want to learn.
Since habits are bite-sized, concrete, and immediate; vague intentions like ‘work out more’ or ‘study harder’ won’t do. What small habitual actions can you translate your intention into? For instance, Neel gives the example of wanting to be more active – which he translated into a set of more concrete goals. One of these goals was ‘take the stairs when I walk into the office’. A good litmus test for a realistic, learnable habit is to imagine what you would do if somebody tapped you on the shoulder to remind you to do it just before you normally forget to do so. In Neel’s case, although he took the elevator reflexively, he would take the stairs without much fuss if only he was reminded to at the right moment. If Neel chose to cultivate the habit of sprinting up the stairs to the top floor, he would probably fail: a habit must fall within the bounds of gentle encouragement and far short of requiring willpower. Nobody takes the elevator just because they forgot to sprint full-tilt up the stairs!
2. Identify a trigger.
A bad trigger is easy to miss, too rare or unreliable, or comes too late. For instance: if I want to turn my phone off earlier before I go to sleep, a bad trigger would be ‘notice I’ve spent too long on my phone’. An ideal trigger is visceral, regular, easy to notice, and familiar. In Neel’s example, he chose ‘opening the office door’. A test for a good trigger is to close your eyes and visualise it: can you imagine sounds, feelings, or a clear image? In Neel’s case, the distinctively cold, metallic feeling of the door handle was enough to make sure the trigger gets noticed.
3. Pair your trigger with an action.
Just like the trigger, your action should be concrete, specific, brief. And remember that habits qua habits should require minimal willpower. In Neel’s case, he did not choose ‘take the stairs’ as his action. This is because an early morning bleary-eyed Neel walking into the office is likely to override his long-term intention to build a habit of taking the stairs with a more immediate desire to take the lift. Instead, Neel chose ‘look at the stairs’. This might sound weirdly noncommittal – what good is looking at the stairs? But remember how a habit is often something you would do if only you were reminded at the crucial moment: and looking at the stairs serves as that reminder. If your goal is to shrug off a bad habit, consider the trigger that normally causes the habit, and try pairing it with the action of just noticing that ‘now is when I normally do my bad habit’. You might even say it out loud – physical triggers and actions are harder to forget or shrug off than entirely mental ones. The idea here is to use triggers to disengage your ‘autopilot’; opening a brief window to remember what you ought to be doing!
4. Practice!
Having established your trigger–action pair, it is left to begin the process of ingraining the habit. Suppose you want to kick your habit of scrolling through your phone: you choose the trigger of ‘my head hits the pillow’ and the action of ‘I turn off my phone and pick up a book’. A good TAP, to be sure, but you only get one chance to practice it per day. What if you wanted to accelerate the learning process? The solution is to artificially bring about the trigger: literally rehearse getting into bed and lying down. You lie back, your head hits the pillow, you roll over and grab a book. Rinse, repeat. 10 times, Neel suggests. This is going to feel weird and unnatural; but treat it like any other kind of practice. After all, nobody learns a killer forehand or masters the violin just by forming an intention to do so – no matter how strong. Just like any other kind of practice, the habit begins to embed itself through being repeatedly exercised. Once the habit has initially been locked-in through deliberate practice, you might even discover a benign spiral – the more automatic a habit becomes, the less effortful it becomes, the more likely you are to repeat it.
### Chaining and Compounding
You might be wondering what good are such small, incremental habits in achieving the really big changes that we want to make. We started by considering big general goals like ‘get fit’, ‘learn to code’, ‘ace my exams’, but we ended up with bite-size actions like ‘look at the stairs’. But the beauty of TAPs is that one action can produce a trigger for another habit, and so on. In this way, habits can be ‘chained’ or ‘stacked’ indefinitely. This is particularly applicable to morning routines, where each stage (getting out of bed, brushing teeth, etc) provides clear triggers for successive actions.
Moreover, although no single exercising of a habitual action makes a significant change, sticking to a habit over the long-term can lead to exponential returns. Suppose you invest the money you normally spend on take-out coffee every day, or smoke one less cigarette/drink/pizza slice per day, or commit to learning a skill for a few minutes each day. If the returns come in small fractional increments, then at first they might be dispiritingly small; before snowballing into noticeable changes. Consider these two self-explanatory graphs from James Clear’s excellent ‘Atomic Habits’.
## Effective Planning
The psychologists Amos Tversky and Daniel Kahneman are famous for having introduced the distinction between ‘System 1’ and ‘System 2’: a way of dividing human minds into separate but communicating modules that illuminates a range of otherwise puzzling phenomena. System 1 is fast, reflexive, and unconscious; while System 2 is slow, deliberative, and conscious. We exercise System 1 when we notice where a sound is coming from, understand sentence meaning from letters on a page, or (crucially) perform a habitual action. We exercise System 2 when we focus our attention on a task that does not come automatically: when we solve tricky maths questions, retrieve a recondite nugget of information from our memory stores, or (crucially) make plans.
The goal of habit-forming is to shift the burden of some behaviour away from System 2 (effortful, unsustainable, easily forgotten) over to System 1 (automatic, unconscious). Real habitual behaviour is performed by our System 1s, which is great, because it frees up attention to deliberate and strain over more complicated things. If habits belong to System 1, then planning belongs to System 2.
Most of us make plans before undertaking big projects, particularly if they involve other people. Plans help coordinate all those people’s efforts, and can provide direction and timing cues (‘We’re behind schedule! Better speed up!’). But Neel stresses that planning needn’t exclusively apply to large-scale projects: it is just as useful to plan out your approach to an essay or assignment, your day of studying, or even the next few hours of work. Even self-talk like “I should get round to doing that some time” is an example of a plan – just a bad one. As Neel writes, “one of the main benefits of thinking about planning is noticing when you're procrastinating about something and will never really get round to doing it, and feeling a prompt to make an actual plan.”
Wherever you look, people suck at planning. Even critical, multi-million dollar projects run over time and over budget like clockwork. Consider New York’s Second Avenue Subway. The line, a short stretch of subway tunnel running under Second Avenue on the East Side of Manhattan, was originally proposed in 1920. Overground lines were demolished in anticipation of the new route in 1942 and 1955. Construction on the line itself finally began in 1972, with governor Nelson Rockefeller holding a victorious ground-breaking ceremony. But a few years later, the plan was halted again by a fiscal crisis, after only a few short sections had been completed. At this point, the subway had become its own punchline: New Yorkers would make promises for “once the Second Avenue Subway is built.” New plans were drawn up in 2004 to finally complete the project over four phases. Phase 1 was estimated to cost $3.8 billion, but eventually ran $500 million over budget, in part because the poor rock quality forced the constructors to literally freeze close to two blocks of earth. Phase 2 was estimated to cost $3.4 billion, but that too ratcheted up to around $6 billion. The second phase is expected to open between 2027 and 2029, and I couldn’t even find any information about the third and fourth phases. Sure, running more than half a century and more than a billion dollars over expectations is particularly embarrassing, but were you shocked? On the whole, we’re desensitised to megaprojects running over budget and past successive deadlines: the real surprises are big projects that finish on time and within budget.
Of course, mega-projects don’t just fall foul of wild over-optimism on account of bad planning. There are clear incentives for warping facts, hiding complications, and exaggerating budgets and timeframes; particularly when multiple construction firms compete for a bid or when the projects underpin the public image of a politician or political agenda.
Yet, small-scale plans go wrong just as often as big institutional projects. If you are a student, you will be painfully familiar with the feeling of setting yourself the best part of a week to write an essay, only to find yourself pulling an all-nighter and handing in a botched job hours after the deadline (even though the same thing happened the previous week, and the week before that…). Despite knowing that our standard plan normally fails, we endlessly repeat it. This time always feels like the time you’ll make it work, the time you’ll finally stop procrastinating and hand in that essay well before the deadline.
Hofstadter’s law: It always takes longer than you expect, even when you take into account Hofstadter's Law.
These are all instances of the ‘planning fallacy’, where our forecasts about the time needed to complete a task to satisfaction are reliably biased in the direction of over-optimism. Yet, human beings are capable of failing in so many more ways besides the planning fallacy. Neel gives one example: imagine Anna has a technical job interview tomorrow. She cares about preparing as well as possible, so she spends all day preparing for possible questions. In the evening, she’s worried the day’s work wasn’t enough. So she works into the night and becomes increasingly stressed. During the interview, she messes up. Not because of gaps in her technical knowledge, but because her frazzled brain makes trivial errors that would have been prevented by a good night’s sleep. In this (possibly familiar) example, the mistake was not one of timing. Rather, Anna over-optimised for a conspicuously desirable thing (time spent preparing) while neglecting less obvious but equally useful activities like getting sleep, or taking breaks. In this sense, even deliberately choosing not to work has a positive place in effective planning. First, though, we need answers: why does this all happen? Why are we normally so bad at adapting our plans to past failures? And why this amnesic optimism?
For Neel, a useful way to understand planning failures is with the concepts of the ‘inside view’ and the ‘outside view’ – another idea from the prolific pair Kahneman and Tversky. The inside view (or internal perspective) is the intuitive one. That’s why we spontaneously adopt it when we start to plan out some particular new project in our imagination. We help ourselves to the information immediately available to us: are there currently any obvious impediments to my finishing this on time? Nope. Do I feel motivated right now to get it done? Absolutely. And do I have enough time from now until the deadline? More than enough. In adopting the inside view, we ignore the ‘unknown unknowns’ that befall most projects.
By contrast, the outside view (or external perspective) gathers information about all the other instances of this kind of project, in the same way an outside observer would approach the task of forecasting your chances of success with no knowledge of what’s in your head. It estimates the base rate of failure or success based on a suitable ‘reference class’ – the other times I or people similar to me have attempted a similar thing with similar constraints. The base rate is just the ‘prior’ probability of success: the proportion of similar projects that ended up successful. The kind of information this perspective relies on is not particular (special to this case) but distributional (spread across many past cases). Suppose I’m writing an essay, and give myself 5 days to finish it. I like the topic, I feel fired up to get it done, and 5 days seems like more than enough. But when I consider my track record, I remember that I end up handing in late pretty much every time. The inside view tells me that this time is special: this time I’ll just work a bit harder and learn from my mistakes. The outside view reminds me that this is what I thought every other time, too:
“Remember that a lot of the previous things that I screwed up were also things that I thought were special and I thought I was going to take more seriously.”
The inside view typically falls foul of an ‘optimism bias’, focusing on the evidence that we are likely to succeed on time and ignoring the lessons from previous instances. The chance of success extrapolated from the outside view tends to be far more accurate; but adopting the outside view is also less natural, less automatic, and often less comfortable. Yet, we are all capable of using it accurately, because although we underestimate completion times for our own plans, we do not do so for other people’s plans: presumably because we are less prone to adopting the inside view when thinking about others.
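As a toy illustration, here is a minimal sketch of what the outside view amounts to in code. The track record is invented; the point is that the base rate comes from counting past outcomes in the reference class, not from how fired up you feel about the current attempt:

```python
# Outside-view estimate: the prior probability of success is just the
# fraction of similar past attempts that succeeded.
# The track record below is hypothetical.
past_essays_on_time = [False, False, True, False, False, True]

base_rate = sum(past_essays_on_time) / len(past_essays_on_time)
print(f"Outside-view P(on time) = {base_rate:.0%}")  # 33%, however motivated I feel
```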
How can we get better at planning, and make use of the outside view?
## An Algorithm for Better Plans
We all know what it’s like to think “I should get round to doing $x$ at some point”, knowing we’re unlikely ever to do so. Therefore, Neel points out that the first failure mode is often failing to make a plan in the first place. So the zeroth step in planning more effectively is to decide to do so. Even if you do spend a few minutes to plan out your time, it is too easy to almost instantly forget about it:
“Most plans fail at the point where you don’t even think about them and when you don’t take any action to make them happen.”
The next useful concept for better planning is the idea of an ‘inner simulator’: our ability to imagine hypothetical scenarios. Our ‘System 1’ is often amazingly good at doing this without too much conscious effort. For instance, Neel asks us to imagine throwing a bucket of water over a friend as they’re sitting at their desk. Thinking of a specific person, we can immediately bring to mind their reaction, the look on their face, and how the situation plays out over time. We will make use of this in thinking about a more effective way to make plans.
The first stage is to come up with an initial plan: it can be optimistic, because we will be using our inner simulators to refine it. For instance, I need to write 12 pieces of coursework in just under two months for my degree. My initial plan is to punch in 6 hours a day, every day, for the next two months. That’ll give me $\approx \frac{60}{12}=5$ days per essay.
The second stage is to fast-forward in your imagination to the last moment of your plan, and imagine that things went wrong. Ask yourself: are you surprised? We can put our inner simulators to good use here: make an effort to picture where you’ll be sitting, what you’ll be seeing, the kind of emotions you’ll be feeling. This probably doesn’t sound fun. But remember that we are typically most vulnerable to failing to predict failure precisely when we flinch away from imagining the possibility. In my case, it’s so much nicer to imagine a seamlessly productive two months than the more realistic ebbs and bumps in motivation and procrastination. Using our inner simulator is an effective way to gauge the feasibility of a plan – shifting it from a foggy future event to a subjectively concrete outcome.
Now that you’re imagining failure, ask yourself what happened. Imagine you’re using hindsight (pre-hindsight?) from your perspective in the imagined future, looking back over the next few hours, days or weeks. In my case, I only began to feel real time pressure in the last fortnight; by which point I knew I had to rush some of the essays I had put off, to a standard I wouldn’t be happy with. After a couple of all-nighters, and feeling disappointed with myself, I handed in a set of sub-par essays.
The next step is the constructive part: now adjust your plan so that scenario is less likely to occur. This is likely to feel hard at first. Neel suggests setting a five-minute timer and trying to generate new ideas until the timer ends – a surprising proportion of apparently intractable problems can be solved with five minutes of focused thought. For instance, you might build in a social accountability system: tell a friend about your plan, and ask them to hold your feet to the fire if you fall behind schedule. In my case, I’m going to give myself the weekends to focus on other hobbies, and I’m going to make sure every essay at least gets written before I worry about refining any of them. Now go back to the previous step. Imagine you’ve reached the end of this plan, and you’ve failed again. Are you surprised? If, in your imagination, you honestly do feel surprised that such a well-laid plan could have failed, then congratulations! You’ve created a watertight plan. But suppose you only feel a shade more surprised. In this case, rinse and repeat: update your plan, and imagine the failure mode again, until you’ve hit on a plan which causes genuine surprise when you try to imagine how it failed.
This might sound needlessly uncomfortable, but it is far more valuable to imagine failure in advance rather than success, because you can adjust course before it’s too late:
“If I notice right now my plan’s going to go wrong, that’s amazing, because right now that’s cost nothing, I can do something differently to prepare for this. Noticing something in advance isn’t a failure; it’s a success.”
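Putting the stages together, the whole technique is a loop: imagine failure, diagnose it, patch the plan, and repeat until imagined failure would genuinely surprise you. Here is a runnable rendition in Python; the “inner simulator” is you, answering prompts at the terminal, so this is a sketch of the control flow rather than anything Neel prescribes:

```python
# Iterate on a plan until imagining its failure is genuinely surprising.
def make_robust_plan(plan: str) -> str:
    while True:
        print(f"\nPlan: {plan}")
        print("Fast-forward to the deadline and imagine the plan failed.")
        surprised = input("Are you genuinely surprised? (y/n) ").lower() == "y"
        if surprised:
            return plan  # watertight enough: imagined failure is surprising
        cause = input("Pre-hindsight: what went wrong? ")
        fix = input(f"How will you guard against '{cause}'? ")
        plan += f" + {fix}"  # patch the plan and go around again

if __name__ == "__main__":
    print(make_robust_plan("write 12 essays in 60 days, 6 hours per day"))
```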
Gmail recently introduced a button that allows you to undo sending an email within a few seconds. This is great: we’ve all sent an email only to notice immediately that we forgot to attach the file. Imagining and adjusting for unsurprising failures is like an undo button for plans: you get to notice the problem just before it’s too late. Neel suggests that building a habit of effective planning and imagining failure modes can pay dividends in the long run. Next time you face an opportunity to make a plan before starting a project, notice the voice in your head that says “oh, I’ll get round to that” or “I have a good feeling about this one” and remember the uncomfortable lessons of the outside view.
“This kind of thing should be really exciting. If you can make better decisions solely in your head by asking the right questions, that’s an amazing skill. This is something I want to understand and I want to practice.”
There is a nice interplay between effective planning and building good habits: you can build a mental habit of imagining failure modes for your plans, and you can also make plans for building habits – would I be surprised if I fail to see through my plan to make a habit of doing 50 pushups every time I get out of bed? Also, you might consider making a trigger of noticing when you say to yourself, “I should do that some time”; paired with the action of asking yourself, “would I be surprised if I didn’t?”. Can you think of any triggers that would work well for you to identify the times when you could be planning better?
If you feel like you want to try this planning technique, imagine yourself in a couple of weeks’ time: would you be surprised if you never got round to it? If so, is there a reason you can’t try making a plan this way right now? Failing that, is there anything you could do right now to ensure that you try it at a specific later time?
The moment you decide you want to do something in the future, like using this technique, you've started to make a plan. The question is – are you going to make it a good one?
## Implementation Intentions
When you think about how to update your plans to make them more likely to succeed, there’s a good chance that motivation came to mind. You might think ‘last time, when I fell short of my plan, it was because I just didn’t care enough’.
Or maybe you want to instil a habit through sheer force of will: ‘sure, my last attempt at making a routine of exercise trailed off, but this time I really want to get fit!’.
A growing body of research shows that sheer motivation is not enough. In one study, researchers wanted to examine what improved the likelihood of building better exercise habits. Subjects were divided into three groups. The first group were just asked to track how often they exercised. The second group were additionally asked to motivate themselves by learning about the benefits of exercise, and the health risks of failing to exercise. The third group received the same ‘motivational’ material as the second, but were also asked to make a plan detailing exactly when and where they would exercise. They completed the following sentence:
“During the next week, I will partake in at least 20 minutes of vigorous exercise on [DAY] at [TIME] in [PLACE].”
The researchers found that the ‘motivational’ material had no meaningful effect on building an exercise routine: the second group almost exactly matched the first, with 35%-38% exercising at least once per week. However, nearly 91% of the third group exercised at least once per week!
The name given to what the third group did is an implementation intention: a specific (when and where) plan. Implementation intentions work best when a specific response is paired with a specific situation – evidence which underpins the effectiveness of TAPs in building habits.
Other studies show that asking participants to form implementation intentions can improve the likelihood of behaviours ranging from voting in a presidential election to taking vitamins, obtaining a mammography, or changing eating habits. The effectiveness of implementation intentions links to the earlier point that good plans and habits are robust against ‘off days’. They survive waning motivation after the initial excitement, and become your ‘default’, ‘autopilot’ behaviour.
Neel’s suggestions for building habits and effective planning are useful frameworks, but they won’t tell you how to adapt them for your personal circumstances. Neither are they anything like the final word on productivity (there is, after all, a cottage industry of productivity books and workshops, each claiming to be the final word on productivity). For instance, the ‘adapt your plan’ stage of the effective planning algorithm doesn’t tell us anything about how to improve your plan. Here are some potentially useful extra details which Neel discussed in our conversation:
• If you’re trying to learn lots of structured information (e.g. for an exam), spaced repetition is an extremely well-evidenced technique which might improve on your existing approach. Computer and mobile apps exist which automate the process. Neel recommends ‘Anki’, a cross-platform flashcard program. I might also put in a shout for ‘Mnemosyne’. (A toy sketch of the scheduling idea follows this list.)
• Tracking your progress in sticking to habits can provide a source of motivation and feedback, particularly if you are practising more than one. Many apps exist for just this purpose. I use and recommend Loop Habit Tracker for Android. Habit Bull and Habitify for iOS also look promising.
• When planning towards a goal, it’s easy to get hung up on proxies for the goal, and optimise them at the expense of the goal itself. For instance, if you want to ace an exam, you might start to really care about how much time you spend studying or how productive you feel. But it’s possible to become so hung up on the hours you clock in front of your books that you work yourself into a tired wreck poring through textbooks for the sake of it. Remember that the goal was only ever to do well in your exam, and alternative, non-obvious actions like taking longer and more frequent breaks are often the best way to achieve that.
• It’s equally easy to forget the overarching significance of your work or study in the heat of the moment. For instance, if you’re a student, there’s a good chance that you chose your subject because you find at least some aspects of it genuinely interesting; or because it opens doors to a career you actually look forward to. Try to replace feelings of guilt (I’m so unmotivated!) with reminders of why you’re doing the thing you’re doing. Neel says, “it’s much harder to motivate myself to care about things when I lose sight of why it matters to me.”
• The Pareto principle tends to apply to learning: 80% of the really useful information often comes from 20% of the total information you digest. Much of what you hear in a lecture or read in a textbook can be safely discarded: you might know it already, it might be there for interest, or just to fill time or pages. The rest can be distilled into key points, and Neel suggests your goal should be to identify and retain those points. One useful way to discover for yourself the information that’s doing the heavy lifting is to imagine how you might explain something you’ve just learned to a friend (or actually explain it out loud to a rubber duck!).
• Time tracking software can be useful for recording periods of work and keeping to a schedule. Neel recommends an app called toggl for time tracking. Also, distraction-blocking / device-locking apps can help beat phone-based procrastination. There are lots to choose from: Offtime, SPACE, Forest, and QualityTime.
• Try sticking post-it notes in conspicuous places to remind you of habits.
• A few of Neel’s habit ideas:
• Someone explains something to you ⟶ repeat it back in your own words and check you understood.
• Take the first bite of a meal ⟶ savour the food.
• When you feel gratitude towards a friend ⟶ text them something nice!
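On the spaced-repetition point above, here is a toy scheduler, loosely in the spirit of the SM-2 family of algorithms that Anki descends from. It is a simplification under stated assumptions: real implementations also adapt a per-card “ease” factor, and the 2.5 multiplier is just the conventional starting value:

```python
# Toy spaced-repetition scheduling: successful recalls push the next
# review exponentially further out; a lapse resets the card to one day.
def next_interval(days: int, recalled: bool, ease: float = 2.5) -> int:
    if not recalled:
        return 1                     # forgot: review again tomorrow
    return max(2, int(days * ease))  # remembered: wait ~2.5x longer

interval = 1
for review, recalled in enumerate([True, True, False, True], start=1):
    interval = next_interval(interval, recalled)
    print(f"review {review}: next review in {interval} day(s)")
# review 1: 2 days, review 2: 5 days, review 3: 1 day, review 4: 2 days
```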
guska00013 | 03.01.12, 00:03 | High school / English (essay)
# My favourite place in the world (in English)

My favourite place is Italy. It’s a very beautiful country and it has numerous advantages. I like the amazing landscapes most of all. If I could live in this country, I would like to tour all of its interesting places. The most marvellous are those which show us the romantic stories of Italy, such as Romeo and Juliet: I would visit every place where the story began, and also the ruins of Rome. I could even live there, because I like Italian cuisine so much and I want to taste the delicacies of north-western Italy. My dream is to live in Sicily, because it is the largest island in the Mediterranean. I love the beach, so I could spend whole days there. The fresh air and amazing landscapes are wonderful, so living there can even improve your health. Despite one disadvantage, namely that it gets very crowded when the tourists arrive, it is still the place that I like the most. Another advantage is that I would never get bored, because there are many entertainments and other things that would guarantee a lot of fun. I hope that if I cannot move there, I will at least go for a long holiday. It would be really good to wake up in the morning and see that beautiful view through the window. In a place like that I could relax, admire the landscapes and listen to the sounds of nature. It would make me feel fully comfortable, and even solving problems would be easier in my favourite place.

I would not like to live in a big built-up city. Although places like this are close to the centre and the shops, I can see more disadvantages than advantages. There is no nature, the air is polluted, everything happens in a rush and there is chaos everywhere. I cannot imagine living in such a place. I would not like my view from the window to be of blocks of flats and other buildings. In the future, when I come back from work, I will want to relax, admire the landscape, walk and listen to the sounds of nature; but in the city I can hear only the sound of rushing cars, and I am sure that would not help me relax. Because of the lack of nature and the pollution I would feel tired, and the lack of space would make me upset. This is a place in which I would feel uncomfortable.
# Quantifying the alignment of graph and features in deep learning
Type: Publication
arXiv preprint arXiv:1905.12921
# Measuring the Universe with galaxy redshift surveys
## Abstract
Galaxy redshift surveys are one of the pillars of the current standard cosmological model and remain a key tool in the experimental effort to understand the origin of cosmic acceleration. To this end, the next generation of surveys aims at achieving sub-percent precision in the measurement of the equation of state of dark energy and the growth rate of structure. This however requires comparable control over systematic errors, stressing the need for improved modelling methods. In this contribution we review at an introductory level some highlights of the work done in this direction by the Darklight project.¹ Supported by an ERC Advanced Grant, Darklight developed novel techniques for clustering analysis, which were tested on numerical simulations before being applied to galaxy data, in particular those of the recently completed VIPERS redshift survey. We focus in particular on: (a) advances in estimating the growth rate of structure from redshift-space distortions; (b) parameter estimation through global Bayesian reconstruction of the density field from survey data; (c) the impact of massive neutrinos on large-scale structure measurements. Overall, Darklight has contributed to paving the way for forthcoming high-precision experiments, such as Euclid, the next ESA cosmological mission.²
cosmology, surveys, large-scale structure, dark energy
## 1 Introduction
A major achievement in cosmology over the 20th century has been the detailed reconstruction of the large-scale structure of the Universe around us. Started in the 1970s, these studies developed over the following decades into the industry of redshift surveys, beautifully exemplified by the Sloan Digital Sky Survey (SDSS) in its various incarnations (e.g. [1]). These maps have covered in detail our “local” Universe, and only recently have we started exploring comparable volumes at larger redshifts, where the evolution of galaxies and structure over time can be detected (see e.g. [2]). Fig. 1 shows a montage using data from some of these surveys, providing a visual impression of the now well-established sponge-like topology of the large-scale galaxy distribution and how it stretches back into the younger Universe.
In addition to their purely cartographic beauty, these maps provide a quantitative test of the theories of structure formation and of the Universe's composition. Statistical measurements of the observed galaxy distribution represent in fact one of the experimental pillars upon which the current “standard” model of cosmology is built. Let us define the matter over-density (or fluctuation) field with respect to the mean density as $\delta(\mathbf{x}) = \rho(\mathbf{x})/\bar{\rho} - 1$; this can be described in terms of Fourier harmonic components as

$$\delta(\mathbf{k}) = \int_V \delta(\mathbf{x})\, e^{-i\mathbf{k}\cdot\mathbf{x}}\, d^3x, \qquad (1)$$

where $V$ is the volume considered. The power spectrum is then defined by the variance of the Fourier modes:

$$\langle \delta(\mathbf{k})\, \delta^*(\mathbf{k}') \rangle = (2\pi)^3 P(k)\, \delta_D(\mathbf{k} - \mathbf{k}'). \qquad (2)$$

The observed number density of galaxies is related to the matter fluctuation field through the bias parameter $b$ by

$$n_g = \bar{n}(1 + b\delta), \qquad (3)$$

which corresponds to assuming that $\delta_g = b\,\delta$. This linear and scale-independent relation provides an accurate description of galaxy clustering at large scales, although it breaks down in the quasi-linear regime below scales of [8]. In general, $b$ depends on galaxy properties, as we shall discuss in more detail in Sect. 3. From the hypothesis of linear bias, it follows that $P_{gg}(k) = b^2 P_{\delta\delta}(k)$, where $P_{gg}(k)$ is the observed galaxy–galaxy power spectrum. This connection allows us to use measurements of $P_{gg}(k)$ to constrain the values of cosmological parameters that regulate the shape of $P_{\delta\delta}(k)$.
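To make the estimator concrete, here is a minimal numpy sketch of how a monopole power spectrum like those in Fig. 2 is measured in practice: grid the density contrast, Fourier transform it, and average $|\delta(\mathbf{k})|^2/V$ in spherical shells of $|\mathbf{k}|$. The Gaussian white-noise field stands in for a real galaxy catalogue, and the box size and binning are arbitrary choices:

```python
import numpy as np

# Toy P(k) estimator: FFT a gridded density-contrast field and average
# |delta_k|^2 / V in shells of k. White noise stands in for real data.
n, L = 64, 100.0                                  # cells per side, box size
delta = np.random.normal(size=(n, n, n))          # stand-in delta(x) field

delta_k = np.fft.fftn(delta) * (L / n) ** 3       # approximate continuum FT
power = np.abs(delta_k) ** 2 / L ** 3             # per-mode power, cf. eq. (2)

k1d = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # angular wavenumbers
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2).ravel()

edges = np.linspace(2 * np.pi / L, np.pi * n / L, 20)  # shell boundaries
idx = np.digitize(kmag, edges)
Pk = [power.ravel()[idx == i].mean() for i in range(1, len(edges))]
print(Pk[:3])  # flat, ~ (L/n)^3, as expected for white noise
```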
Fig. 2 [9] shows an example of such measurements: the left panel plots four estimates of the power spectrum (more precisely, its monopole, i.e. the average of $P(k,\mu)$ over spherical shells) obtained from the VIPERS survey data of Fig. 1 (see also Sect. 2.2). In the central and right panels, we show the posterior distribution of the mean density of matter and the baryon fraction from a combined likelihood analysis of the four measurements; these are compared to similar estimates from other surveys and from the Planck CMB anisotropy constraints [10]. More precisely, the galaxy power spectrum shape on large scales probes the combination $\Omega_m h$ and the baryon fraction $f_B = \Omega_B/\Omega_m$, where $h$ is the dimensionless Hubble parameter. Such comparisons provide us with important tests of the ΛCDM model, with the estimate from VIPERS straddling Planck and local measurements.
If one goes beyond the simple shape of angle-averaged quantities, two-point statistics of the galaxy distribution contain further powerful information, which is key to understanding the origin of the mysterious acceleration of cosmic expansion discovered less than twenty years ago [11, 12]. First, tiny “baryonic wiggles” in the shape of the power spectrum define a specific, well-known comoving spatial scale, corresponding to the sound horizon at the epoch when baryons were dragged into the pre-existing dark-matter potential wells. In fact, it turns out that there are enough baryons in the cosmic mixture to influence the dominant dark-matter fluctuations [13, 7] and leave in the galaxy distribution a visible signature of the pre-recombination acoustic oscillations in the baryon–radiation plasma. Known as Baryonic Acoustic Oscillations (BAO), these features provide us with a formidable standard ruler to measure the expansion history of the Universe, complementary to what can be done using Type Ia supernovae as standard candles (see e.g. [14] for the latest measurements from the SDSS-BOSS sample).
Secondly, the observed redshift maps are distorted by the contribution of peculiar velocities that cannot be separated from the cosmological redshift. This introduces a measurable anisotropy in our clustering statistics, what we call Redshift Space Distortions (RSD), an effect that provides us with a powerful way to probe the growth rate of structure. This key information can break the degeneracy between an observed expansion history that is due to the extra contribution of a cosmological constant (or dark energy) in Einstein’s equations and one that requires a more radical modification of gravity theory. While RSD were first described in the 1980s [15, 16], their potential in the context of understanding the origin of cosmic acceleration was fully recognized only recently [17]; nowadays they are considered one of the potentially most powerful “dark energy tests” expected from the next generation of cosmological surveys, in particular the ESA mission Euclid [18], of which the Milan group is one of the original founders.
## 2 Measuring the growth rate of structure from RSD
### 2.1 Improved models of redshift-space distortions
Translating galaxy clustering observations into precise and accurate measurements of the key cosmological parameters, however, requires modelling the effects of non-linear evolution, galaxy bias (i.e. how galaxies trace mass) and redshift-space distortions themselves. The interest in RSD precision measurements stimulated work to verify the accuracy of these measurements [19, 20]. Early estimates – focused essentially on measuring $\beta$, given that in the context of General Relativity the growth rate follows directly from the matter density (e.g. [21]) – adopted empirical non-linear corrections to the original linear theory by Kaiser; this is the case of the so-called “dispersion model” [22], which in terms of the power spectrum of density fluctuations is expressed as
$$P_s(k,\mu) = D(k\mu\sigma_{12})\,(1 + \beta\mu^2)^2\, b^2 P_{\delta\delta}(k), \qquad (4)$$
where $P_s(k,\mu)$ is the redshift-space power spectrum, which depends both on the amplitude and on the orientation of the Fourier mode with respect to the line of sight, $P_{\delta\delta}(k)$ is the real-space (isotropic) power spectrum of the matter fluctuation field, and $\beta = f/b$, with $f$ being the growth rate of structure and $b$ the linear bias of the specific population of halos (or galaxies) used. The latter is defined as the ratio of the rms clustering amplitude of galaxies to that of the matter, conventionally measured in spheres of radius $8\,h^{-1}$ Mpc, $\sigma_8$. For what will follow later, it is useful to note that

$$\beta = \frac{f}{b} = \frac{f\sigma_8}{\sigma_8^{\rm gal}}, \qquad (5)$$

can be recast as

$$\beta\, \sigma_8^{\rm gal} = f\sigma_8, \qquad (6)$$

which combines two directly measurable quantities on the left, showing that what we actually measure is the combination of the growth rate and the rms amplitude of matter clustering, $f\sigma_8$. This is what nowadays is customarily plotted when presenting measurements of the growth rate from redshift surveys (e.g. Fig. 8).
Going back to eq. (4), the damping term $D(k\mu\sigma_{12})$ is usually either a Lorentzian or a Gaussian function, empirically introducing a nonlinear damping of the Kaiser linear amplification, with the Lorentzian (corresponding to an exponential in configuration space) normally providing a better fit to the galaxy data [23]. This term is regulated by a second free parameter, $\sigma_{12}$, which corresponds to an effective (scale-independent) line-of-sight pairwise velocity dispersion. Fig. 3 (from [20]) shows how estimates of the growth rate using the dispersion model can be plagued by systematic errors as large as 10%, depending on the kind of galaxies (here dark matter halos) used. With the next generation of surveys aiming at 1% precision by collecting several tens of millions of redshifts, such a level of systematic errors is clearly unacceptable.
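As a concrete rendition of eq. (4), here is a short sketch with a Lorentzian damping term. The exact Lorentzian convention (e.g. the factor of 1/2 in the denominator) varies between papers, and the power-law $P_{\delta\delta}$ is only a placeholder, so treat both as assumptions:

```python
import numpy as np

# Dispersion model, eq. (4): Kaiser boost times an empirical damping.
def P_s(k, mu, beta, b, sigma12, P_dd):
    damping = 1.0 / (1.0 + (k * mu * sigma12) ** 2 / 2.0)  # Lorentzian D
    kaiser = (1.0 + beta * mu ** 2) ** 2                   # linear RSD boost
    return damping * kaiser * b ** 2 * P_dd(k)

# Placeholder power-law spectrum and illustrative parameter values:
P_dd = lambda k: 1e4 * k ** -1.5
print(P_s(k=0.2, mu=0.8, beta=0.5, b=1.4, sigma12=4.0, P_dd=P_dd))
```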
Exploring how to achieve this overall goal by optimising measurements of galaxy clustering and RSD, has been one of the main goals of the Darklight project, supported by an ERC Advanced Grant awarded in 2012. Darklight focused on developing new techniques, testing them on simulated samples, and then applying them to the new data from the VIMOS Public Extragalactic Redshift Survey (VIPERS), which was built in parallel.
After assessing the limitations of existing RSD models [20, 24], the first goal of Darklight has been to develop refined theoretical descriptions. This work followed two branches: one, starting from first principles, was based on revisiting the so-called streaming model approach; the second, more pragmatic, aimed at refining the application to real data of the best models available at the time, in particular the “TNS” model [25]. This more “data-oriented” line of development also included exploring the advantages of specific tracers of large-scale structure in reducing the impact of non-linear effects.
The first approach [26] focused on the so-called streaming model [27], which, in the more general formulation by Scoccimarro [28] (see also [29]), describes the two-point correlation function in redshift space as a function of its real-space counterpart
$$1 + \xi_S(s_\perp, s_\parallel) = \int dr_\parallel\, \left[1 + \xi_R(r)\right]\, \mathcal{P}(r_\parallel - s_\parallel \,|\, \mathbf{r}). \qquad (7)$$
Here quantities denoted with $\perp$ and $\parallel$ correspond to the components of the pair separation – in redshift or real space – respectively perpendicular and parallel to the line of sight, with $\mathbf{s} = (s_\perp, s_\parallel)$ and $\mathbf{r} = (r_\perp, r_\parallel)$. The interest in the streaming model is that this expression is exact: knowing the form of the pairwise velocity distribution function $\mathcal{P}$ at any separation $\mathbf{r}$, a full mapping of real- to redshift-space correlations is provided. The problem is that this is a virtually infinite family of distribution functions.
The essential question addressed in [26] has been whether a sufficiently accurate description of this family (and thus of RSD) is still possible with a reduced number of degrees of freedom. It is found that, at a given galaxy separation $\mathbf{r}$, the distributions can be described as a superposition of virtually infinite Gaussian functions, whose mean and dispersion are in turn distributed according to a bivariate Gaussian, with its own mean and covariance matrix. A recent extension of this work [30] shows that such a “Gaussian–Gaussian” model cannot fully match the level of skewness observed at small separations, in particular when applied to catalogues of dark matter halos. They thus generalize the model by allowing for the presence of a small amount of local skewness, meaning that the velocity distribution is obtained as a superposition of quasi-Gaussian functions. In its simplest formulation, this improved model takes as input the real-space correlation function and the first three velocity moments (plus two well-defined nuisance parameters) and returns an accurate description of the anisotropic redshift-space two-point correlation function down to very small scales for dark matter particles, and virtually zero separation for halos. To be applied to real data to estimate the growth rate of structure, the model still needs a better theoretical and/or numerical understanding of how the velocity moments depend on scale at small separations, as well as tests on mock catalogues including realistic galaxies.
The second, parallel approach followed in Darklight was to work on the “best” models existing in the literature, optimising their application to real data. The natural extensions to the dispersion model (4) start from the Scoccimarro [28] expression
$$P_s(k,\mu) = D(k\mu\sigma_{12}) \left( b^2 P_{\delta\delta}(k) + 2fb\,\mu^2 P_{\delta\theta}(k) + f^2\mu^4 P_{\theta\theta}(k) \right), \qquad (8)$$
where $P_{\delta\theta}(k)$ and $P_{\theta\theta}(k)$ are respectively the so-called density–velocity-divergence cross-spectrum and the velocity-divergence auto-spectrum, while $P_{\delta\delta}(k)$ is the usual matter power spectrum. If one then also accounts for the non-linear mode coupling between the density and velocity-divergence fields, two more terms arise inside the parentheses, named $A(k,\mu)$ and $B(k,\mu)$, leading to the TNS model by Taruya and collaborators [25].
A practical problem in the application of either of these two models is that the values of $P_{\delta\theta}$ and $P_{\theta\theta}$ cannot be measured directly from the data. As such, they require empirical fitting functions calibrated using numerical simulations [31]. As part of the Darklight work, we used the DEMNUni simulations (see Sect. 4) to derive improved fitting functions in different cosmologies [32]:
$$P_{\delta\theta}(k) = \left( P_{\delta\delta}(k)\, P_{\rm lin}(k)\, e^{-k/k_*} \right)^{1/2}, \qquad (9)$$

$$P_{\theta\theta}(k) = P_{\rm lin}(k)\, e^{-k/k_*}, \qquad (10)$$
where $P_{\rm lin}(k)$ is the linear matter power spectrum and $k_*$ is a parameter representing the typical damping scale of the velocity power spectra, which is well described by a simple fitting form with only two parameters that need to be calibrated from the simulations. These forms for $P_{\delta\theta}$ and $P_{\theta\theta}$ have valuable, physically motivated properties: they naturally converge to $P_{\rm lin}(k)$ in the linear regime, and include a dependence on redshift through $k_*$. They represent a significant improvement over previous implementations of the Scoccimarro and TNS models and allowed us to extend their application to smaller scales and to the high redshifts covered by VIPERS.
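Eqs. (9)–(10) translate directly into code. In this sketch $P_{\delta\delta}$ and $P_{\rm lin}$ are placeholders for tabulated spectra, and the value of $k_*$ is invented for illustration (in the actual work it is calibrated on the DEMNUni simulations):

```python
import numpy as np

# Fitting functions for the velocity spectra, eqs. (9) and (10).
def P_dt(k, P_dd, P_lin, k_star):
    """Density-velocity-divergence cross-spectrum."""
    return np.sqrt(P_dd(k) * P_lin(k) * np.exp(-k / k_star))

def P_tt(k, P_lin, k_star):
    """Velocity-divergence auto-spectrum."""
    return P_lin(k) * np.exp(-k / k_star)

# Both converge to P_lin for k << k_star, as noted in the text:
P_lin = lambda k: 1e4 * k ** -1.5   # placeholder linear spectrum
print(P_tt(np.array([0.01, 0.5]), P_lin, k_star=0.3))
```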
### 2.2 Application to real data: optimising the samples
The performance, in terms of systematic error, of any RSD model applied to real data does not depend only on the quality of the model itself. The kinds of tracers of the density and velocity fields that are used can significantly enhance or reduce some of the effects we are trying to model and correct for. This means that, in principle, we may be able to identify specific sub-samples of galaxies for which the needed non-linear corrections to RSD models are intrinsically smaller. This could be an alternative to making our models more and more complex, as happens for the full galaxy population.
Such an approach becomes feasible if the available galaxy survey was constructed with a broad selection function and supplemented by extensive ancillary information (e.g. multi-band photometry, from which spectral energy distributions, colours, stellar masses, etc. can be obtained). This allows a wide space of galaxy physical properties to be explored, experimenting with clustering and RSD measurements using different classes of tracers (and their combinations), e.g. red vs. blue galaxies, groups, clusters. This is the case, for example, of the Sloan Digital Sky Survey main sample [6]. The VIMOS Public Extragalactic Redshift Survey (VIPERS) [3] was designed with the idea of extending this concept to $z \simeq 1$, i.e. when the Universe was around half its current age, providing Darklight with a state-of-the-art playground.
VIPERS is a statistically complete redshift survey, constructed between 2008 and 2016 as one of the “ESO Large Programmes”, exploiting the unique capabilities of the VIMOS multi-object spectrograph at the Very Large Telescope (VLT) [5]. It has secured redshifts for almost 90,000 galaxies with magnitude $i_{AB} \le 22.5$ over a total area of about 24 square degrees, tiled with a mosaic of 288 VIMOS pointings. Target galaxies were selected from the two fields (W1 and W4) of the Canada–France–Hawaii Telescope Legacy Survey Wide catalogue (CFHTLS–Wide), benefiting from its excellent image quality and photometry in five bands. The survey concentrates on the range $0.5 < z < 1.2$, thanks to a robust colour pre-selection that excluded lower-redshift targets, nearly doubling in this way the sampling density achieved by VIMOS within the redshift range of interest [3]. This set-up produces a combination of dense sampling and large volume which is unique for these redshifts and allows studies of large-scale structure and galaxy evolution to be performed on an equal statistical footing with state-of-the-art surveys of the local Universe (see Fig. 1). Sparser samples like the SDSS LRG, BOSS [14] or WiggleZ [33] surveys allow much larger volumes to be probed and are excellent for measuring large-scale features such as Baryonic Acoustic Oscillations. However, they include a very specific, limited sample of the overall galaxy population and (by design) fail to register the details of the underlying nonlinear structure. The rich information content of VIPERS can be further appreciated in Fig. 4, where the connection between galaxy colours and large-scale structure is readily visible by eye.
VIPERS released publicly its final catalogue and a series of new scientific results in November 2016. More details on the survey construction and the properties of the sample can be found in [5, 4, 3].
Fig. 5 shows two measurements of the anisotropic two-point correlation function in redshift space (i.e. the quantity $\xi_S(s_\perp, s_\parallel)$ of eq. (7)), using the VIPERS data. In this case the sample has been split into two classes, blue and red galaxies, defined on the basis of their rest-frame photometric colour (see [34] for details). The signature of the linear streaming motions produced by the growth of structure is evident in the overall flattening of the contours along the line-of-sight direction. These plots also show how blue galaxies (left) are less affected by small-scale nonlinear motions, i.e. those of high-velocity pairs within virialised structures. These produce the small-scale stretching of the contours along $s_\parallel$ (the vertical direction), which is instead evident in the central part of the red-galaxy plot on the right. For this reason, blue galaxies turn out to be better tracers of RSD, for which it is sufficient to use simpler modelling, as shown in Fig. 6. When using the full galaxy population, the best-performing model is the TNS by Taruya et al. [25] (left panel), while when we limit the sample to luminous blue galaxies only, it is sufficient to use the simpler nonlinear corrections by Scoccimarro [28] (filled circles, right panel); open circles correspond to the simplest model, i.e. the standard dispersion model [22], which is not sufficient even in this case. See [34] for details.
### 2.3 RSD from galaxy outflows in cosmic voids
Cosmic voids, i.e. the large under-dense regions visible also in Fig. 1, represent an interesting new way to look at the data from galaxy redshift surveys. As loose as they may appear, over the past few years they have proved to be able to yield quantitative cosmological constraints on the growth of structure. Indeed, growth-induced galaxy peculiar velocities tend to outflow radially from voids, which leaves a specific mark in the observed void-galaxy cross-correlation function (see e.g. [35]). The dense sampling of VIPERS makes it excellent for looking for cosmic voids at high redshift. Fig. 7 shows an example of how a catalogue of voids was constructed from these data [36].
The Darklight contribution to this new research path has been presented recently [37]. By modelling the void–galaxy cross-correlation function of VIPERS, a further complementary measurement of the growth rate of structure has been obtained [37]. This value is plotted in Fig. 8, which provides a summary of all VIPERS estimates, plotted in the customary form $f\sigma_8(z)$ (see Sect. 2.1 for details). The figure also includes one further measurement, based on a joint analysis of RSD and galaxy–galaxy lensing [38], which has not been discussed here. In addition, one more analysis is in progress, based on the linearisation technique called “clipping” [39].
Such a multifaceted approach to estimating the growth rate of structure clearly represents an important cross-check of residual systematic errors in each single technique. We stress again how this has been made possible thanks to the broad “information content” of the VIPERS survey, which provides us with an optimal compromise (for these redshifts) between a large volume, a high sampling rate and extensive information on galaxy physical properties.
## 3 Optimal methods to derive cosmological parameters
The cosmological information we are interested in is encoded in the two-point statistics of the matter density field, i.e. its correlation function $\xi(r)$ or, in Fourier space, its power spectrum $P_{\delta\delta}(k)$. As we have seen in the Introduction, this is connected to the observed galaxy fluctuations through eq. (3), with $\delta_g = b\delta$. The galaxy bias depends in general on the galaxy properties, such as their luminosity and morphology, as well as the environment in which they are found (in groups or in isolation). Thus, in this context the bias terms are nuisance parameters that are marginalized in the analysis. However, the precision with which the measurement can be made depends very much on these parameters, as they set the amplitude of the power spectrum and the effective signal-to-noise ratio.
Going beyond the standard approach to estimate cosmological parameters, as e.g. used in the analysis of Fig. 2, in Darklight we have investigated and applied optimal methods given the observed constraints (luminosity function and bias). We can formulate this as a forward modelling problem through Bayes’ theorem, which tells us how the measurements relate to the model:
$$p(P_{\delta\delta}, \delta, b, \bar{n} \,|\, n_g) \propto p(n_g \,|\, P_{\delta\delta}, \delta, b, \bar{n})\; p(P_{\delta\delta}, \delta, b, \bar{n}). \qquad (11)$$
On the left-hand side, the posterior describes the joint distribution of the model parameters, here explicitly written as the density field $\delta$, its power spectrum $P_{\delta\delta}$, the galaxy bias $b$ and the mean number density $\bar{n}$, but we can generalize to the underlying cosmological parameters. The posterior is factored into the likelihood and prior terms on the right-hand side. To evaluate the posterior we must assume forms for these functions. We begin by assuming multi-variate Gaussian distributions for the likelihood and priors, since these forms fully encode the information contained in the power spectrum or correlation function statistics. In this limit the maximum-likelihood solution is given by the Wiener filter. In [45] we demonstrate that in this limit the solution is optimal in the sense that it minimizes the variance on the density field and power spectrum.
Fig. 9 shows one possible reconstruction of the VIPERS density field. It represents a single step in the Monte Carlo chain used to sample the full posterior distribution as presented in [45]. In this work we characterized the full joint posterior likelihood of the density field, the matter power spectrum, RSD parameters, linear bias and luminosity function. These terms, particularly since they are estimated from a single set of observations, are correlated and the analysis naturally reveals these correlations.
A notable aspect of this analysis is that we optimally use diverse information including the luminosity function, density field and power spectrum to infer cosmological parameters and it becomes even more interesting with additional observables. We can envision simultaneous inference using cluster counts or cosmic shear. Generalizing requires putting a full dynamical model for large-scale structure in the likelihood term effectively moving the likelihood analysis to the initial conditions. Observational systematics may be naturally included as well.
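A one-dimensional toy version of the Wiener filter mentioned above may help fix ideas: under the Gaussian assumptions, the maximum-likelihood field is the observed field with each Fourier mode down-weighted by signal-over-(signal-plus-noise). The spectra below are invented for illustration:

```python
import numpy as np

# Wiener filter sketch: delta_WF(k) = S(k) / (S(k) + N) * delta_obs(k),
# with S the signal (matter) power and N the noise power (for a galaxy
# survey, roughly the shot noise 1/n_bar). All numbers are toy values.
n = 256
k = 2 * np.pi * np.fft.rfftfreq(n, d=1.0)
S = 1.0 / (0.05 + k ** 2)        # toy signal power spectrum
N = 0.5                          # toy white-noise power

delta_obs = np.random.normal(size=n)   # stand-in observed field
weights = S / (S + N)                  # per-mode Wiener weights
delta_wf = np.fft.irfft(np.fft.rfft(delta_obs) * weights, n=n)
```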
## 4 A new kid in town: massive neutrinos
The non-vanishing neutrino mass, implied by the discovery of neutrino flavour oscillations, has important consequences for our analysis of the large-scale structure of the Universe. Even if sub-dominant, the neutrino contribution suppresses to some extent the growth of fluctuations on specific scales, producing a deformation of the shape of the total matter power spectrum. Given current upper limits on the sum of the masses ( eV at % confidence [14]), the expected effect corresponds to a few percent change in the amplitude of total matter clustering. In the era of precision cosmology, neutrinos are an ingredient that cannot be neglected any more. Conversely, future surveys like Euclid may eventually be able to obtain an estimate of the total mass of neutrinos with a precision that surpasses ground-based experiments [46]. To achieve this goal, we need to be able to: (a) describe how these effects are mapped from the matter to the galaxy power spectrum, i.e. what we measure; (b) distinguish these spectral deviations from those due to non-linear clustering, and to the presence of other possible contributions, e.g. forms of dark energy beyond the cosmological constant, like quintessence or in general an evolving equation of state of dark energy $w(z)$.
This has been addressed in Darklight through the “Dark Energy and Massive Neutrino Universe” (DEMNUni) simulations, a suite of fourteen large N-body runs including massive neutrinos (besides cold dark matter), which have recently been completed [47]. They explore the impact on the evolution of structure of a neutrino component with three different total masses ( eV), including scenarios with an evolving dark-energy equation of state, according to the phenomenological form $w(z) = w_0 + w_a\, z/(1+z)$.
Running these simulations required developing new techniques to account for the evolving hot dark matter component represented by neutrinos [48]. Early analyses of the whole suite show that the effects of massive neutrinos and evolving dark energy are highly degenerate (less than % difference) with a pure ΛCDM model, when one considers the clustering of galaxies or weak-lensing observations. Disentangling these different effects will therefore represent a challenge for future galaxy surveys such as Euclid, and needs to be carefully addressed.
Fig. 10 gives an example of the physical effects that can be explored using these numerical experiments, showing weak-lensing maps (in terms of the amplitude of the resulting deflection angle) built via ray-tracing through the matter particle distribution of the simulations, for sources placed at redshift . The middle panel shows the difference between a pure ΛCDM scenario and a model with eV. More quantitatively, in terms of angular power spectra of the deflection field, massive neutrinos produce a scale-dependent suppression with respect to the ΛCDM case, which, on small scales, asymptotically tends towards a constant value of about %, %, % for eV, respectively.
#### Acknowledgments.
Many of the results presented here would have not been possible without the outstanding effort of the VIPERS team to build such a unique galaxy sample. We are particularly grateful to S. de la Torre and J.A. Peacock for their insight and crucial contribution to the cosmological analyses discussed in this paper. Scientific discussions and general support in the development of Darklight by J. Dossett, J. He, and J. Koda are also warmly acknowledged.
### Footnotes
1. http://darklight.fisica.unimi.it
2. Review to appear in Towards a Science Campus in Milan: A snapshot of current research at Physics Department ’Aldo Pontremoli’ (2018, Springer, Berlin, in press)
### References
1. Eisenstein, D. J., et al. AJ 142, 72 (2011).
2. Guzzo, L., Vipers Team. The Messenger 168, 40 (2017).
3. Guzzo, L., et al. AAP 566, A108 (2014).
4. Garilli, B., et al. AAP 562, A23 (2014).
5. Scodeggio, M., et al. AAP, in press, arXiv:1611.07048 (2017).
6. York, D. G., et al. AJ 120, 1579 (2000).
7. Eisenstein, D. J., et al. ApJ 633, 560 (2005).
8. Di Porto, C., et al. AAP 594, A62 (2016).
9. Rota, S., et al. AAP 601, A144 (2017).
10. Planck Collaboration, et al. arXiv:1502.01589 (2015).
11. Riess, A. G., et al. AJ 116, 1009 (1998).
12. Perlmutter, S., et al. ApJ 517, 565 (1999).
13. Cole, S., et al. MNRAS 362, 505 (2005).
14. Alam, S., et al. MNRAS 470, 2617 (2017).
15. Davis, M., Peebles, P. J. E. ApJ 267, 465 (1983).
16. Kaiser, N. MNRAS 227, 1 (1987).
17. Guzzo, L., et al. Nature 451, 541 (2008).
18. Laureijs, R., et al. arXiv:1110.3193 (2011).
19. Okumura, T., Jing, Y. P. ApJ 726, 5 (2011).
20. Bianchi, D., et al. MNRAS 427, 2420 (2012).
21. Peacock, J. A., et al. Nature 410, 169 (2001).
22. Peacock, J. A., Dodds, S. J. MNRAS 267, 1020 (1994).
23. Pezzotta, A., et al. AAP 604, A33 (2017).
24. de la Torre, S., Guzzo, L. MNRAS 427, 327 (2012).
25. Taruya, A., Nishimichi, T., Saito, S. Phys. Rev. D 82, 6, 063522 (2010).
26. Bianchi, D., Chiesa, M., Guzzo, L. MNRAS 446, 75 (2015).
27. Fisher, K. B. ApJ 448, 494 (1995).
28. Scoccimarro, R. Phys. Rev. D 70, 8, 083007 (2004).
29. Reid, B. A., et al. MNRAS 426, 2719 (2012).
30. Bianchi, D., Percival, W. J., Bel, J. MNRAS 463, 3783 (2016).
31. Jennings, E., Baugh, C. M., Pascoli, S. MNRAS 410, 2081 (2011).
32. Bel, J., et al. in preparation (2017).
33. Blake, C., et al. MNRAS 406, 803 (2010).
34. Mohammad, F. G., et al. ArXiv e-prints (2017).
35. Hamaus, N., et al. Physical Review Letters 117, 9, 091302 (2016).
36. Micheletti, D., et al. AAP 570, A106 (2014).
37. Hawken, A. J., et al. AAP, in press, arXiv:1611.07046 (2017).
38. de la Torre, S., et al. submitted to AAP, arXiv:1612.05647 (2017).
39. Wilson, M., et al. in preparation (2017).
40. Blake, C., et al. MNRAS 425, 405 (2012).
41. Beutler, F., et al. MNRAS 466, 2242 (2017).
42. Beutler, F., et al. MNRAS 423, 3430 (2012).
43. Howlett, C., et al. MNRAS 449, 848 (2015).
44. Okumura, T., et al. Pub Astr Soc Japan 68, 38 (2016).
45. Granett, B. R., et al. AAP 583, A61 (2015).
46. Carbone, C., et al. JCAP 3, 030 (2011).
47. Carbone, C., Petkova, M., Dolag, K. JCAP 7, 034 (2016).
48. Zennaro, M., et al. MNRAS 466, 3244 (2017).
https://jcom.sissa.it/archive/16/05/JCOM_1605_2017_A01
# The influence of temperature on #ClimateChange and #GlobalWarming discourses on Twitter
### Abstract:
Research suggests non-experts associate different content with the terms “global warming” and “climate change.” We test this claim with Twitter content using supervised learning software to categorize tweets by topic and explore differences between content using “global warming” and “climate change” between 1 January 2012 and 31 March 2014. Twitter data were combined with temperature records to observe the extent to which temperature was associated with Twitter discussions. We then used two case studies to examine the relationship between extreme temperature events and Twitter content. Our findings underscore the importance of considering climate change communication on social media.
### 1 Introduction
Global concerns about climate change vary. Generally, citizens of European nations are more worried about its immediacy than Americans, and countries that are high emitters of carbon dioxide tend to exhibit less concern about its impacts [Wike, 2016]. Climate change refers to statistical changes in the Earth’s climatic system and associated events over long timescales [American Meteorological Society, 2012]. Global warming, a byproduct of climate change, refers to the increase in average global temperature due to anthropogenic emissions, primarily of carbon dioxide. While the terms “global warming” and “climate change” are often used interchangeably by the media to refer to the same phenomenon [IPCC, 2013], they evoke different associations among lay audiences [Leiserowitz et al., 2014; Schuldt, Konrath and Schwarz, 2011; Schuldt and Roh, 2014; Whitmarsh, 2009]. For example, quantitative and qualitative surveys show that the term “global warming”, relative to “climate change”, evokes more concern among residents in the south of England [Whitmarsh, 2009]. Further, the former elicits more associations with temperature and human causality. In the present study, we further scholarship on people’s associations with these terms in the context of social media.
Online media are becoming one of the prime means through which people encounter scientific information. Although Americans, relative to British adults, tend to look to the Internet more for scientific information, the use of social media has increased worldwide. The abundance of interactive, Web-2.0 media has expanded our ability to engage in discussions with each other about a variety of scientific issues [Brossard, 2013; Scheufele, 2013]. These technologies also offer rapid and widespread information sharing. Twitter, a social microblogging platform, has become a significant environment for real-time opinion sharing, interaction with experts and non-experts alike, and information dissemination on issues ranging from politics [Papacharissi and Fatima Oliveira, 2012] to nanotechnology [Runge et al., 2013]. Understanding and mapping discourses surrounding scientific issues on social media are valuable to the scholarship and practice of science communication. While online opinions are not always representative of public opinion, the sentiments and discussions expressed online represent untapped sources of data that can be leveraged to inform science communication scholars and practitioners [Yeo and Brossard, 2017].
While scholars have linked Twitter discourse to temperature changes and climate change [Kirilenko, Molodtsova and Stepchenkova, 2015], there has been no investigation of the topics of discussion associated with the terms “global warming” and “climate change.” This motivates us to explore the discursive contexts in which audiences use these terms. Further, while previous studies have examined the relationship between Twitter activity, local changes in temperature, and mass media, in the present work we explore how regional temperature changes relate to the topics discussed on Twitter using the two terms. In doing so, we obtain insight into people’s perceptions of and associations with these terms through spontaneous expressions of opinion.
Thus, the goals of this study are two-fold: (i) to determine whether differences exist in topics of Twitter conversation using the terms “climate change” and “global warming” within the context of six topics of discussion in which these terms are often used (energy, weather, policy, environment, political theater, and factual statements; see Methods for further explanation); and (ii) to explore whether temperature variations across geographic regions in the United States and in response to extreme temperature events are related to Twitter reactions using the terms “climate change” or “global warming.” Given the context of our study, we focus our review of the literature on scholarship primarily conducted in the United States.
### 2 Literature review
#### 2.1 Differences in public opinion regarding global warming and climate change
Among Americans, a stark partisan divide in climate change opinions persists. This divide began to widen in the early 1990s, when discussions among non-experts became more politicized [Boykoff and Boykoff, 2004; Boykoff and Boykoff, 2007; Dunlap and McCright, 2008; Leggett, 2001; Trumbo, 1996], and is apparent in how people associate weather events with the two terms. While there is no difference among Democrats, many Republicans and Independents believe global warming, compared to climate change, is more likely to impact weather in the United States “a lot” [Leiserowitz et al., 2014]. Further, Republicans are more likely to support a large-scale effort to reduce climate change than to reduce global warming [Leiserowitz et al., 2014]. Other research has shown that the terms carry different implications of seriousness across party lines: Republicans rate “climate change” as more serious, while Democrats rank “global warming” as more serious [Villar and Krosnick, 2011].
Predilections for climate change-related terms exist across different segments of the public, despite a large portion of people having no preference [Akerlof and Maibach, 2011]. “Global warming” was found to be more polarizing and was preferred by those who believe climate change is occurring, while those who believe it is not occurring opted for “climate change.” Similarly, polarization has been observed in coverage of the issue in mass and social media such as Twitter, with differences in the frames and partisanship associated with the two terms [O’Neill et al., 2015; Pearce et al., 2014; Williams et al., 2015]. “Global warming” was more commonly associated with tweets using a hoax frame (“global warming is a hoax/fraud”) and was more often used in Republican than Democratic states [Jang and Hart, 2015].
#### 2.2 Discourses about science on Twitter
Until recently, most studies of non-expert discourses surrounding global warming and climate change did not focus specifically on social media communications [Nielsen and Kjærgaard, 2011]. Yet, Twitter has risen in popularity over the last several years. In 2014, 23 percent of online American adults used Twitter [Duggan et al., 2015]. Among Twitter users, 59 percent use the platform to attend to news [Gottfried and Shearer, 2016]. Importantly, Twitter is used worldwide and has four times as many international users as users in the United States [DeSilver, 2016].
While the opinions on Twitter do not necessarily reflect public opinion [Mitchell and Hitlin, 2013], it remains valuable to examine discourses on this platform. Twitter content is posted in real-time, and represents unsolicited, instantaneous responses to current issues in broader society. Studies employing such reactive opinions are not well represented in the literature on lay discourse about global warming and climate change, as earlier studies primarily employ survey methodologies that allow participants to reflect more deeply on the issue.
Recent studies have begun to analyze the nature of a broad range of scientific discourses on Twitter, including the Higgs boson [Boyle, 2012], nuclear energy [Kim et al., 2016], nanotechnology [Runge et al., 2013; Yeo et al., 2014a], and the arsenic bacteria controversy [Yeo et al., 2016]. Researchers have also used Twitter content to analyze political discourse [Beauchamp, 2016; Small, 2011] and, in concert with users’ geographic locations, to map real-time earthquake events in Japan [Sakaki, Okazaki and Matsuo, 2013]. Many of these scientific issues have been addressed in detail in online news media. Given that scientific issues covered by mainstream media have previously trended on Twitter, that the issue of climate change receives extensive media coverage, and that climate adaptation and mitigation are significant societal issues with ethical and legal implications, examining opinions expressed on Twitter will improve our understanding of how people spontaneously react to global warming and inform communication efforts around this issue.
Recent studies have begun to use Twitter data to study specific conversations related to climate change [Su, Akin and Brossard, 2017]. For example, Pearce et al. [2014] investigated conversations surrounding the release of the International Panel on Climate Change (IPCC) Working Group I report to examine how Twitter users formed communities around this issue. Using network analysis, they showed that content focused on both the science and politics surrounding climate change and users were more likely to share information with like-minded others, further underscoring the polarized nature of discourse on this issue. Another study tracked changes in climate change sentiment on Twitter using happiness scores to determine how sentiment varied in response to news and events about climate change [Cody et al., 2015]. On average, “global warming” tweets were more negative and profane, contained more climate denier information, and had fewer mentions of science. Over the study period, decreases in happiness were observed to coincide with the occurrence of several natural disasters (e.g., Hurricane Sandy in 2012).
Another recent study investigated changes in the volume of online searches about global warming and climate change in concert with the emotional response to these topics, using Google and Twitter, respectively [Lineman et al., 2015]. It showed that Twitter posts between 12 October and 12 December 2013 were more negative about global warming. While this study provides a foundation for understanding temporal changes in search interest and related emotional response to these two terms, the specific contexts and topics in which the terms are used have not been investigated. Therefore, one goal of our study is to investigate differences between global warming and climate change tweets in the context of the topics in which these two terms are commonly used. By categorizing daily Twitter discourse into various topics of discussion, we can improve our understanding of how often these terms are used, including whether one term is “preferred” over the other within various topics of discussion.
Given the evident differences in social media conversations about climate change using these terms, we set out to determine whether differences exist in the average daily number of Twitter posts using the terms “climate change” and “global warming” within the context of six topics of discussion. For each topic, we test the following hypothesis:
H1: The average daily number of Twitter posts about “global warming” will differ significantly from that of “climate change” over the period studied (1 January 2012 to 31 March 2014).
#### 2.3 Global warming, climate change, and extreme weather
People tend to rely on cognitive shortcuts when forming attitudes toward scientific issues [Brossard and Nisbet, 2007; Brossard et al., 2009; Finucane et al., 2000; Su et al., 2016; Yeo et al., 2014b], including climate change. For example, the likability of weather forecasters has been linked to greater perceptions of harm caused by the phenomenon [Anderson et al., 2013]. Climate change opinions can also be predicted by geographic variability; patterns of climate opinion among Americans vary with expected political patterns as more politically liberal states exhibit greater levels of concern relative to conservative ones [Howe et al., 2015]. Other scholarship has also shown that global warming opinions are tied to outdoor temperature [Joireman, Truelove and Duell, 2010] as well as perceptions of temperature [Li, Johnson and Zaval, 2011]. Higher actual and perceived temperatures are associated with greater belief in the occurrence of global warming. Moreover, abnormal temperature events have greater influence on people’s belief in, and concern about, climate change [Zaval et al., 2014]. Such examples underscore a demonstrated link between macro-level phenomena and individual behaviors [Schwarz and Clore, 1983]. Thus, occurrences such as weather events can influence people’s perceptions of, and sentiment toward, global warming.
Few studies have examined Twitter discourse related specifically to weather. One study found tweet volume to be highly correlated with the number of people affected by tornado watches and warnings, suggesting that Twitter may be a useful platform for disseminating information and understanding audience reactions to severe weather [Ripberger et al., 2014]. Kirilenko, Molodtsova and Stepchenkova [2015] found that during extreme weather events (quantified using anomalous temperature data), there was an increase in the number of tweets about climate change, especially for colder and wetter regions of the United States and during the months of December to February and June to August.
While these recent studies consider the volume of tweets, they do not specifically categorize tweet content. Understanding differences in content would further develop our understanding of the emotional response Twitter users have when discussing these terms. Furthermore, while Kirilenko, Molodtsova and Stepchenkova [2015] provide a foundation for understanding the relationship between extreme weather and global warming/climate change tweet volume, a more in-depth investigation of this relationship in the context of notable events would shed light on why we observe changes in opinions during such extreme events.
This motivates us to explore the relationship between global warming and climate change tweets and temperature across regions in the United States in addition to during extreme temperature events. We explore these relationships in the context of the research questions below:
RQ1: Is regional temperature in the United States associated with Twitter posts using the term global warming and/or climate change?
RQ2: Are tweets about climate change or global warming related to temperature during the month of an extreme temperature event?
We investigate RQ1 by examining correlations between regional temperature in the United States over the study period and tweets about global warming and climate change. To address RQ2, we use case studies focusing on two events, a heat wave (March 2012) and a cold surge (January 2014). Case studies have been used by atmospheric scientists who aim to investigate relationships between a weather event and its associated atmospheric and/or societal response [e.g., Mohri, 1953; Hakim, Keyser and Bosart, 1996; Winters and Martin, 2016; Bosart et al., 1996]. While the results of case studies are not generalizable, such analyses allow us to observe interesting trends in tweets and extreme temperatures, which can be combined with statistical analyses to further our understanding of relationships of interest.
We build on previous work in the following ways: (i) we investigate how these terms are used in different topics of conversation on social media with a multi-year census of tweets; and (ii) we examine the influence of regional temperature on unsolicited expressions and in reaction to a significant heat wave and cold surge event.
### 3 Methods
We used ForSight, a software package from the social media monitoring company Crimson Hexagon, to classify tweets into topic categories. ForSight is a supervised learning program that detects and tracks underlying linguistic patterns, based on concepts identified by human coders in an initial training set, and applies the learned algorithm to the remaining, typically large, amounts of social media text [Hopkins and King, 2010]. Scholars have argued for applying this hybrid content analysis method to social media discourses as it possesses the reliability and efficiency of computer-based coding while preserving the latent validity of human coding [Su et al., 2017; Su, Akin and Brossard, 2017]. Others have examined and verified such supervised learning programs [Collingwood and Wilkerson, 2012]. Specifically, ForSight has been validated through comparisons with survey data and election results [Ceron et al., 2014; Hitlin, 2015]. These scholars, among others, have also verified the resilience of supervised learning programs with respect to the training set used [Collingwood and Wilkerson, 2012; Hopkins and King, 2010]. Using a large and randomly distributed subset of the sample posts, together with extensive human coding, improves the accuracy of the program [Collingwood and Wilkerson, 2012; Neuendorf, 2017].
We collected and analyzed a census of publicly available tweets posted between 1 January 2012 and 31 March 2014 using ForSight. A total of 3,732,058 English-language tweets from the United States were collected and analyzed.1 ForSight uses monitors with intelligent algorithms and a Boolean logic-based keyword search to track linguistic patterns based on training by human coders. To train the algorithm, the program randomly samples from the census of publicly available tweets matching the given keywords. To ensure that a representative and high-quality subset of tweets is used to train the algorithm, the sampled posts are categorized by human coders according to a codebook. During the manual coding of the random sample, only mutually exclusive and unambiguous examples were used to train the monitors; non-exclusive tweets (i.e., those that could fit into multiple categories) were not included in the training subset. Human coders thus analyzed more posts than were ultimately included in the training subset. Once consensus between coders is reached, the trained categories are used by the software to analyze the remaining posts. Training the algorithm requires a minimum of 20 human-coded posts in each defined category, as recommended by Crimson Hexagon. Additional research by Hopkins and King [2010] suggests a total of 100 hand-coded items is sufficient for reliable results (in their analysis, 100 congressional documents were distributed across seven categories).
We used two separate monitors for this study, each with individual keywords.2 Tweets were coded into one of the six categories based on the topic: (i) energy; (ii) weather; (iii) policy implications; (iv) environment; (v) political theater; and (vi) statements about climate change or global warming. Categories were chosen based on an initial inductive examination of a randomly selected sample of tweets as they reflect common themes associated with discussions of the issue. Other categories, such as human health, were not commonly included in Twitter discourse relative to the categories selected. This is relatively unsurprising as climate change is not widely recognized as a health issue among American publics [Akerlof et al., 2010]. This is similarly the case in Canada [Cardwell and Elliott, 2013]. We combined this inductive process with our collective experience with climate science education and research. Examples of each category are shown in Table 1. Tweets that expressed opinions about fracking, fossil fuels, and nuclear or renewable energy were coded in the energy category. Those related to temperature, precipitation, seasons, or extreme weather events were classified as weather. Policy implications included mentions of cap and trade, carbon limits or tax, and public projects. Tweets in the environment category included mentions of agriculture, habitat loss, and extinction. Political theater tweets had to be actor-focused, containing specific mentions of public figures. Lastly, tweets that were declarations such as “Climate change is a fact” were categorized as statements about “climate change” or “global warming.”
Table 1: Examples of categorized tweets containing the keywords climate change and global warming.
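Although ForSight itself is proprietary, the hybrid workflow described above (a small, unambiguous, hand-coded training subset followed by algorithmic classification of the remaining posts) can be illustrated with a minimal open-source sketch. The scikit-learn pipeline, the toy training tweets, and the shortened category labels below are our own illustrative assumptions, not the study's actual materials or algorithm.

```python
# Minimal sketch of a supervised (hybrid) content-analysis step,
# assuming scikit-learn; ForSight's actual algorithm is proprietary
# and differs from this simple baseline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hand-coded training subset: (tweet, topic category); in the study,
# each category required at least 20 unambiguous examples.
training = [
    ("Fracking and fossil fuels make global warming worse", "energy"),
    ("This heat wave is brutal, thanks climate change", "weather"),
    ("Congress should pass a carbon tax now", "policy"),
    ("Climate change is driving habitat loss and extinction", "environment"),
    ("The senator mocked global warming on the floor again", "political theater"),
    ("Climate change is a fact", "statements"),
]
texts, labels = zip(*training)

# Bag-of-words features with a Naive Bayes classifier, a common
# baseline for supervised text categorization
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# Classify the remaining (uncoded) posts with the trained model
print(model.predict(["New carbon limits announced for power plants"]))
```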
#### 3.2 Temperature data and calculations
To identify events of interest, we used surface temperature data from the Climate Forecast System Reanalysis (CFSR) dataset [Saha et al., 2010], which has a horizontal resolution of 0.5° and a temporal resolution of 6 hours. Temperature anomalies were computed by subtracting the climatological temperature for a given day from the daily average temperature; positive (negative) temperature anomalies indicate the observed temperature was warmer (colder) than average. For each spatial point, the climatological mean temperature was determined by first applying a 21-day running mean centered on the day of interest, and then averaging, at the point of interest, over the 30 years 1980–2009. Finally, we computed the square of each daily temperature anomaly, which serves as a first-order measure of the variability of temperature at each spatial point:
${T}_{sq.anom.} = {\left(T - {T}_{climo}\right)}^{2},$
where $T$ is the daily average surface temperature and ${T}_{climo}$ is the climatological mean at that point.
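A minimal sketch of this calculation for a single grid point is given below, assuming pandas/NumPy; the function name and synthetic data are ours, and the handling of the full 0.5°, 6-hourly CFSR grid (and its reduction to daily averages) is omitted for brevity.

```python
# Sketch of the T_sq.anom. calculation at one grid point, assuming a
# pandas Series of daily average temperatures indexed by date.
import numpy as np
import pandas as pd

def squared_anomaly(temp: pd.Series) -> pd.Series:
    """Return (T - T_climo)^2 for each day of the input series."""
    # 21-day running mean centered on the day of interest
    smoothed = temp.rolling(window=21, center=True, min_periods=1).mean()
    # Climatology: average the smoothed values over 1980-2009,
    # separately for each calendar day (day of year)
    base = smoothed.loc["1980":"2009"]
    climo = base.groupby(base.index.dayofyear).mean()
    t_climo = pd.Series(temp.index.dayofyear, index=temp.index).map(climo)
    anomaly = temp - t_climo  # positive = warmer than average
    return anomaly ** 2

# Synthetic example covering the climatology and study periods
dates = pd.date_range("1980-01-01", "2014-03-31", freq="D")
doy = np.asarray(dates.dayofyear)
rng = np.random.default_rng(0)
temp = pd.Series(10 * np.sin(2 * np.pi * doy / 365.25)
                 + rng.normal(0, 3, len(dates)), index=dates)
t_sq_anom = squared_anomaly(temp)
```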
#### 3.3 Data analysis
To address H1, we used independent samples $t$-tests to assess whether the average daily posts in each topic of conversation on Twitter containing the keywords “global warming” differed significantly from those containing the keywords “climate change” over the study period (Table 2). To account for multiple comparisons and reduce the risk of Type I error, we adjusted our level of significance ($\alpha$) using the Bonferroni procedure [Rosenthal and Rubin, 1983; Wright, 1992]; we set $\alpha = 0.05/6 \approx 0.008$.
Table 2: Descriptive statistics and results of independent samples $t$-tests comparing means of daily global warming and climate change tweets in topic categories over the study period (1 January 2012 – 31 March 2014). Positive values of Cohen’s $d$ indicate that discussions using global warming have higher average daily posts.
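The per-topic comparison can be sketched as follows, assuming SciPy/NumPy; the arrays of daily counts are synthetic stand-ins for the monitor output, and Cohen's $d$ is computed with a pooled standard deviation (our assumption about the exact formula used).

```python
# Sketch of the H1 test for one topic: independent samples t-test with
# a Bonferroni-adjusted alpha; daily counts below are synthetic.
import numpy as np
from scipy import stats

ALPHA = 0.05 / 6  # Bonferroni correction across six topic comparisons

def compare_topic(gw_daily: np.ndarray, cc_daily: np.ndarray) -> None:
    t, p = stats.ttest_ind(gw_daily, cc_daily)
    # Cohen's d with pooled SD; positive d means "global warming" had the
    # higher daily average, matching the sign convention of Table 2
    n1, n2 = len(gw_daily), len(cc_daily)
    pooled = np.sqrt(((n1 - 1) * gw_daily.var(ddof=1)
                      + (n2 - 1) * cc_daily.var(ddof=1)) / (n1 + n2 - 2))
    d = (gw_daily.mean() - cc_daily.mean()) / pooled
    print(f"t={t:.2f}, p={p:.4f}, d={d:.3f}, "
          f"significant at alpha={ALPHA:.4f}: {p < ALPHA}")

rng = np.random.default_rng(1)
compare_topic(rng.poisson(550, 820), rng.poisson(430, 820))  # ~820 study days
```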
To answer RQ1, we examined correlations between monthly average anomalous temperature and ${T}_{sq.anom.}$ on the one hand, and the number of tweets per capita about global warming and climate change on the other. In this analysis we did not differentiate between topics of conversation. Instead, we compared the monthly averages of climate change and global warming tweets with temperature data from six regions of the United States over the study period (Supplemental Table 4 and Supplemental Figure 4). Regions (modified from those delineated by the National Weather Service’s Regional Climate Centers; see Supplemental Table 4) were matched to tweets using the tagged geographic location in the Twitter data. ForSight uses two different methods to assign location data to tweets: approximately 1 percent of the tweets are geo-tagged by the user, and the locations of the remaining tweets are estimated from contextual clues, including users’ profile information, time zones, and language. The location estimation methodology is similar to that described by Beauchamp [2016]. Of the 3,732,058 total tweets, approximately 15 percent were excluded from analysis because they were neither geo-tagged nor estimable, leaving 3,181,229 posts.
The presence of seasonality in our temperature data has the potential to confound this analysis: if the number of tweets per capita simply tracked raw temperature, seasonal temperature peaks could produce spurious correlations. To alleviate this, we examined correlations between Twitter posts per capita and anomalous temperature, and between posts per capita and ${T}_{sq.anom.}$, both of which remove the climatological seasonal cycle by construction.
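The correlation analysis itself reduces to Pearson's $r$ between aligned monthly series; a sketch with synthetic data for one region and one term follows (SciPy assumed).

```python
# Sketch of the RQ1 correlations for one region: monthly tweets per
# capita vs. temperature measures over the 27-month study period.
# All series below are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
t_anom = rng.normal(0, 2, 27)                    # monthly mean anomaly (deg C)
t_sq_anom = t_anom**2 + rng.normal(0, 0.5, 27)   # monthly mean squared anomaly
gw_per_capita = 0.4 * t_sq_anom + rng.normal(0, 1, 27)  # tweets per capita

for name, x in [("temperature anomaly", t_anom),
                ("T_sq.anom.", t_sq_anom)]:
    r, p = pearsonr(x, gw_per_capita)
    print(f"global warming tweets vs. {name}: r={r:.3f}, p={p:.3f}")
```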
To address RQ2, we conducted case studies focused on two separate extreme temperature events occurring in March 2012 and January 2014. As weather events are not consistent across the United States, not all delineated regions were affected by these extreme weather events. We identified the specific regions affected and used these as case studies. We did not differentiate between discursive topics in this analysis. During the March 2012 “heat wave”, temperatures were warmer than normal, particularly in the Northeast, Southeast, and Midwest, with average monthly anomalies of $+5.7$°C, $+5.0$°C, and $+8.0$°C, respectively. During the January 2014 “cold surge”, all except for the Western and High Plains regions experienced below average temperatures. Temperature anomalies ranged from $-0.05$°C in the Southeast to $-3.1$°C in the Midwest. This “cold surge” event coincided with President Obama’s 2014 State of the Union address in which he stated “Climate change is a fact” [Obama, 2014]. In both case studies, daily anomalous temperature was compared with daily number of global warming and climate change messages per capita on Twitter. We then determined whether anomalous temperature was significantly correlated with posts for each term.
### 4 Results and discussion
A total of 3,732,058 posts were collected over the study period (Figures 1 and 2). To address our hypothesis, we compared the average daily tweets about climate change and global warming in each of the six topics of discourse. We find partial support for H1. Mean differences were significant for five of the six topics, with medium to large effect sizes (Table 2); only statements made using the terms “global warming” and “climate change” did not differ significantly. A possible explanation is that Twitter audiences do not hold different associations with these terms when using them in posts unrelated to the other five categories. This emphasizes that the context of discussion matters: when the discursive context was not clearly defined, Twitter users did not appear to hold different associations with these terms.
In discussions of energy and weather, the daily average tweets about global warming were significantly greater than those about climate change. In discussions of the environment and those related to policy or politics, daily mean posts about climate change were significantly greater. The differences were smallest for the weather (Cohen’s $d=.488$) and political theater (Cohen’s $d=-.387$) categories, and highest in the environment category (Cohen’s $d=-1.516$). The significant differences in mean daily posts are consistent with previous studies that suggest these terms are not synonymous for online audiences. In addition to attaching different attitudes to these terms [Cody et al., 2015; Jang and Hart, 2015; Leiserowitz et al., 2014], our results show that Twitter audiences use global warming and climate change in different contexts.
Climate change was used more frequently when discussions were related to political issues. This may reflect the evolution in climate rhetoric [for details, see Besel, 2007] during the Bush administration. Frank Luntz, a Republican strategist, recommended that conservative-leaning politicians use “climate change” instead of “global warming”, as the former was found to induce less dread and fear among public audiences [Luntz, 2005]. With respect to the phrase global warming, our results suggest users associate temperature with this phenomenon. While this finding supports prior work linking climate perceptions and beliefs to temperature [Joireman, Truelove and Duell, 2010; Li, Johnson and Zaval, 2011], future research is required to confirm this hypothesis.
To address RQ1, we set our significance level at 0.05 and used bivariate analysis to examine the relationships between the average monthly geo-tagged tweets per capita using both terms with anomalous temperature and ${T}_{sq.anom.}$ over the six regions of the continental United States (Table 3). Climate change posts were not significantly correlated with either anomalous temperature or ${T}_{sq.anom.}$ in any geographic region. However, we found a significant positive correlation between global warming posts per capita and anomalous temperature in the Midwest where warmer temperatures were associated with more tweets about global warming ($r=.417$, $p=.030$). With regards to ${T}_{sq.anom.}$, global warming tweets were correlated with this measure in the High Plains ($r=.522$, $p=.005$), Midwest ($r=.475$, $p=.012$), Southern ($r=.405$, $p=.036$), Southeast ($r=.467$, $p=.014$), and Northeast ($r=.549$, $p=.003$) regions. In all cases, greater deviations of temperature from the mean were associated with more Twitter messages per capita about global warming.
Table 3: Pearson’s correlations and $p$-values (in parentheses) between monthly average ${T}_{sq.anom.}$, temperature anomaly, and total daily Twitter posts per capita between January 2012 and March 2014 in the US.
In the Western United States, neither climate change nor global warming tweets were correlated with anomalous temperature or ${T}_{sq.anom.}$. The Western region spans the largest latitude range, as well as significant topographic differences relative to the other regions; thus, the lack of correlation with posts about either term may be a product of combining states with highly variable temperatures. Taken together with our finding that global warming, relative to climate change, is used more frequently when the topic of conversation is weather, these results may indicate that Twitter users comment on the juxtaposition of the phrase global warming and low temperatures. For example, anomalously warm (cool) days in regions other than the West may be perceived as events that support (refute) the phenomenon, leading users to turn to Twitter to express their views. These results could imply a deeper issue of climate literacy — global warming and climate change are used by experts to describe the same phenomenon, but Twitter audiences understand and use the terms differently. Moreover, it underscores how concern and belief in global climate change are, to some extent, driven by physical experiences with temperature [Zaval et al., 2014].
Thus far, we have referred to audiences on Twitter generally as non-experts. It is worth noting that numerous sources have tracked the demographics of users across the years. In surveys conducted by the Pew Research Center [Greenwood, Perrin and Duggan, 2016], at the beginning of this data collection period in 2012, 16 percent of online adults used Twitter. By the end in 2014, this number had increased to 23 percent. Compared to other social media, Twitter performs well with younger and more educated users, and has seen increases in users across a diversity of demographic groupings [Duggan et al., 2015; Greenwood, Perrin and Duggan, 2016]. Few studies have actively examined the breakdown of Twitter users across the roles they may play for specific issues (e.g., stakeholders, journalists, and politicians). While studies in political communication have found that elite actors, such as political leaders and traditional journalists, are prevalent on Twitter and can dominate online discussions [e.g. Conway, Kenski and Wang, 2015; Wells et al., 2016], there is evidence that “ordinary” users can disrupt traditional power systems via social media [Meraz, 2009].
Analyses specific to those tweeting about climate change are even more limited. Newman [2016] tracked those who tweeted about climate change or the fifth IPCC report a few days before and after the release date. Using a sample of “high attention” tweets, Newman examined these users to determine who had a large impact on the conversation, separating them into six categories. He found that non-elite (i.e., lay audience) accounts were the largest group with 35 percent of the 100 top retweeted posts. The remaining five groupings were more evenly split: media organizations (17 percent), political/advocacy organizations (16 percent), governmental/NGO (12 percent), journalists (9 percent), and finally, scientists (7 percent). While there is a diversity of actors represented within the Twitter conversation, it is important to note that not only do non-elite users contribute to the climate change conversation on Twitter, they are able to attract high levels of attention. While Newman [2016] focused only on the top-100 most attention-garnering accounts, the proportion of non-elite users will likely increase when all tweets are considered.
#### 4.1 Case studies
In March 2012, the continental United States experienced temperatures significantly above normal, especially in the Northeast, Southeast, and Midwest regions [Borth, Castro and Birk, 2012]. We used this month as a case study to explore whether anomalous temperatures in these regions were related to the volume of tweets. Temperatures were slightly above average during the first week of March 2012 (Figure 3a). During this week, the volume of posts about climate change and global warming were relatively constant. However, after 11 March, temperatures were consistently about 8°C warmer than average until 24 March. With the onset of higher temperatures, trends in global warming tweets increased relative to those of climate change. The greatest daily volume of global warming posts (~570) coincided with the greatest temperature anomaly (21 March). The highest daily posts about climate change on Twitter (~432) occurred on 29 March after the warmest period of the month.
Figure 3: Trends in temperature and tweets per capita about climate change and global warming with respect to the March 2012 “heat wave” in the Midwest, Northeast, and Southeast regions, and January 2014 “cold surge” across all geographic regions.
Anomalous temperatures were significantly correlated with the daily average volume of global warming messages ($r=0.466$, $p=.008$) but not with that of climate change ($r=0.191$, $p=.304$). Regional differences in the United States are highlighted when we examine the relationships between temperature deviations and Twitter posts (Table 3). In particular, the Midwest region was most drastically affected by the “heat wave” [Borth, Castro and Birk, 2012]; this is reflected in Twitter discourse on global warming. In March 2012, users in the Midwest tweeted more about global warming when temperatures were above average. These results do not demonstrate causation, although the plausible direction of any causal link is clear, since it would be difficult to argue that tweets about global warming drive temperature deviations in the United States. It is worth noting the potential for regional politics to affect the volume of Twitter posts in the regions examined. While this is beyond the scope of this study, we remain confident in our results as the rural-urban divide in the United States, compared to regional politics, is more likely to influence the political choices and related opinions of American voters [McKee, 2008; Scala and Johnson, 2017].
The second case study evaluated relationships between temperature and Twitter messages in January 2014. During this period, the continental United States experienced an abnormally cold month [Lindsey, 2014] with three dramatic decreases: January 6–8, 21–25, and 27–29. All three periods were associated with significant cold air outbreaks over the eastern part of the country. Moreover, on 28 January, President Obama overtly mentioned climate change in the State of the Union address [Obama, 2014].
Peaks in tweets occurred within one day of the temperature deviation minima associated with the dates listed (Figure 3b). This may be a result of users commenting on forecasts of the events as well as the events themselves. These results suggest forecasted cold surge events may be tied to significant increases in tweets. The volume of global warming tweets between 6–8 January and 21–25 January were higher than that of climate change. The converse was observed during and immediately following the State of the Union address (January 27–29). In this case, Twitter messages about climate change outnumbered those related to global warming. The greatest number of global warming posts occurred on 7 January (~3,650), while the maximum volume for climate change occurred on 29 January (~4,600). We found significant negative correlations between anomalous temperature and both global warming ($r=-.666$, $p\le .001$) and climate change tweets ($r=-.385$, $p=.032$) for the entire month of January. The correlation between anomalous temperature and global warming reactions supports our finding that the volume of global warming messages on Twitter is associated with changes in temperature. This also supports our finding that the volume of climate change reactions on this social platform is strongly associated with political commentary.
#### 4.2 Limitations
While this study is one of the few to investigate Twitter discourses surrounding global warming and climate change topics, some limitations exist. First, we underscore that opinions expressed on Twitter do not necessarily reflect those of broader publics [Mitchell and Hitlin, 2013]. However, it remains valuable to examine these discourses because they capture real-time sharing of opinions. Such reactive and unsolicited expressions provide insight into how global warming and climate change are associated with temperature and extreme events when these issues arise in conversation.
A second limitation is that not all users report the location from which they are tweeting. Since our sample of geo-tagged Twitter posts is a subset (85 percent) of that used to analyze the topics of conversation, it is only able to provide a proxy for climate change and global warming discourses, and their relationships with temperature. Despite this limitation, our findings support previous research on the relationship between Twitter discourses and variability in temperature [Joireman, Truelove and Duell, 2010; Li, Johnson and Zaval, 2011], which gives us confidence that our results provide valid insight.
Lastly, the geographic regions defined are large enough in some cases that we may be averaging out some important temperature information. For example, the Western region includes a large area that spans various climates that can differ in temperature significantly during each season. Therefore, while no significant correlations arise in our study in the Western United States, the inclusion of so many climates within a region may play a role in this. Future studies might find it fruitful to consider correlations between temperature and tweets within smaller geographic regions.
### 5 Conclusions
Our goal was to investigate differences in the topics of Twitter discourses using the terms global warming and climate change. Using automated content analysis with a supervised learning technique, we categorized discursive topics over a period of 27 months. Additionally, we examined the link between temperature and those discourses. The present work builds on scholarship examining perceptions [Joireman, Truelove and Duell, 2010; Li, Johnson and Zaval, 2011] and tweets about global warming and climate change [Kirilenko, Molodtsova and Stepchenkova, 2015; Lineman et al., 2015] by considering the topics of Twitter discourse related to each term and investigating the role of extreme temperature events in such discussions. We first addressed whether significant differences existed between global warming and climate change posts on Twitter across various topics of discussion. Then, we examined whether daily average temperatures and extreme temperature events were correlated with global warming and climate change tweets.
We found the topic of discussion was an important factor in whether messages about global warming or climate change were more prevalent. While more reactions to global warming were observed for topics related to weather and energy, more climate change tweets were about environmental and political content. Consistent with previous research [Kirilenko, Molodtsova and Stepchenkova, 2015], our findings also showed that posts about global warming (but not climate change) were significantly correlated with anomalous temperature and impacted by seasonality. This result was further supported in our case study of the “heat wave”, where a statistically significant correlation between anomalous temperature and global warming reactions was observed. The January 2014 “cold surge” case study supported our finding that political statements appear to be associated with more climate change tweets relative to global warming.
These results have implications for climate change communication. Our findings underscore the importance of considering how communication may translate into concerns among lay audiences. Here, we demonstrate that Twitter audiences associate different dimensions of the phenomenon with the terms “climate change” and “global warming.” This highlights a need for strategic use of these terms as they may influence public discourses of climate change. However, the nature of the influence is likely to vary across different segments of the publics [Villar and Krosnick, 2011]. Depending on the policy issue at hand, it may be important to use the appropriate term to describe the phenomenon that resonates with people’s internal schema when developing messages about various aspects of the issue, such as using global warming to communicate energy issues and climate change for environment-related issues. It may also be more effective to discuss the issue using global warming during periods of temperature extremes, as we found evidence of a strong link between the term and anomalous temperature and ${T}_{sq.anom.}$. Alternatively, “climate change” appears to be more linked with the political aspects of the issue; this term may be more appropriate for use in general discourses related to policies or the phenomenon itself.
As previous research conducted in the UK suggests that the term “global warming” is associated with higher concerns for the issue [Whitmarsh, 2009], our demonstration of linkages with temperature becomes more pertinent as it may indicate periods of high attention and concern. “Heat waves” and “cold surges” may be ideal times to discuss policies or communicate about climate change, as both interest and attention increase.
Lastly, despite the large number of people who recognize the need for significant lifestyle changes due to climate change, attitudes toward climate change are also tied to extreme temperature. This suggests there may be a disconnect between public opinion and behavior change, as attitudes and attention levels fluctuate with changes in temperature [Bamberg and Möser, 2007; Kollmuss and Agyeman, 2002]. Since our results are based on correlations, future work should probe the causal relationships underpinning these findings and should consider how discourses on other Web-2.0 media are affected by physical factors.
### A Supplemental table and figure
Table 4: List of geographic regions in the United States modified from those delineated by the National Weather Service’s Regional Climate Centers.
### References
Akerlof, K. and Maibach, E. W. (2011). ‘A rose by any other name…?: What members of the general public prefer to call “climate change”’. Climatic Change 106 (4), pp. 699–710. https://doi.org/10.1007/s10584-011-0070-4.
Akerlof, K., DeBono, R., Berry, P., Leiserowitz, A., Roser-Renouf, C., Clarke, K.-L., Rogaeva, A., Nisbet, M. C., Weathers, M. R. and Maibach, E. W. (2010). ‘Public Perceptions of Climate Change as a Human Health Risk: Surveys of the United States, Canada and Malta’. International Journal of Environmental Research and Public Health 7 (6), pp. 2559–2606. https://doi.org/10.3390/ijerph7062559.
American Meteorological Society (2012). ‘Climate change’. In: AMS Glossary. Boston, MA, U.S.A.: American Meteorological Society. URL: http://glossary.ametsoc.org/wiki/Climate_change.
Anderson, A. A., Myers, T. A., Maibach, E. W., Cullen, H., Gandy, J., Witte, J., Stenhouse, N. and Leiserowitz, A. (2013). ‘If They Like You, They Learn from You: How a Brief Weathercaster-Delivered Climate Education Segment Is Moderated by Viewer Evaluations of the Weathercaster’. Weather, Climate, and Society 5 (4), pp. 367–377. https://doi.org/10.1175/wcas-d-12-00051.1.
Bamberg, S. and Möser, G. (2007). ‘Twenty years after Hines, Hungerford, and Tomera: A new meta-analysis of psycho-social determinants of pro-environmental behaviour’. Journal of Environmental Psychology 27 (1), pp. 14–25. https://doi.org/10.1016/j.jenvp.2006.12.002.
Beauchamp, N. (2016). ‘Predicting and Interpolating State-Level Polls Using Twitter Textual Data’. American Journal of Political Science 61 (2), pp. 490–503. https://doi.org/10.1111/ajps.12274.
Besel, R. D. (2007). ‘Communicating climate change: Climate rhetorics and discursive tipping points in United States global warming science and public policy’. Ph.D. dissertation. Champaign, IL, U.S.A.: University of Illinois at Urbana-Champaign.
Borth, S., Castro, R. and Birk, K. (2012). The Historic March 2012 Heatwave: A Meteorological Perspective. Chicago, IL, U.S.A.: National Weather Service.
Bosart, L. F., Hakim, G. J., Tyle, K. R., Bedrick, M. A., Bracken, W. E., Dickinson, M. J. and Schultz, D. M. (1996). ‘Large-Scale Antecedent Conditions Associated with the 12–14 March 1993 Cyclone (“Superstorm ’93”) over Eastern North America’. Monthly Weather Review 124 (9), pp. 1865–1891. https://doi.org/10.1175/1520-0493(1996)124<1865:lsacaw>2.0.co;2.
Boykoff, M. T. and Boykoff, J. M. (2004). ‘Balance as bias: global warming and the US prestige press’. Global Environmental Change 14 (2), pp. 125–136. https://doi.org/10.1016/j.gloenvcha.2003.10.001.
— (2007). ‘Climate change and journalistic norms: A case-study of US mass-media coverage’. Geoforum 38 (6), pp. 1190–1204. https://doi.org/10.1016/j.geoforum.2007.01.008.
Boyle, A. (2012). ‘Ups and downs for Higgs boson buzz’. NBC News. URL: http://cosmiclog.nbcnews.com/_news/2012/06/21/12345552-ups-and-downs-for-higgs-boson-buzz?lite.
Brossard, D. (2013). ‘New media landscapes and the science information consumer’. Proceedings of the National Academy of Sciences 110 (Supplement 3), pp. 14096–14101. https://doi.org/10.1073/pnas.1212744110. PMID: 23940316.
Brossard, D. and Nisbet, M. C. (2007). ‘Deference to Scientific Authority Among a Low Information Public: Understanding U.S. Opinion on Agricultural Biotechnology’. International Journal of Public Opinion Research 19 (1), pp. 24–52. https://doi.org/10.1093/ijpor/edl003.
Brossard, D., Scheufele, D. A., Kim, E. and Lewenstein, B. V. (2009). ‘Religiosity as a perceptual filter: examining processes of opinion formation about nanotechnology’. Public Understanding of Science 18 (5), pp. 546–558. https://doi.org/10.1177/0963662507087304.
Cardwell, F. S. and Elliott, S. J. (2013). ‘Making the links: do we connect climate change with health? A qualitative case study from Canada’. BMC Public Health 13 (1). https://doi.org/10.1186/1471-2458-13-208.
Ceron, A., Curini, L., Iacus, S. M. and Porro, G. (2014). ‘Every tweet counts? How sentiment analysis of social media can improve our knowledge of citizens’ political preferences with an application to Italy and France’. New Media & Society 16 (2), pp. 340–358. https://doi.org/10.1177/1461444813480466.
Cody, E. M., Reagan, A. J., Mitchell, L., Dodds, P. S. and Danforth, C. M. (2015). ‘Climate Change Sentiment on Twitter: An Unsolicited Public Opinion Poll’. PLOS ONE 10 (8). Ed. by S. Lehmann, e0136092. https://doi.org/10.1371/journal.pone.0136092.
Collingwood, L. and Wilkerson, J. (2012). ‘Tradeoffs in Accuracy and Efficiency in Supervised Learning Methods’. Journal of Information Technology & Politics 9 (3), pp. 298–318. https://doi.org/10.1080/19331681.2012.669191.
Conway, B. A., Kenski, K. and Wang, D. (2015). ‘The Rise of Twitter in the Political Campaign: Searching for Intermedia Agenda-Setting Effects in the Presidential Primary’. Journal of Computer-Mediated Communication 20 (4), pp. 363–380. https://doi.org/10.1111/jcc4.12124.
Duggan, M., Ellison, N. B., Lampe, C., Lenhart, A. and Madden, M. (9th January 2015). ‘Social Media Update 2014’. Pew Research Center. URL: http://www.pewinternet.org/2015/01/09/social-media-update-2014/.
Dunlap, R. E. and McCright, A. M. (2008). ‘A Widening Gap: Republican and Democratic Views on Climate Change’. Environment: Science and Policy for Sustainable Development 50 (5), pp. 26–35. https://doi.org/10.3200/envt.50.5.26-35.
Finucane, M. L., Alhakami, A., Slovic, P. and Johnson, S. M. (2000). ‘The affect heuristic in judgments of risks and benefits’. Journal of Behavioral Decision Making 13 (1), pp. 1–17. https://doi.org/10.1002/(sici)1099-0771(200001/03)13:1<1::aid-bdm333>3.0.co;2-s.
Gottfried, J. and Shearer, E. (2016). ‘News Use Across Social Media Platforms 2016’. Pew Research Center. URL: http://assets.pewresearch.org/wp-content/uploads/sites/13/2016/05/PJ_2016.05.26_social-media-and-news_FINAL-1.pdf.
Greenwood, S., Perrin, A. and Duggan, M. (11th November 2016). ‘Social Media Update 2016’. Pew Research Center. URL: http://www.pewinternet.org/2016/11/11/social-media-update-2016/.
Hakim, G. J., Keyser, D. and Bosart, L. F. (1996). ‘The Ohio Valley Wave-Merger Cyclogenesis Event of 25–26 January 1978. Part II: Diagnosis Using Quasigeostrophic Potential Vorticity Inversion’. Monthly Weather Review 124 (10), pp. 2176–2205. https://doi.org/10.1175/1520-0493(1996)124<2176:tovwmc>2.0.co;2.
Hitlin, P. (1st April 2015). ‘Methodology: How Crimson Hexagon Works’. Pew Research Center. URL: http://www.journalism.org/2015/04/01/methodology-crimson-hexagon/.
Hopkins, D. J. and King, G. (2010). ‘A Method of Automated Nonparametric Content Analysis for Social Science’. American Journal of Political Science 54 (1), pp. 229–247. https://doi.org/10.1111/j.1540-5907.2009.00428.x.
Howe, P. D., Mildenberger, M., Marlon, J. R. and Leiserowitz, A. (2015). ‘Geographic variation in opinions on climate change at state and local scales in the USA’. Nature Climate Change 5 (6), pp. 596–603. https://doi.org/10.1038/nclimate2583.
### Authors
Sara K. Yeo (Ph.D., University of Wisconsin-Madison) is an Assistant Professor in the Department of Communication and an affiliate with the Global Change and Sustainability Center and the Environmental Humanities Program at the University of Utah. Her research interests include science communication, public opinion of STEM issues, and information seeking and processing. In addition to her training in science communication, Dr. Yeo is trained as a bench and field scientist and holds an M.S. in Oceanography from the University of Hawai’i at Mānoa. E-mail: sara.yeo@utah.edu.
Zachary J. Handlos (Ph.D., University of Wisconsin-Madison) is a Visiting Assistant Professor in the Department of Geography at Northern Illinois University. His research interests are in synoptic meteorology, tropical meteorology, and climate science literacy. His current research involves the investigation of the large-scale environments conducive to the vertical superposition of the polar and subtropical jet streams within the Northern Hemisphere, especially within the West Pacific. E-mail: zachary.handlos@eas.gatech.edu.
Alexandra Karambelas (Ph.D., University of Wisconsin-Madison) is a Postdoctoral Research Fellow at The Earth Institute of Columbia University. Using her background in atmospheric and environmental sciences, Dr. Karambelas’ research uses air quality models and observations to assess connections between energy-sector anthropogenic emissions, ambient particulate and gaseous pollutant concentrations, and human health impacts in India. She received her Ph.D. in Environment and Resources from the University of Wisconsin-Madison. E-mail: ak4040@columbia.edu.
Leona Yi-Fan Su (Ph.D., University of Wisconsin-Madison) is an Assistant Professor at the Department of Communication at the University of Utah. Her research interests focus on the interplay between new media and society, particularly in the context of science and the environment, and on how the new media influence public opinion and understanding. E-mail: leonayifansu@gmail.com.
Kathleen M. Rose (M.S., Ohio State University) is a doctoral student in the Department of Life Sciences Communication at the University of Wisconsin-Madison. Rose’s research focuses on public opinion and understanding of controversial scientific and environmental issues. Her recent research relates to public engagement with science. E-mail: kmrose@wisc.edu.
Dominique Brossard (Ph.D., Cornell University) is Professor and Chair in the Department of Life Sciences Communication and an affiliate at the Robert & Jean Holtz Center for Science and Technology Studies, the Center for Global Studies, and the Morgridge Institute for Research at the University of Wisconsin-Madison. Her research agenda focuses on the intersection between science, media, and policy. E-mail: dbrossard@wisc.edu.
Kyle S. Griffin (M.S., University at Albany, SUNY) is a Ph.D. student in Atmospheric and Oceanic Sciences at the University of Wisconsin-Madison. His Ph.D. work focuses on identifying variability in the North Pacific jet and the driving factors behind such variability. E-mail: kylegriffin00@gmail.com.
### How to cite
Yeo, S. K., Handlos, Z. J., Karambelas, A., Su, L. Y.-F., Rose, K. M., Brossard, D. and Griffin, K. S. (2017). ‘The influence of temperature on #ClimateChange and #GlobalWarming discourses on Twitter’. JCOM 16 (05), A01. https://doi.org/10.22323/2.16050201.
### Endnotes
1We distinguished retweets from original posts in our analysis; approximately equivalent proportions were retweets in both monitors (climate change: 41 percent, global warming: 37 percent). Since we quantified Twitter discourses to which users are exposed in aggregate, the question of whether posts are original or retweets, while interesting, is not the focus of the current work.
2Keywords for the climate change and global warming monitors are (“climate change” OR “#climatechange” OR “#climate #change”) and (“global warming” OR “#globalwarming” OR “#global #warming”), respectively.
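For readers who want to replicate the filtering, here is a minimal sketch of how boolean keyword rules of this form could be applied to raw tweet text (the matcher below is our own illustration, not the monitoring platform's actual implementation):

```python
def matches_monitor(text: str, phrase: str, hashtag: str, split_tags: tuple) -> bool:
    """True if a tweet satisfies a rule of the form:
    ("climate change" OR "#climatechange" OR ("#climate" AND "#change"))."""
    t = text.lower()
    if phrase in t or hashtag in t:
        return True
    return all(tag in t for tag in split_tags)

tweet = "Cold snap this week, but #climate #change is about long-term trends."
print(matches_monitor(tweet, "climate change", "#climatechange", ("#climate", "#change")))  # True
print(matches_monitor(tweet, "global warming", "#globalwarming", ("#global", "#warming")))  # False
```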
Even the use of replaceable magnesium plates in a battery every so many miles would give the necessary range for families on long trips. Magnet-only motors are easy to build. There are plans around, and they are cheap to build. The trouble is that no one knows how to get them to spin unaided. I have lost count of the people I have corresponded with who seriously believe that magnetising a magnet somehow gives it energy that is then used to drive the motor. Once rumours about how magnetic motors “work” get started, they spread through the free-energy websites and forums as “truth”. People believe what is proclaimed because they don’t have the education or experience to question the bogus claims. I suppose that if people can wholeheartedly believe an all-powerful supernatural being created the entire universe, it isn’t hard for them to believe a magnet can power a motor; both beliefs demonstrate the same ignorance. To follow up on my own comment: optimistically, if the “drag” created by the production of electricity were less than the permanent-magnet “drive” of the rotating armature or field, it could theoretically work. Someone noted in a previous posting that Tesla already developed this motor.
Physicists refuse to do anything with back EMF, which the SG and SSG utilize. I don’t believe in perpetual motion or perpetual motors, and even a permanent-magnet motor-generator wouldn’t be perpetual. I do believe there are plenty of ways to create a better motor or generator, and a combination motor-generator utilizing the new super magnets would be a huge step in that direction; it will be found soon if the conglomerates don’t destroy the opportunity for the populace. When I first got into these forums there was a product claiming over-unity (low current in with high current out), and they were selling their machine. It has since been taken off the market, either through a sell-out to a conglomerate or because it was overrun with orders; I don’t know! It would make sense for power companies to wait and then buy out entrepreneurs after they start marketing an item, and to ignore the other tripe on the internet. Bedini’s SSG demonstration at a convention of scientists and physicists (with hands on), with a ten-foot-diameter wheel of magnets, has been a huge positive for me. Using one battery to charge ten others of the same kind would be a dramatic increase in efficiency over current technology.
Or, you could say, “That’s a positive ΔG. That’s not going to be spontaneous.” The Gibbs free energy of the system is a state function because it is defined in terms of thermodynamic properties that are state functions. The change in the Gibbs free energy of the system that occurs during a reaction is therefore equal to the change in the enthalpy of the system minus the change in the product of the temperature times the entropy of the system. The beauty of the equation defining the free energy of a system is its ability to determine the relative importance of the enthalpy and entropy terms as driving forces behind a particular reaction. The change in the free energy of the system that occurs during a reaction measures the balance between the two driving forces that determine whether a reaction is spontaneous. As we have seen, the enthalpy and entropy terms have different sign conventions. When a reaction is favored by both enthalpy (ΔH° < 0) and entropy (ΔS° > 0), there is no need to calculate the value of ΔG° to decide whether the reaction should proceed. The same can be said for reactions favored by neither enthalpy (ΔH° > 0) nor entropy (ΔS° < 0). Free energy calculations become important for reactions favored by only one of these factors. ΔG° for a reaction can be calculated from tabulated standard-state free energy data. Since there is no absolute zero on the free-energy scale, the easiest way to tabulate such data is in terms of standard-state free energies of formation, ΔGf°. As might be expected, the standard-state free energy of formation of a substance is the difference between the free energy of the substance and the free energies of its elements in their thermodynamically most stable states at 1 atm, all measurements being made under standard-state conditions. The sign of ΔG° tells us the direction in which the reaction has to shift to come to equilibrium. The fact that ΔG° is negative for this reaction at 25 °C means that a system under standard-state conditions at this temperature would have to shift to the right, converting some of the reactants into products, before it can reach equilibrium. The magnitude of ΔG° for a reaction tells us how far the standard state is from equilibrium: the larger the value of ΔG°, the further the reaction has to go to get from the standard-state conditions to equilibrium. As the reaction gradually shifts to the right, converting N2 and H2 into NH3, the value of ΔG for the reaction will decrease. If we could find some way to harness the tendency of this reaction to come to equilibrium, we could get the reaction to do work. The free energy of a reaction at any moment in time is therefore said to be a measure of the energy available to do work. When a reaction leaves the standard state because of a change in the ratio of the concentrations of the products to the reactants, we have to describe the system in terms of non-standard-state free energies of reaction. The difference between ΔG° and ΔG for a reaction is important. There is only one value of ΔG° for a reaction at a given temperature, but there are an infinite number of possible values of ΔG. Data on the left side of the figure (not reproduced here) correspond to relatively small values of Qp; they therefore describe systems in which there is far more reactant than product.

The sign of ΔG for these systems is negative and the magnitude of ΔG is large. The system is therefore relatively far from equilibrium, and the reaction must shift to the right to reach equilibrium. Data on the far right side of the figure describe systems in which there is more product than reactant. The sign of ΔG is now positive and the magnitude of ΔG is moderately large. The sign of ΔG tells us that the reaction would have to shift to the left to reach equilibrium.
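The quantitative relation behind that figure (how ΔG departs from ΔG° as the reaction quotient Q changes) is the standard identity ΔG = ΔG° + RT ln Q. A minimal Python sketch, with illustrative numbers rather than the ones from the original figure:

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol·K)

def delta_g(delta_g0, temp_k, q):
    """Non-standard-state free energy of reaction: dG = dG° + RT ln Q."""
    return delta_g0 + R * temp_k * math.log(q)

# Illustrative dG° of -33 kJ/mol-rxn (roughly N2 + 3 H2 -> 2 NH3 at 298 K)
for q in (1e-3, 1.0, 1e3, 1e6, 1e9):
    print(f"Q = {q:8.0e}  ->  dG = {delta_g(-33.0, 298, q):+7.1f} kJ/mol-rxn")
# Small Q (mostly reactants): dG is strongly negative, so the reaction shifts right.
# dG crosses zero at Q = K; at large Q (mostly products) dG is positive, shift left.
```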
It is a (mythical) motor that runs on permanent magnets only, with no external power applied. How can you miss that? It’s so obvious. Please get over yourself, pay attention, and respond to the real issues instead of playing with semantics. @Foulsham: I’m assuming that when you say magnetic motor you mean MAGNET MOTOR. That’s like saying democratic when you mean democrat. They are both wrong, because democrats don’t do anything democratic; they force laws to create other laws to destroy the USA for the UN and the New World Order. There are thousands of magnetic motors; in fact, all motors are magnetic, whether built from coils only, coils with magnets, or magnets only. It is not looking positive for the magnet-only motors at this time, as those are being bought up by the power companies as soon as they show up. We use 60 Hz in the USA, but the 50 Hz used in Europe is more efficient. And how can you quibble endlessly about whether a “Magical Magnetic Motor” that does not exist produces AC or DC (just an opportunity to show off your limited knowledge)? FYI, the “Magical Magnetic Motor” produces neither AC nor DC, at no particular frequency or voltage! It produces current with a Genesis waveform, a voltage that adapts to any device, an amperage that adapts magically, and it is perfectly harmless to the touch.
The other thing is: do they put out a pure sine wave like what comes from the power company, or does another device need to be added to change it to pure sine? I think I will just build what I know best if I have to use batteries, and that will be the 12 V system. I don’t think I will have the heat and power loss with what I am doing; everything will be close together, with large cables. Also, nobody has left a comment on the question I had about the N50 magnets magnetized through their thickness; do you know of any place that might have those? I’ll have to look at the smart drives, but another problem I am having is that I am not finding any PMA, no matter how big it is, that puts out very much power.
We can make the following conclusions about when processes will have a negative ΔG(system). For example, for melting ice at 293 K (ΔH = 6.01 kJ/mol-rxn, ΔS = 0.022 kJ/(mol-rxn·K)):

ΔG = ΔH − TΔS
   = 6.01 kJ/mol-rxn − (293 K)(0.022 kJ/(mol-rxn·K))
   = 6.01 kJ/mol-rxn − 6.45 kJ/mol-rxn
   = −0.44 kJ/mol-rxn

Being able to calculate ΔG can be enormously useful when we are trying to design experiments in the lab! We will often want to know which direction a reaction will proceed at a particular temperature, especially if we are trying to make a particular product. Chances are we would strongly prefer the reaction to proceed in a particular direction (the direction that makes our product!), but it’s hard to argue with a positive ΔG! Our bodies are constantly active. Whether we’re sleeping or whether we’re awake, our body is carrying out many chemical reactions to sustain life. Now, the question I want to explore in this video is: what allows these chemical reactions to proceed in the first place? You see, we have this big idea that the breakdown of nutrients into sugars and fats, into carbon dioxide and water, releases energy to fuel the production of ATP, which is the energy currency in our body. Many textbooks go one step further, saying that these energy-releasing processes, that is to say, chemical reactions that release energy, have something called a negative delta G value, or a negative Gibbs free energy. In this video, we’re going to talk about what the change in Gibbs free energy, or delta G as it’s most commonly known, is, and what the sign of this numerical value tells us about the reaction. Now, in order to understand delta G, we need to be talking about a specific chemical reaction, because delta G is a quantity that’s defined for a given reaction or a sum of reactions. So for the purposes of simplicity, let’s say that we have some hypothetical reaction where A is turning into a product B. Whether or not this reaction proceeds as written is something we can determine by calculating the delta G for this specific reaction. So, just to phrase this again: the delta G, or change in Gibbs free energy, of a reaction tells us very simply whether or not that reaction will occur.
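A quick arithmetic check of that example, as a minimal Python sketch:

```python
def gibbs(delta_h, temp_k, delta_s):
    """Change in Gibbs free energy, dG = dH - T*dS, in kJ/mol-rxn."""
    return delta_h - temp_k * delta_s

dG = gibbs(delta_h=6.01, temp_k=293, delta_s=0.022)
print(f"dG = {dG:.2f} kJ/mol-rxn")  # -0.44: negative, so melting is spontaneous at 293 K
# Rerun with temp_k=263 (below freezing) and dG comes out +0.22: not spontaneous.
```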
You need a solid main bearing, and you need to fix the “drive” magnet(s) in place to allow you to take measurements. With (or without) shielding, you find that the torque required to move two magnets into a position where they repel (or attract) is EXACTLY the same as the torque they deliver as they move out of that position. I’m not asking you to believe me, but if you don’t take the measurements you’ll never understand the whole reason why I hold my stance. Mumetal is a nickel-iron alloy that is effective in the shielding of magnetic and electromagnetic fields. I only heard about it myself a couple of days ago. According to the company that makes it and other EMF shielding barriers, there is a better product out there, called Magnet Shield, specifically for static magnetic fields. I should have the info on that in a few hours, I hope, when they get back to me. Hey, believe me, I am not giving up. I have just hit a point where I cannot seem to improve and perfect my motor. It runs, but not the way I want it to, and I think a big part of it is my shielding; that’s why I have been asking about shielding. I had never heard of mumetal. What is it? I have looked into the electromagnetic over-unity stuff too, but my feeling on that, at least for me, is that it would be cheating on the all-magnet motor; you are basically going back to the electric motor. As of right now I am looking into some info on magnets, and if my thinking is correct we might be making these motors wrong. You can look at the question I just asked about magnets and see if you can come up with any answers; I am looking into it myself.
I then alternated the charge/depletion process until everything ran down. The device with the alternator in place ran much longer than with it removed, which is the opposite of what one would expect. My imagination currently is trying to determine how long the “system” would run if it were tuned and used the new Li-Fe nano-phosphate batteries rather than the lead-acid batteries I used previously. And could the discharged batteries be charged up quicker than the recharged battery is depleted, making for a useful, practical motor? These people are claiming to have invented perpetual motion MACHINES. That is my gripe. No one has ever demonstrated a working version of such a beast or explained how it could work (in terms that make sense, and as arrogant as this may sound, invoking zero-point energy or harnessing gravity waves makes as much sense as saying it uses powdered unicorn horns as the secret ingredient).
Meadows told Fox News’ Martha MacCallum on Tuesday: “The American people want to bring some closure, not just a few sound bites here or there, so we’re going to be having a hearing this week, not only covering some of those pages that you’re talking about, but hearing directly from three whistleblowers that have actually spent the majority of the last two years investigating this.”
If a reaction is not at equilibrium, it will move spontaneously towards equilibrium, because this allows it to reach a lower-energy, more stable state. This may mean a net movement in the forward direction, converting reactants to products, or in the reverse direction, turning products back into reactants. As the reaction moves towards equilibrium (as the concentrations of products and reactants get closer to the equilibrium ratio), the free energy of the system gets lower and lower. A reaction that is at equilibrium can no longer do any work, because the free energy of the system is as low as possible. Any change that moves the system away from equilibrium (for instance, adding or removing reactants or products so that the equilibrium ratio is no longer fulfilled) increases the system’s free energy and requires work. As an example of how a cell can keep reactions out of equilibrium: the cell expends energy to import the starting molecule of the pathway, A, and export the end product of the pathway, D, using ATP-powered transmembrane transport proteins.
What is the name he gave it for research reasons? Thanks for the discussion; I appreciate the input. I assume you have investigated the claims and found none worthy of further research? What element of the idea is failing? If one is lucky enough to keep something rotating on its own, the drag of a crankshaft, or of an “alternator” producing electricity at the same time, seems like it would be too much to keep the motor running. Forget about discussing which type of battery it may charge or which vehicle it may power; the question is: does it work? No one anywhere in the world has ever gotten a magnetic motor to run, let alone power anything. If you invest in one and it seems to be taking a very long time to develop, it means one thing: you have been stung. Don’t say you haven’t been warned. As an optimist myself, I want to see it work and think it can. It would have to be more than self-sustaining: enough to recharge offline Li-Fe nano-phosphate batteries.
The “energy” quoted in magnetization is the joules of energy required, in terms of volts and amps, to drive the magnetizing coil, the critical factors being the amps and the number of turns of wire in the coil. The energy pushed into a magnet is not stored as usable work; it forces the magnetic domains to align. If you do a calculation of the theoretical energy release from magnets according to the free-energy websites, there is enough pent-up energy for a magnet to explode with the force of a bomb. And that is never going to happen. The most infamous of magnetic motors, the “Perendev” by Mike Brady, has angled magnets in both the rotor and the stator. It doesn’t work. Angling the magnets does not reduce the opposing force as a rotor magnet moves up to pass a stator magnet. As I have suggested, measure the torque and you’ll see that angling the magnets only reduces the forces overall; it does not make the force before the magnets “pass” each other any smaller than the force after passing. Don’t take my word for it: measure it. Another test: drive the rotor up to speed with a small motor, then time how long it takes to slow down; then do the same test in reverse. It will take the same time to slow down, and any differences will be due to experimental error. Oh, and I forgot about the mags losing their power.
Best to leave possible sources of motive force out of it; my $0.02. Hey, I forgot about the wind generator that you said you were going to stick with for now. I am building a vertical wind generator right now, but the thing you have to look at is whether you have enough wind all the time to do what you want. Even if all you want to do is run a few things in your home, it will be more expensive to run them off of it than to stay on the grid. I do not know how much batteries cost there, but here they are very expensive now. Buying the batteries alone kills any savings you would have had on your power bill. All I am building mine for is to power a few things in my greenhouse and to have some emergency power along with my gas generator. I live in Utah, in the Salt Lake valley, and the wind blows a lot, but there are days when there is nothing, or just a small breeze, and every night there is nothing unless there is a storm coming. I called a battery company here and asked about batteries, and the guy said he wouldn’t even sell me a battery until I knew what my generator put out. I was looking into forklift batteries, and he said people get the batteries, hook up their generator, and the generator will not keep up with charging the batteries while supplying the load at the same time; the batteries drain too far, never charge all the way, and go bad too soon. So there are things to look at as you build, especially the cost. Hey, I went on the net yesterday and found the same site on the shielding, and it has what I think will help me a lot. Sounds like you’re going to become a quitter on the mag motor and cheat by feeding power into it; I’m just kidding, have fun. I have decided that I will not get my motor to run any better than it does, so I am going to design a totally new and different motor, using both the magnets and the shielding differently. If it works, it works; if not, oh well, just try something different. You might want to look at what was said to Gilgamesh about the electromagnets before you go too far, unless you have some fantastic idea that will give you good over-unity.
The net forces in a magnetic motor are zero, therefore rotation under its own power is impossible. One observation with magnetic motors is that, as the net forces are zero, the rotor can be spun in either direction and will still come to a halt after being given an initial spin. I assume he thinks it already works: “Properly applied and constructed, the magnetic motor can spin around at a variable rate, depending on the size of the magnets used and how close they are to each other. In an experiment of my own I constructed a simple magnet motor using the basic idea as shown above. It took me a fair amount of time to adjust the magnets to the correct angles for it to work, but I was able to make the wheel spin on its own using the magnets only, no external power source.” When you build the framework, keep in mind that one wheel won’t be enough to turn a generator power head; you’ll need to add more wheels for that. If you do, keep them spaced a few inches or so apart. If you don’t want to build the whole framework at first, just use a sheet of thick plywood and mount everything on that with some sturdy bolts. That will allow you to do some testing.
I am currently designing my own magnet motor. I like to think that something like this is possible, as our species has achieved many things others thought impossible, and science has changed its thinking almost on a daily basis due to new discoveries. I think if we can get past the wording here, stop taking each word literally, and focus on the concept, there can be some serious breakthroughs with the many smart, forward-thinking people in this thread. Let’s just say someone did invent a working free-energy engine. How do you suppose a person could sell such a device for billions and billions of dollars without it getting stolen first? Patenting such an idea makes it public knowledge, and other countries, like China, will just steal it. Such a device affects the whole world. How does a person protect himself from big corporations and big countries assassinating him? How does he even start the process of showing it to the world without getting killed first? Repulsive fields, after all, were dreamed up by Tesla in his AC induction motor invention.
physics motion problem
• July 12th 2007, 05:42 PM
cruzangyal
physics motion problem
66. A boat that travels at a speed of 6.75 m/s in still water is to go directly across a river and back (Fig. 3.31). The current flows at 0.50 m/s. (a) At what angle(s) must the boat be steered? (b) How long does it take to make the round trip? (Assume that the boat's speed is constant at all times, and neglect turnaround time.)
88. An astronaut on the moon fires a projectile from a launcher on a level surface so as to get the maximum range. If the launcher gives the projectile a muzzle velocity of 25 m/s, what is the range of the projectile? [Hint: the acceleration due to gravity on the moon is only one sixth of that on earth.]
• July 12th 2007, 06:19 PM
topsquark
Quote:
Originally Posted by cruzangyal
66. A boat that travels at a speed of 6.75 m/s in still water is to go directly across a river and back (Fig. 3.31). The current flows at 0.50 m/s. (a) At what angle(s) must the boat be steered? (b) How long does it take to make the round trip? (Assume that the boat's speed is constant at all times, and neglect turnaround time.)
Hint: the boat's component of velocity in the direction of the flow of the water must be equal and opposite to the velocity of the flow. So if we know that the current flow is 0.50 m/s and the magnitude of the velocity of the boat is 6.75 m/s, how do you find the angle? (Sketch a diagram before you answer this.)
-Dan
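Working out the hint (and taking Fig. 3.31, which is not reproduced here, to show a straight crossing of width $d$): steering upstream at angle $\theta$ to the straight-across direction requires $6.75 \sin\theta = 0.50$, so $\theta = \sin^{-1}(0.50/6.75) \approx 4.2^o$ on each leg. The across-river speed is then $\sqrt{6.75^2 - 0.50^2} \approx 6.73~m/s$, and the round trip takes $t = 2d/(6.73~m/s)$.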
• July 12th 2007, 06:29 PM
topsquark
Quote:
Originally Posted by cruzangyal
88. An astronaut on the moon fires a projectile from a launcher on a level surface so as to get the maximum range. If the launcher gives the projectile a muzzle velocity of 25 m/s, what is the range of the projectile? [Hint: the acceleration due to gravity on the moon is only one sixth of that on earth.]
Let the +x direction be the horizontal direction of the initial velocity (ie. parallel to the ground) and the +y direction be straight up. Set the origin of this coordinate system be at the point where the projectile is fired. Then we know that
$\begin{matrix}t_0 = 0~s & t = ? \\ x_0 = 0~m & x = ? \\ y_0 = 0~m & y = 0~m \\ v_{0x} = 25 \cdot cos(45^o) & v_x = v_{0x} \\ v_{0y} = 25 \cdot sin(45^o) & v_y = ? \\ a_x = 0~m/s^2 & a_y = -g \end{matrix}$
(I should mention that I know the angle of incline of the gun is at 45 degrees, since this provides the maximum range no matter what the g value is. I also know that $v_x = v_{0x}$ since $a_x = 0~m/s^2$. Also, y = 0 m since the projectile is being fired over a planar surface, so the projectile hits the ground at the same height it left at. g is, of course, equal to 1/6 times 9.8 m/s^2 in this case.)
Now, we want the range x. So
$x = x_0 + v_{0x}t$
$x = 25 \cdot cos(45^o) \cdot t$
We don't know t.
So look at the y information. We know that
$y = y_0 + v_{0y}t + \frac{1}{2}a_yt^2$
$0 = 25 \cdot sin(45^o) \cdot t - \frac{1}{12} \cdot 9.8 t^2$
Solve this for t and plug it into the x equation to get the range.
-Dan
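A quick numeric check of Dan's setup (a sketch assuming his 45-degree launch angle and $g = 9.8/6~m/s^2$):

```python
import math

v0 = 25.0                 # muzzle velocity, m/s
g_moon = 9.8 / 6          # lunar gravity, one sixth of Earth's, m/s^2
theta = math.radians(45)  # maximum-range launch angle over level ground

# Time of flight from 0 = v0*sin(theta)*t - (1/2)*g*t^2  =>  t = 2*v0*sin(theta)/g
t = 2 * v0 * math.sin(theta) / g_moon
x = v0 * math.cos(theta) * t  # range
print(f"t = {t:.1f} s, range = {x:.0f} m")  # ~21.6 s and ~383 m (= v0^2/g at 45 degrees)
```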
### Problem 8-21 (AC, Chapter 8, Lesson 8.1.2)
8-21.
Find the equation of the line that passes through the points $\left(-800,200\right)$ and $\left(-400,300\right)$.
Find the growth.
$\text{Slope (growth) }=\frac{\text{change in }y}{\text{change in }x}=\frac{100}{400}$
$\textit{m}=\frac{1}{4}$
Substitute the growth ($m$) and a point $(x, y)$ from the problem into $y = mx + b$. Solve for $b$.
$200=\frac{1}{4}(-800)+\textit{b}$
$200 = −200 + b$
$400 = b$
$\textit{y}=\frac{1}{4}\textit{x}+400$
Remember to check your work.
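A quick way to check the work, as a minimal Python sketch (not part of the lesson):

```python
def line_through(p1, p2):
    """Return slope m and intercept b of the line y = mx + b through two points."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)
    b = y1 - m * x1
    return m, b

m, b = line_through((-800, 200), (-400, 300))
print(m, b)  # 0.25 400.0, i.e. y = (1/4)x + 400
assert m * -800 + b == 200 and m * -400 + b == 300  # both points lie on the line
```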
# Rutherford’s idea of an electron
Pre-scriptum (dated 27 June 2020): Two illustrations in this post were deleted by the dark force. We will not substitute them. The reference is given and it will help you to look them up yourself. In fact, we think it will greatly advance your understanding if you do so. Mr. Gottlieb may actually have done us a favor by trying to pester us.
### Electrons, atoms, elementary particles and wave equations
The New Zealander Ernest Rutherford came to be known as the father of nuclear physics. He was the first to provide a reliable estimate of the order of magnitude of the size of the nucleus. To be precise, in the 1921 paper which we will discuss here, he came up with an estimate of about 15 fm for massive nuclei, which is the current estimate for the size of a uranium nucleus. His experiments also helped to significantly enhance the Bohr model of an atom, culminating – just before WW I started – in the Bohr-Rutherford model of an atom (E. Rutherford, Phil. Mag. 27, 488).
The Bohr-Rutherford model of an atom explained the (gross structure of the) hydrogen spectrum perfectly well, but it could not explain its finer structure—read: the orbital sub-shells which, as we all know now (but not very well then), result from the different states of angular momentum of an electron and the associated magnetic moment.
The issue is probably best illustrated by the two diagrams below, which I copied from Feynman’s Lectures. As you can see, the idea of subshells is not very relevant when looking at the gross structure of the hydrogen spectrum because the energy levels of all subshells are (very nearly) the same. However, the Bohr model of an atom—which is nothing but an exceedingly simple application of the E = h·f equation (see p. 4-6 of my paper on classical quantum physics)—cannot explain the splitting of lines for a lithium atom, which is shown in the diagram on the right. Nor can it explain the splitting of spectral lines when we apply a stronger or weaker magnetic field while exciting the atoms so as to induce emission of electromagnetic radiation.
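For reference, the “exceedingly simple application” alluded to here can be written out as follows (these are the standard Bohr-model formulas, not something taken from Rutherford’s paper). The allowed energies are

$$E_n = -\frac{E_R}{n^2}, \qquad E_R \approx 13.6~\text{eV},$$

and a photon of frequency $f$ is emitted when the electron drops from level $n_2$ to level $n_1$:

$$f = \frac{E_{n_2} - E_{n_1}}{h} = \frac{E_R}{h}\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right).$$

This reproduces the gross hydrogen spectrum (the Rydberg formula), but it assigns the same energy to every subshell of a given $n$, which is exactly why the fine structure in the lithium diagram is beyond it.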
Schrödinger’s wave equation solves that problem—which is why Feynman and other modern physicists claim this equation is “the most dramatic success in the history of the quantum mechanics” or, more modestly, a “key result in quantum mechanics” at least!
Such dramatic statements are exaggerated. First, an even finer analysis of the emission spectrum (of hydrogen or whatever other atom) reveals that Schrödinger’s wave equation is also incomplete: the hyperfine splitting, the Zeeman splitting (anomalous or not) or the (in)famous Lamb shift are to be explained not only in terms of the magnetic moment of the electron but also in terms of the magnetic moment of the nucleus and its constituents (protons and neutrons)—or of the coupling between those magnetic moments (we may refer to our paper on the Lamb shift here). This cannot be captured in a wave equation: second-order differential equations are – quite simply – not sophisticated enough to capture the complexity of the atomic system here.
Also, as we pointed out previously, the current convention in regard to the use of the imaginary unit (i) in the wavefunction does not capture the spin direction and, therefore, makes abstraction of the direction of the magnetic moment too! The wavefunction therefore models theoretical spin-zero particles, which do not exist. In short, we cannot hope to represent anything real with wave equations and wavefunctions.
More importantly, I would dare to ask this: what use is an ‘explanation’ in terms of a wave equation if we cannot explain what the wave equation actually represents? As Feynman famously writes: “Where did we get it from? Nowhere. It’s not possible to derive it from anything you know. It came out of the mind of Schrödinger, invented in his struggle to find an understanding of the experimental observations of the real world.” Our best guess is that it, somehow, models (the local diffusion of) energy or mass densities as well as non-spherical orbital geometries. We explored such interpretations in our very first paper(s) on quantum mechanics, but the truth is this: we do not think wave equations are suitable mathematical tools to describe simple or complex systems that have some internal structure—atoms (think of Schrödinger’s wave equation here), electrons (think of Dirac’s wave equation), or protons (which is what some others tried to do, but I will let you do some googling here yourself).
We need to get back to the matter at hand here, which is Rutherford’s idea of an electron back in 1921. What can we say about it?
### Rutherford’s contributions to the 1921 Solvay Conference
From what you know, and from what I write above, you will understand that Rutherford’s research focus was not on electrons: his prime interest was in explaining the atomic structure and in solving the mysteries of nuclear radiation—most notably the emission of alpha- and beta-particles as well as highly energetic gamma-rays by unstable or radioactive nuclei. In short, the nature of the electron was not his prime interest. However, this intellectual giant was, of course, very much interested in whatever experiment or whatever theory might contribute to his thinking, and that explains why, in his contribution to the 1921 Solvay Conference—which materialized as an update of his seminal 1914 paper on The Structure of the Atom—he devotes considerable attention to Arthur Compton’s work on the scattering of light from electrons, which, at the time (1921), had not even been published yet (Compton’s seminal article on (Compton) scattering was published only in 1923).
It is also very interesting that, in the very same 1921 paper—whose 30 pages are a multiple of his 1914 article and later revisions of it (see, for example, the 1920 version, which actually has wider circulation on the Internet)—Rutherford also offers some short reflections on the magnetic properties of electrons while referring to Parson’s ring current model which, in French, he refers to as “l’électron annulaire de Parson.” Again, it is very strange that we should have to translate Rutherford’s 1921 remarks back into English—as we are sure the original paper must have been translated from English into French rather than the other way around.
However, it is what it is, and so here we do what we have to do: we give you a free translation of Rutherford’s remarks during the 1921 Solvay Conference on the state of research regarding the electron at that time. The reader should note these remarks are buried in a larger piece on the emission of β particles by radioactive nuclei which, as it turns out, are nothing but high-energy electrons (or their anti-matter counterpart—positrons). In fact, we should—before we proceed—draw attention to the fact that the physicists at the time had no clear notion of the concepts of protons and neutrons.
This is, indeed, another remarkable historical contribution of the 1921 Solvay Conference because, as far as I know, this is the first time Rutherford talks about the neutron hypothesis. It is quite remarkable that he does not advance the neutron hypothesis to explain the atomic mass of atoms combining what we now think of as protons and neutrons (Rutherford regularly talks of a mix of ‘positive and negative electrons’ in the nucleus—neither the term proton nor the term neutron was in use at the time) but as part of a possible explanation of nuclear fusion reactions in stars or stellar nebulae. This is, indeed, his response to a question during the discussions of Rutherford’s paper on the possibility of nuclear synthesis in stars or nebulae, raised by the French physicist Jean Baptiste Perrin who, independently of the American chemist William Draper Harkins, had proposed the possibility of hydrogen fusion just the year before (1919):
“We can, in fact, think of enormous energies being released from hydrogen nuclei merging to form helium—much larger energies than what can come from the Kelvin-Helmholtz mechanism. I have been thinking that the hydrogen in the nebulae might come from particles which we may refer to as ‘neutrons’: these would consist of a positive nucleus with an electron at an exceedingly small distance (“un noyau positif avec un électron à toute petite distance”). These would mediate the assembly of the nuclei of more massive elements. It is, otherwise, difficult to understand how the positively charged particles could come together against the repulsive force that pushes them apart—unless we would envisage they are driven by enormous velocities.”
We may add that, just to make sure he got this right, Rutherford is immediately requested to elaborate his point by the Danish physicist Martin Knudsen: “What’s the difference between a hydrogen atom and this neutron?”—which Rutherford simply answers as follows: “In a neutron, the electron would be very much closer to the nucleus.” In light of the fact that it was only in 1932 that James Chadwick would experimentally prove the existence of neutrons (and positively charged protons), we are, once again, deeply impressed by the foresight of Rutherford and the other pioneers here: the predictive power of their theories and ideas is, effectively, truly amazing by any standard—including today’s. I should, perhaps, also add that I fully subscribe to Rutherford’s intuition that a neutron should be a composite particle consisting of a proton and an electron—but that’s a different discussion altogether.
We must come back to the topic of this post, which we will do now. Before we proceed, however, we should highlight one other contextual piece of information here: at the time, very little was known about the nature of α and β particles. We now know that beta-particles are electrons, and that alpha-particles combine two protons and two neutrons. That was not known in the 1920s, however: Rutherford and his associates could basically only see positive or negative particles coming out of these radioactive processes. This further underscores how much knowledge they were able to gain from rather limited sets of data.
### Rutherford’s idea of an electron in 1921
So here is the translation of some crucial text. Needless to say, the italics, boldface and additions between [brackets] are not Rutherford’s but mine, of course.
“We may think the same laws should apply in regard to the scattering [“diffusion”] of α and β particles. [Note: Rutherford noted, earlier in his paper, that, based on the scattering patterns and other evidence, the force around the nucleus must respect the inverse square law near the nucleus—moreover, it must also do so very near to it.] However, we see marked differences. Anyone who has carefully studied the trajectories [photographs from the Wilson cloud chamber] of beta-particles will note the trajectories show a regular curvature. Such curved trajectories are even more obvious when they are illuminated by X-rays. Indeed, A.H. Compton noted that these trajectories seem to end in a converging helical path turning right or left. To explain this, Compton assumes the electron acts like a magnetic dipole whose axis is more or less fixed, and that the curvature of its path is caused by the magnetic field [from the (paramagnetic) materials that are used].
Further examination would be needed to make sure this curvature is not some coincidence, but the general impression is that the hypothesis may be quite right. We also see similar curvature and helicity with α particles in the last millimeters of their trajectories. [Note: α-particles are, obviously, also charged particles but we think Rutherford’s remark in regard to α particles also following a curved or helical path must be exaggerated: the order of magnitude of the magnetic moment of protons and neutrons is much smaller and, in any case, they tend to cancel each other out. Also, because of the rather enormous mass of α particles (read: helium nuclei) as compared to electrons, the effect would probably not be visible in a Wilson cloud chamber.]
The idea that an electron has magnetic properties is still sketchy and we would need new and more conclusive experiments before accepting it as a scientific fact. However, it would surely be natural to assume its magnetic properties would result from a rotation of the electron. Parson’s ring electron model [“électron annulaire“] was specifically imagined to incorporate such magnetic polarity [“polarité magnétique“].
A very interesting question here would be to wonder whether such rotation would be some intrinsic property of the electron or if it would just result from the rotation of the electron in its atomic orbital around the nucleus. Indeed, James Jeans usefully reminded me that any asymmetry in an electron should result in it rotating around its own axis with the same frequency as its orbital rotation. [Note: The reader can easily imagine this: think of an asymmetric object going around in a circle and returning to its original position. In order to return to the same orientation, it must rotate around its own axis one time too!]
We should also wonder if an electron might acquire some rotational motion from being accelerated in an electric field and if such rotation, once acquired, would persist when decelerating in an(other) electric field or when passing through matter. If so, some of the properties of electrons would, to some extent, depend on their past.”
Each and every sentence in these very brief remarks is wonderfully consistent with modern-day modeling of electron behavior. We should add, of course, that this is non-mainstream modeling, but the addition is superfluous because mainstream physicists stubbornly continue to pretend electrons have no internal structure, nor any physical dimension. In light of the numerous experimental measurements of the effective charge radius as well as of the dimensions of the physical space in which photons effectively interfere with electrons, such mainstream assumptions seem completely ridiculous. However, such is the sad state of physics today.
### Thinking backward and forward
We think that it is pretty obvious that Rutherford and others would have been able to adapt their model of an atom to better incorporate the magnetic properties not only of electrons but also of the nucleus and its constituents (protons and neutrons). Unfortunately, scientists at the time seem to have been swept away by the charisma of Bohr, Heisenberg and others, as well as by the mathematical brilliance of the likes of Sommerfeld, Dirac, and Pauli.
The road that was taken then has not led us very far. We concur with Oliver Consa’s scathing but essentially correct appraisal of the current sorry state of physics:
“QED should be the quantized version of Maxwell’s laws, but it is not that at all. QED is a simple addition to quantum mechanics that attempts to justify two experimental discrepancies in the Dirac equation: the Lamb shift and the anomalous magnetic moment of the electron. The reality is that QED is a bunch of fudge factors, numerology, ignored infinities, hocus-pocus, manipulated calculations, illegitimate mathematics, incomprehensible theories, hidden data, biased experiments, miscalculations, suspicious coincidences, lies, arbitrary substitutions of infinite values and budgets of 600 million dollars to continue the game. Maybe it is time to consider alternative proposals. Winter is coming.”
I would suggest we just go back to where we went wrong: it may be warmer there, and thinking backward as well as forward must, in any case, be a much more powerful problem-solving technique than relying only on expert guesses as to what linear differential equation(s) might give us some S-matrix linking all likely or possible initial and final states of some system or process. 🙂
Post scriptum: The sad state of physics is, of course, not limited to quantum electrodynamics only. We were briefly in touch with the PRad experimenters who put an end to the rather ridiculous ‘proton radius puzzle’ by re-confirming the previously established 0.83–0.84 fm range for the effective charge radius of a proton: we sent them our own classical back-of-the-envelope calculation of the Compton scattering radius of a proton based on the ring current model (see pp. 15–16 of our paper on classical physics), which is in agreement with these measurements, and we courteously asked what alternative theories they were suggesting. Their spokesman replied equally courteously:
“There is no any theoretical prediction in QCD. Lattice [theorists] are trying to come up [with something] but that will take another decade before any reasonable number [may come] from them.”
This e-mail exchange goes back to early February 2020. There has been no news since. One wonders if there is actually any real interest in solving puzzles. The physicist who wrote the above may have been nominated for a Nobel Prize in Physics—I surely hope so because, in contrast to some others, he and his team surely deserve one—but I think it is rather incongruous to finally firmly establish the size of a proton while, at the same time, admitting that protons should not have any size according to mainstream theory—and we are talking about the respected QCD sector of the equally respected Standard Model here!
We understand, of course! As Freddie Mercury famously sang: The Show Must Go On.
# Wavefunctions as gravitational waves
This is the paper I always wanted to write. It is there now, and I think it is good – and that’s an understatement. 🙂 It is probably best to download it as a pdf-file from the viXra.org site because this was a rather fast ‘copy and paste’ job from the Word version of the paper, so there may be issues with boldface notation (vector notation), italics and, most importantly, with formulas – which I, sadly, have to ‘snip’ into this WordPress blog, as they don’t have an easy copy function for mathematical formulas.
It’s great stuff. If you have been following my blog – and many of you have – you will want to digest this. 🙂
Abstract: This paper explores the implications of associating the components of the wavefunction with a physical dimension: force per unit mass – which is, of course, the dimension of acceleration (m/s2) and of gravitational fields. The classical electromagnetic field equations for energy densities, the Poynting vector and spin angular momentum are then re-derived by substituting the electromagnetic N/C unit of field strength (force per unit charge) with the new N/kg = m/s2 dimension.
The results are elegant and insightful. For example, the energy densities are proportional to the square of the absolute value of the wavefunction and, hence, to the probabilities, which establishes a physical normalization condition. Also, Schrödinger’s wave equation may then, effectively, be interpreted as a diffusion equation for energy, and the wavefunction itself can be interpreted as a propagating gravitational wave. Finally, as an added bonus, concepts such as the Compton scattering radius for a particle, spin angular momentum, and the boson-fermion dichotomy, can also be explained more intuitively.
While the approach offers a physical interpretation of the wavefunction, the author argues that the core of the Copenhagen interpretation revolves around the complementarity principle, which remains unchallenged because the interpretation of amplitude waves as traveling fields does not explain the particle nature of matter.
# Introduction
This is not another introduction to quantum mechanics. We assume the reader is already familiar with the key principles and, importantly, with the basic math. We offer an interpretation of wave mechanics. As such, we do not challenge the complementarity principle: the physical interpretation of the wavefunction that is offered here explains the wave nature of matter only. It explains diffraction and interference of amplitudes but it does not explain why a particle will hit the detector not as a wave but as a particle. Hence, the Copenhagen interpretation of the wavefunction remains relevant: we just push its boundaries.
The basic ideas in this paper stem from a simple observation: the geometric similarity between quantum-mechanical wavefunctions and electromagnetic waves is remarkable. The components of both waves are orthogonal to the direction of propagation and to each other. Only the relative phase differs: the electric and magnetic field vectors (E and B) have the same phase. In contrast, the phases of the real and imaginary part of the (elementary) wavefunction (ψ = a·e−i∙θ = a∙cosθ − i·a∙sinθ) differ by 90 degrees (π/2).[1] Pursuing the analogy, we explore the following question: if the oscillating electric and magnetic field vectors of an electromagnetic wave carry the energy that one associates with the wave, can we analyze the real and imaginary part of the wavefunction in a similar way?
We show the answer is positive and remarkably straightforward. If the physical dimension of the electromagnetic field is expressed in newton per coulomb (force per unit charge), then the physical dimension of the components of the wavefunction may be associated with force per unit mass (newton per kg).[2] Of course, force over some distance is energy. The question then becomes: what is the energy concept here? Kinetic? Potential? Both?
The similarity between the energy of a (one-dimensional) linear oscillator (E = m·a2·ω2/2) and Einstein’s relativistic energy equation E = m∙c2 inspires us to interpret the energy as a two-dimensional oscillation of mass. To assist the reader, we construct a two-piston engine metaphor.[3] We then adapt the formula for the electromagnetic energy density to calculate the energy densities for the wave function. The results are elegant and intuitive: the energy densities are proportional to the square of the absolute value of the wavefunction and, hence, to the probabilities. Schrödinger’s wave equation may then, effectively, be interpreted as a diffusion equation for energy itself.
As an added bonus, concepts such as the Compton scattering radius for a particle and spin angular momentum, as well as the boson-fermion dichotomy, can be explained in a fully intuitive way.[4]
Of course, such interpretation is also an interpretation of the wavefunction itself, and the immediate reaction of the reader is predictable: the electric and magnetic field vectors are, somehow, to be looked at as real vectors. In contrast, the real and imaginary components of the wavefunction are not. However, this objection needs to be phrased more carefully. First, it may be noted that, in a classical analysis, the magnetic force is a pseudovector itself.[5] Second, a suitable choice of coordinates may make quantum-mechanical rotation matrices irrelevant.[6]
Therefore, the author is of the opinion that this little paper may provide some fresh perspective on the question, thereby further exploring Einstein’s basic sentiment in regard to quantum mechanics, which may be summarized as follows: there must be some physical explanation for the calculated probabilities.[7]
We will, therefore, start with Einstein’s relativistic energy equation (E = mc2) and wonder what it could possibly tell us.
# I. Energy as a two-dimensional oscillation of mass
The structural similarity between the relativistic energy formula, the formula for the total energy of an oscillator, and the kinetic energy of a moving body, is striking:
1. E = mc2
2. E = mω2/2
3. E = mv2/2
In these formulas, ω, v and c all describe some velocity.[8] Of course, there is the 1/2 factor in the E = mω2/2 formula[9], but that is exactly the point we are going to explore here: can we think of an oscillation in two dimensions, so it stores an amount of energy that is equal to E = 2·m·ω2/2 = m·ω2?
That is easy enough. Think, for example, of a V-2 engine with the pistons at a 90-degree angle, as illustrated below. The 90° angle makes it possible to perfectly balance the counterweight and the pistons, thereby ensuring smooth travel at all times. With permanently closed valves, the air inside the cylinder compresses and decompresses as the pistons move up and down and provides, therefore, a restoring force. As such, it will store potential energy, just like a spring, and the motion of the pistons will also reflect that of a mass on a spring. Hence, we can describe it by a sinusoidal function, with the zero point at the center of each cylinder. We can, therefore, think of the moving pistons as harmonic oscillators, just like mechanical springs.
Figure 1: Oscillations in two dimensions
If we assume there is no friction, we have a perpetuum mobile here. The compressed air and the rotating counterweight (which, combined with the crankshaft, acts as a flywheel[10]) store the potential energy. The moving masses of the pistons store the kinetic energy of the system.[11]
At this point, it is probably good to quickly review the relevant math. If the magnitude of the oscillation is equal to a, then the motion of the piston (or the mass on a spring) will be described by x = a·cos(ω·t + Δ).[12] Needless to say, Δ is just a phase factor which defines our t = 0 point, and ω is the natural angular frequency of our oscillator. Because of the 90° angle between the two cylinders, Δ would be 0 for one oscillator, and –π/2 for the other. Hence, the motion of one piston is given by x = a·cos(ω·t), while the motion of the other is given by x = a·cos(ω·t–π/2) = a·sin(ω·t).
The kinetic and potential energy of one oscillator (think of one piston or one spring only) can then be calculated as:
1. K.E. = T = m·v2/2 = (1/2)·m·ω2·a2·sin2(ω·t + Δ)
2. P.E. = U = k·x2/2 = (1/2)·k·a2·cos2(ω·t + Δ)
The coefficient k in the potential energy formula characterizes the restoring force: F = −k·x. From the dynamics involved, it is obvious that k must be equal to m·ω2. Hence, the total energy is equal to:
E = T + U = (1/2)· m·ω2·a2·[sin2(ω·t + Δ) + cos2(ω·t + Δ)] = m·a2·ω2/2
To facilitate the calculations, we will briefly assume k = m·ω2 and a are equal to 1. The motion of our first oscillator is given by the cos(ω·t) = cosθ function (θ = ω·t), and its kinetic energy will be equal to sin2θ. Hence, the (instantaneous) change in kinetic energy at any point in time will be equal to:
d(sin2θ)/dθ = 2∙sinθ∙d(sinθ)/dθ = 2∙sinθ∙cosθ
Let us look at the second oscillator now. Just think of the second piston going up and down in the V-2 engine. Its motion is given by the sinθ function, which is equal to cos(θ−π/2). Hence, its kinetic energy is equal to sin2(θ−π/2), and how it changes – as a function of θ – will be equal to:

2∙sin(θ−π/2)∙cos(θ−π/2) = −2∙cosθ∙sinθ = −2∙sinθ∙cosθ
We have our perpetuum mobile! While transferring kinetic energy from one piston to the other, the crankshaft will rotate with a constant angular velocity: linear motion becomes circular motion, and vice versa, and the total energy that is stored in the system is T + U = ma2ω2.
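We can quickly verify this energy bookkeeping numerically. The sketch below follows the normalization used above (m = a = ω = 1, with the 1/2 factor absorbed as in the text):

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 1000)

ke1 = np.sin(theta)**2              # kinetic energy of the first piston (m = a = omega = 1)
ke2 = np.sin(theta - np.pi / 2)**2  # kinetic energy of the second piston (90 degrees out of phase)

# What one piston loses, the other gains: the sum is constant at all times
print(np.allclose(ke1 + ke2, 1.0))  # True
```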
We have a great metaphor here. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returns to the other, cycle after cycle. We know the wavefunction consist of a sine and a cosine: the cosine is the real component, and the sine is the imaginary component. Could they be equally real? Could each represent half of the total energy of our particle? Should we think of the c in our E = mc2 formula as an angular velocity?
These are sensible questions. Let us explore them.
# II. The wavefunction as a two-dimensional oscillation
The elementary wavefunction is written as:
ψ = a·e−i[E·t − p∙x]/ħ = a·ei[p∙x − E·t]/ħ = a·cos(p∙x/ħ − E∙t/ħ) + i·a·sin(p∙x/ħ − E∙t/ħ)
When considering a particle at rest (p = 0) this reduces to:
ψ = a·e−i∙E·t/ħ = a·cos(−E∙t/ħ) + i·a·sin(−E∙t/ħ) = a·cos(E∙t/ħ) − i·a·sin(E∙t/ħ)
Let us remind ourselves of the geometry involved, which is illustrated below. Note that the argument of the wavefunction rotates clockwise with time, while the mathematical convention for measuring the phase angle (ϕ) is counter-clockwise.
Figure 2: Euler’s formula
If we assume the momentum p is all in the x-direction, then the p and x vectors will have the same direction, and px/ħ reduces to p∙x/ħ. Most illustrations – such as the one below – will either freeze x or, else, t. Alternatively, one can google web animations varying both. The point is: we also have a two-dimensional oscillation here. These two dimensions are perpendicular to the direction of propagation of the wavefunction. For example, if the wavefunction propagates in the x-direction, then the oscillations are along the y- and z-axis, which we may refer to as the real and imaginary axis. Note how the phase difference between the cosine and the sine – the real and imaginary part of our wavefunction – appears to give some spin to the whole. I will come back to this.
Figure 3: Geometric representation of the wavefunction
Hence, if we would say these oscillations carry half of the total energy of the particle, then we may refer to the real and imaginary energy of the particle respectively, and the interplay between the real and the imaginary part of the wavefunction may then describe how energy propagates through space over time.
Let us consider, once again, a particle at rest. Hence, p = 0 and the (elementary) wavefunction reduces to ψ = a·e−i∙E·t/ħ. Hence, the angular velocity of both oscillations, at some point x, is given by ω = −E/ħ. Now, the energy of our particle includes all of the energy – kinetic, potential and rest energy – and is, therefore, equal to E = mc2.
Can we, somehow, relate this to the m·a2·ω2 energy formula for our V-2 perpetuum mobile? Our wavefunction has an amplitude too. Now, if the oscillations of the real and imaginary wavefunction store the energy of our particle, then their amplitude will surely matter. In fact, the energy of an oscillation is, in general, proportional to the square of the amplitude: E ∝ a2. We may, therefore, think that the a2 factor in the E = m·a2·ω2 energy formula will surely be relevant as well.
However, here is a complication: an actual particle is localized in space and can, therefore, not be represented by the elementary wavefunction. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai, and their own ωi = −Ei/ħ. Each of these wavefunctions will contribute some energy to the total energy of the wave packet. To calculate the contribution of each wave to the total, both ai as well as Ei will matter.
What is Ei? Ei varies around some average E, which we can associate with some average mass m: m = E/c2. The Uncertainty Principle kicks in here. The analysis becomes more complicated, but a formula such as the one below might make sense:

[Formula image missing from this copy: see the PDF version of the paper on viXra.org.]

We can re-write this as:

[Formula image missing from this copy.]

What is the meaning of this equation? We may look at it as some sort of physical normalization condition when building up the Fourier sum. Of course, we should relate this to the mathematical normalization condition for the wavefunction. Our intuition tells us that the probabilities must be related to the energy densities, but how exactly? We will come back to this question in a moment. Let us first think some more about the enigma: what is mass?
Before we do so, let us quickly calculate the value of c2ħ2: it is about 1×10−51 N2∙m4. Let us also do a dimensional analysis: the physical dimensions of the E = m·a2·ω2 equation make sense if we express m in kg, a in m, and ω in rad/s. We then get: [E] = kg∙m2/s2 = (N∙s2/m)∙m2/s2 = N∙m = J. The dimensions of the left- and right-hand side of the physical normalization condition are N3∙m5.
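The c2ħ2 value is easily double-checked with a couple of lines of Python (using the usual CODATA values):

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s = N*m*s
c = 299792458.0          # speed of light, m/s

# J^2*s^2 times m^2/s^2 = N^2*m^4
print(c**2 * hbar**2)    # ~1.0e-51 N^2*m^4
```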
# III. What is mass?
We came up, playfully, with a meaningful interpretation for energy: it is a two-dimensional oscillation of mass. But what is mass? A new aether theory is, of course, not an option, but then what is it that is oscillating? To understand the physics behind equations, it is always good to do an analysis of the physical dimensions in the equation. Let us start with Einstein’s energy equation once again. If we want to look at mass, we should re-write it as m = E/c2:
[m] = [E/c2] = J/(m/s)2 = N·m∙s2/m2 = N·s2/m = kg
This is not very helpful. It only reminds us of Newton’s definition of a mass: mass is that what gets accelerated by a force. At this point, we may want to think of the physical significance of the absolute nature of the speed of light. Einstein’s E = mc2 equation implies the ratio between the energy and the mass of any particle is always the same, so we can write, for example:

E/m = c2

This reminds us of the ω2 = 1/(L·C) or ω2 = k/m formulas for harmonic oscillators once again.[13] The key difference is that the ω2 = 1/(L·C) and ω2 = k/m formulas introduce two or more degrees of freedom.[14] In contrast, c2 = E/m is the same for any particle, always. However, that is exactly the point: we can modulate the resistance, inductance and capacitance of electric circuits, and the stiffness of springs and the masses we put on them, but we live in one physical space only: our spacetime. Hence, the speed of light c emerges here as the defining property of spacetime – the resonant frequency, so to speak. We have no further degrees of freedom here.
The Planck-Einstein relation (for photons) and the de Broglie equation (for matter-particles) have an interesting feature: both imply that the energy of the oscillation is proportional to the frequency, with Planck’s constant as the constant of proportionality. Now, for one-dimensional oscillations – think of a guitar string, for example – we know the energy will be proportional to the square of the frequency. It is a remarkable observation: the two-dimensional matter-wave, or the electromagnetic wave, gives us two waves for the price of one, so to speak, each carrying half of the total energy of the oscillation but, as a result, we get a proportionality between E and f instead of between E and f2.
However, such reflections do not answer the fundamental question we started out with: what is mass? At this point, it is hard to go beyond the circular definition that is implied by Einstein’s formula: energy is a two-dimensional oscillation of mass, and mass packs energy, and c emerges as the property of spacetime that defines how exactly.
When everything is said and done, this does not go beyond stating that mass is some scalar field. Now, a scalar field is, quite simply, some real number that we associate with a position in spacetime. The Higgs field is a scalar field but, of course, the theory behind it goes much beyond stating that we should think of mass as some scalar field. The fundamental question is: why and how does energy, or matter, condense into elementary particles? That is what the Higgs mechanism is about but, as this paper is exploratory only, we cannot even start explaining the basics of it.
What we can do, however, is look at the wave equation again (Schrödinger’s equation), as we can now analyze it as an energy diffusion equation.
# IV. Schrödinger’s equation as an energy diffusion equation
The interpretation of Schrödinger’s equation as a diffusion equation is straightforward. Feynman (Lectures, III-16-1) briefly summarizes it as follows:
“We can think of Schrödinger’s equation as describing the diffusion of the probability amplitude from one point to the next. […] But the imaginary coefficient in front of the derivative makes the behavior completely different from the ordinary diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real exponential solutions, whereas the solutions of Schrödinger’s equation are complex waves.”[17]
Let us review the basic math. For a particle moving in free space – with no external force fields acting on it – there is no potential (U = 0) and the Uψ term disappears. Schrödinger’s equation then reduces to:
∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇2ψ(x, t)
The ubiquitous diffusion equation in physics is:
∂φ(x, t)/∂t = D·∇2φ(x, t)
The structural similarity is obvious. The key difference between both equations is that the wave equation gives us two equations for the price of one. Indeed, because ψ is a complex-valued function, with a real and an imaginary part, we get the following equations[18]:
1. Re(∂ψ/∂t) = −(1/2)·(ħ/meff)·Im(∇2ψ)
2. Im(∂ψ/∂t) = (1/2)·(ħ/meff)·Re(∇2ψ)
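The coupling between the real and imaginary part is easily verified numerically. The toy check below (natural units, ħ = meff = 1) plugs an elementary wavefunction – with ω and k respecting the dispersion relation we will derive in section VI – into both equations:

```python
import numpy as np

hbar, m_eff = 1.0, 1.0            # natural units, just for the check
k = 2.0
w = hbar * k**2 / (2 * m_eff)     # dispersion relation (derived in section VI)

x = np.linspace(0.0, 10.0, 2001)
t = 0.7
re = np.cos(k * x - w * t)        # real part of the elementary wavefunction
im = np.sin(k * x - w * t)        # imaginary part

dre_dt = w * np.sin(k * x - w * t)            # analytical time derivatives
dim_dt = -w * np.cos(k * x - w * t)
lap_re = np.gradient(np.gradient(re, x), x)   # numerical Laplacians
lap_im = np.gradient(np.gradient(im, x), x)

s = slice(2, -2)  # stay away from the one-sided differences at the grid edges
print(np.allclose(dre_dt[s], -(hbar / (2 * m_eff)) * lap_im[s], atol=1e-3))  # True
print(np.allclose(dim_dt[s],  (hbar / (2 * m_eff)) * lap_re[s], atol=1e-3))  # True
```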
These equations make us think of the equations for an electromagnetic wave in free space (no stationary charges or currents):
1. ∂B/∂t = −∇×E
2. ∂E/∂t = c2·∇×B
The above equations effectively describe a propagation mechanism in spacetime, as illustrated below.
Figure 4: Propagation mechanisms
The Laplacian operator (∇2), when operating on a scalar quantity, gives us a flux density, i.e. something expressed per square meter (1/m2). In this case, it is operating on ψ(x, t), so what is the dimension of our wavefunction ψ(x, t)? To answer that question, we should analyze the diffusion constant in Schrödinger’s equation, i.e. the (1/2)·(ħ/meff) factor:
1. As a mathematical constant of proportionality, it will quantify the relationship between both derivatives (i.e. the time derivative and the Laplacian);
2. As a physical constant, it will ensure the physical dimensions on both sides of the equation are compatible.
Now, the ħ/meff factor is expressed in (N·m·s)/(N·s2/m) = m2/s. Hence, it does ensure the dimensions on both sides of the equation are, effectively, the same: ∂ψ/∂t is a time derivative and, therefore, its dimension is s−1 while, as mentioned above, the dimension of ∇2ψ is m−2. However, this does not solve our basic question: what is the dimension of the real and imaginary part of our wavefunction?
At this point, mainstream physicists will say: it does not have a physical dimension, and there is no geometric interpretation of Schrödinger’s equation. One may argue, effectively, that its argument, (px – E∙t)/ħ, is just a number and, therefore, that the real and imaginary part of ψ is also just some number.
To this, we may object that ħ may be looked at as a mathematical scaling constant only. If we do that, then the argument of ψ will, effectively, be expressed in action units, i.e. in N·m·s. It then does make sense to also associate a physical dimension with the real and imaginary part of ψ. What could it be?
We may have a closer look at Maxwell’s equations for inspiration here. The electric field vector is expressed in newton (the unit of force) per unit of charge (coulomb). Now, there is something interesting here. The physical dimension of the magnetic field is N/C divided by m/s.[19] We may write B as the following vector cross-product: B = (1/c)∙ex×E, with ex the unit vector pointing in the x-direction (i.e. the direction of propagation of the wave). Hence, we may associate the (1/c)∙ex× operator, which amounts to a rotation by 90 degrees, with the s/m dimension. Now, multiplication by i also amounts to a rotation by 90 degrees. Hence, we may boldly write: B = (1/c)∙ex×E = (1/c)∙iE. This allows us to also geometrically interpret Schrödinger’s equation in the way we interpreted it above (see Figure 3).[20]
Still, we have not answered the question as to what the physical dimension of the real and imaginary part of our wavefunction should be. At this point, we may be inspired by the structural similarity between Newton’s and Coulomb’s force laws:

F = G·m1·m2/r2 (Newton) versus F = (1/4πε0)·q1·q2/r2 (Coulomb)

Hence, if the electric field vector E is expressed in force per unit charge (N/C), then we may want to think of associating the real part of our wavefunction with a force per unit mass (N/kg). We can, of course, do a substitution here, because the mass unit (1 kg) is equivalent to 1 N·s2/m. Hence, our N/kg dimension becomes:
N/kg = N/(N·s2/m)= m/s2
What is this: m/s2? Is that the dimension of the a·cosθ term in the a·e−iθ = a·cosθ − i·a·sinθ wavefunction?
My answer is: why not? Think of it: m/s2 is the physical dimension of acceleration: the increase or decrease in velocity (m/s) per second. It ensures the wavefunction for any particle – matter-particles or particles with zero rest mass (photons) – and the associated wave equation (which has to be the same for all, as the spacetime we live in is one) are mutually consistent.
In this regard, we should think of how we would model a gravitational wave. The physical dimension would surely be the same: force per mass unit. It all makes sense: wavefunctions may, perhaps, be interpreted as traveling distortions of spacetime, i.e. as tiny gravitational waves.
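As an aside, this kind of unit bookkeeping can be delegated to a computer algebra system. A minimal check with sympy’s units module (an illustration only, not part of the paper itself):

```python
from sympy.physics.units import newton, kilogram, meter, second, convert_to

# N/kg reduces to m/s^2: the dimension of acceleration and of gravitational field strength
print(convert_to(newton / kilogram, [meter, second]))  # meter/second**2
```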
# V. Energy densities and flows
Pursuing the geometric equivalence between the equations for an electromagnetic wave and Schrödinger’s equation, we can now, perhaps, see if there is an equivalent for the energy density. For an electromagnetic wave, we know that the energy density is given by the following formula:

u = (ε0/2)·(E·E + c2·B·B)

E and B are the electric and magnetic field vector respectively. The Poynting vector will give us the directional energy flux, i.e. the energy flow per unit area per unit time. We write:

S = ε0·c2·E×B, with local energy conservation implying ∂u/∂t = −∇∙S

Needless to say, the ∇∙ operator is the divergence and, therefore, gives us the magnitude of a (vector) field’s source or sink at a given point. To be precise, the divergence gives us the volume density of the outward flux of a vector field from an infinitesimal volume around a given point. In this case, it gives us the volume density of the flux of S.
We can analyze the dimensions of the equation for the energy density as follows:
1. E is measured in newton per coulomb, so [EE] = [E2] = N2/C2.
2. B is measured in (N/C)/(m/s), so we get [BB] = [B2] = (N2/C2)·(s2/m2). However, the dimension of our c2 factor is (m2/s2) and so we’re also left with N2/C2.
3. The ϵ0 factor is the electric constant, aka the vacuum permittivity. As a physical constant, it should ensure the dimensions on both sides of the equation work out, and they do: [ε0] = C2/(N·m2) and, therefore, if we multiply that with N2/C2, we find that u is expressed in N/m2 = J/m3.[21]
Replacing the newton per coulomb unit (N/C) by the newton per kg unit (N/kg) in the formulas above should give us the equivalent of the energy density for the wavefunction. We just need to substitute ϵ0 for an equivalent constant. We may want to give it a try. If the energy densities can be calculated – and they are also mass densities, obviously – then the probabilities should be proportional to them.
Let us first see what we get for a photon, assuming the electromagnetic wave represents its wavefunction. Substituting B for (1/c)∙iE or for −(1/c)∙iE gives us the following result:

u = (ε0/2)·(E·E + c2·(1/c2)·(iE)·(iE)) = (ε0/2)·(E·E − E·E) = 0

Zero!? An unexpected result! Or not? We have no stationary charges and no currents: only an electromagnetic wave in free space. Hence, the local energy conservation principle needs to be respected at all points in space and in time. The geometry makes sense of the result: for an electromagnetic wave, the magnitudes of E and B reach their maximum, minimum and zero point simultaneously, as shown below.[22] This is because their phase is the same.
Figure 5: Electromagnetic wave: E and B
Should we expect a similar result for the energy densities that we would associate with the real and imaginary part of the matter-wave? For the matter-wave, we have a phase difference between a·cosθ and a·sinθ, which gives a different picture of the propagation of the wave (see Figure 3).[23] In fact, the geometry of the situation suggests some inherent spin, which is interesting. I will come back to this. Let us first guess those densities. Making abstraction of any scaling constants, we may write:

u = (a·cosθ)2 + (a·sinθ)2 = a2

We get what we hoped to get: the absolute square of our amplitude is, effectively, an energy density!
|ψ|2 = |a·e−i∙E·t/ħ|2 = a2 = u
This is very deep. A photon has no rest mass, so it borrows and returns energy from empty space as it travels through it. In contrast, a matter-wave carries energy and, therefore, has some (rest) mass. It is therefore associated with an energy density, and this energy density gives us the probabilities. Of course, we need to fine-tune the analysis to account for the fact that we have a wave packet rather than a single wave, but that should be feasible.
As mentioned, the phase difference between the real and imaginary part of our wavefunction (a cosine and a sine function) appear to give some spin to our particle. We do not have this particularity for a photon. Of course, photons are bosons, i.e. spin-zero particles, while elementary matter-particles are fermions with spin-1/2. Hence, our geometric interpretation of the wavefunction suggests that, after all, there may be some more intuitive explanation of the fundamental dichotomy between bosons and fermions, which puzzled even Feynman:
“Why is it that particles with half-integral spin are Fermi particles, whereas particles with integral spin are Bose particles? We apologize for the fact that we cannot give you an elementary explanation. An explanation has been worked out by Pauli from complicated arguments of quantum field theory and relativity. He has shown that the two must necessarily go together, but we have not been able to find a way of reproducing his arguments on an elementary level. It appears to be one of the few places in physics where there is a rule which can be stated very simply, but for which no one has found a simple and easy explanation. The explanation is deep down in relativistic quantum mechanics. This probably means that we do not have a complete understanding of the fundamental principle involved.” (Feynman, Lectures, III-4-1)
The physical interpretation of the wavefunction, as presented here, may provide some better understanding of ‘the fundamental principle involved’: the physical dimension of the oscillation is just very different. That is all: it is force per unit charge for photons, and force per unit mass for matter-particles. We will examine the question of spin somewhat more carefully in section VII. Let us first examine the matter-wave some more.
# VI. Group and phase velocity of the matter-wave
The geometric representation of the matter-wave (see Figure 3) suggests a traveling wave and, yes, of course: the matter-wave effectively travels through space and time. But what is traveling, exactly? It is the pulse – or the signal – only: the phase velocity of the wave is just a mathematical concept and, even in our physical interpretation of the wavefunction, the same is true for the group velocity of our wave packet. The oscillation is two-dimensional, but perpendicular to the direction of travel of the wave. Hence, nothing actually moves with our particle.
Here, we should also reiterate that we did not answer the question as to what is oscillating up and down and/or sideways: we only associated a physical dimension with the components of the wavefunction – newton per kg (force per unit mass), to be precise. We were inspired to do so because of the physical dimension of the electric and magnetic field vectors (newton per coulomb, i.e. force per unit charge) we associate with electromagnetic waves which, for all practical purposes, we currently treat as the wavefunction for a photon. This made it possible to calculate the associated energy densities and a Poynting vector for the energy flow. In addition, we showed that Schrödinger’s equation itself then becomes a diffusion equation for energy. However, let us now focus some more on the asymmetry which is introduced by the phase difference between the real and the imaginary part of the wavefunction. Look at the mathematical shape of the elementary wavefunction once again:
ψ = a·e−i[E·t − p∙x]/ħ = a·ei[p∙x − E·t]/ħ = a·cos(p∙x/ħ − E∙t/ħ) + i·a·sin(p∙x/ħ − E∙t/ħ)
The minus sign in the argument of our sine and cosine function defines the direction of travel: an F(x−v∙t) wavefunction will always describe some wave that is traveling in the positive x-direction (with the wave velocity), while an F(x+v∙t) wavefunction will travel in the negative x-direction. For a geometric interpretation of the wavefunction in three dimensions, we need to agree on how to define i or, what amounts to the same, a convention on how to define clockwise and counterclockwise directions: if we look at a clock from the back, then its hand will be moving counterclockwise. So we need to establish the equivalent of the right-hand rule. However, let us not worry about that now. Let us focus on the interpretation. To ease the analysis, we’ll assume we’re looking at a particle at rest. Hence, p = 0, and the wavefunction reduces to:
ψ = a·e−i∙E0·t/ħ = a·cos(−E0∙t/ħ) + i·a·sin(−E0∙t/ħ) = a·cos(E0∙t/ħ) − i·a·sin(E0∙t/ħ)
E0 is, of course, the rest energy of our particle and, now that we are here, we should probably wonder whose time we are talking about: is it our time, or is it the proper time of our particle? Well… In this situation, we are both at rest so it does not matter: t is, effectively, the proper time so perhaps we should write it as t0. It does not matter. You can see what we expect to see: E0/ħ pops up as the natural frequency of our matter-particle: (E0/ħ)∙t = ω∙t. Remembering the ω = 2π·f = 2π/T and T = 1/f formulas, we can associate a period and a frequency with this wave. Noting that ħ = h/2π, we find the following:
T = 2π·(ħ/E0) = h/E0 ⇔ f = E0/h = m0·c2/h
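For an electron, these are easy to calculate (the numbers below use the usual CODATA values):

```python
h = 6.62607015e-34   # Planck constant, J*s
E0 = 8.1871e-14      # electron rest energy (~0.511 MeV), J

f = E0 / h           # natural frequency: ~1.24e20 cycles per second
T = 1.0 / f          # natural unit of time: ~8.1e-21 s
print(f"f = {f:.4g} Hz, T = {T:.4g} s")
```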
This is interesting, because we can look at the period as a natural unit of time for our particle. What about the wavelength? That is tricky because we need to distinguish between group and phase velocity here. The group velocity (vg) should be zero here, because we assume our particle does not move. In contrast, the phase velocity is given by vp = λ·f = (2π/k)·(ω/2π) = ω/k. In fact, we’ve got something funny here: the wavenumber k = p/ħ is zero, because we assume the particle is at rest, so p = 0. So we have a division by zero here, which is rather strange. What do we get assuming the particle is not at rest? We write:
vp = ω/k = (E/ħ)/(p/ħ) = E/p = E/(m·vg) = (m·c2)/(m·vg) = c2/vg
This is interesting: it establishes a reciprocal relation between the phase and the group velocity, with c2 as a simple scaling constant. Indeed, the graph below shows the shape of the function does not change with the value of c, and we may also re-write the relation above as:
vp/c = βp = c/vg = 1/βg = 1/(vg/c)
Figure 6: Reciprocal relation between phase and group velocity
We can also write the mentioned relationship as vp·vg = c2, which reminds us of the relationship between the electric and magnetic constant (1/ε0)·(1/μ0) = c2. This is interesting in light of the fact we can re-write this as (c·ε0)·(c·μ0) = 1, which shows electricity and magnetism are just two sides of the same coin, so to speak.[24]
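To fix ideas, here is a minimal sketch of the reciprocal relation (the βg values are, of course, arbitrary):

```python
c = 299792458.0  # speed of light, m/s

def v_phase(v_group: float) -> float:
    """Reciprocal relation between phase and group velocity: v_p * v_g = c**2."""
    return c**2 / v_group

for beta_g in (0.001, 0.5, 0.999):
    print(f"v_g = {beta_g} c  ->  v_p = {v_phase(beta_g * c) / c:.3f} c")
```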
Interesting, but how do we interpret the math? What about the implications of the zero value for the wavenumber k = p/ħ? We would probably like to think it implies the elementary wavefunction should always be associated with some momentum, because the concept of zero momentum clearly leads to weird math: something times zero cannot be equal to c2! Such an interpretation is also consistent with the Uncertainty Principle: if Δx·Δp ≥ ħ, then neither Δx nor Δp can be zero. In other words, the Uncertainty Principle tells us that the idea of a pointlike particle actually being at some specific point in time and in space does not make sense: it has to move. It tells us that our concepts of dimensionless points in time and space are mathematical notions only. Actual particles – including photons – are always a bit spread out, so to speak, and – importantly – they have to move.
For a photon, this is self-evident. It has no rest mass, no rest energy, and, therefore, it is going to move at the speed of light itself. We write: p = m·c = m·c2/c = E/c. Using the relationship above, we get:
vp = ω/k = (E/ħ)/(p/ħ) = E/p = c ⇒ vg = c2/vp = c2/c = c
This is good: we started out with some reflections on the matter-wave, but here we get an interpretation of the electromagnetic wave as a wavefunction for the photon. But let us get back to our matter-wave. In regard to our interpretation of a particle having to move, we should remind ourselves, once again, of the fact that an actual particle is always localized in space and that it can, therefore, not be represented by the elementary wavefunction ψ = a·e−i[E·t − p∙x]/ħ or, for a particle at rest, the ψ = a·e−i∙E·t/ħ function. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai, and their own ωi = −Ei/ħ. Indeed, in section II, we showed that each of these wavefunctions will contribute some energy to the total energy of the wave packet and that, to calculate the contribution of each wave to the total, both ai as well as Ei matter. This may or may not resolve the apparent paradox. Let us look at the group velocity.
To calculate a meaningful group velocity, we must assume the derivative vg = ∂ωi/∂ki = ∂(Ei/ħ)/∂(pi/ħ) = ∂(Ei)/∂(pi) exists. So we must have some dispersion relation. How do we calculate it? We need to calculate ωi as a function of ki here, or Ei as a function of pi. How do we do that? Well… There are a few ways to go about it but one interesting way of doing it is to re-write Schrödinger’s equation as we did, i.e. by distinguishing the real and imaginary parts of the ∂ψ/∂t = i·[ħ/(2m)]·∇2ψ wave equation and, hence, re-write it as the following pair of equations:
1. Re(∂ψ/∂t) = −[ħ/(2meff)]·Im(∇2ψ) ⇔ ω·sin(kx − ωt) = k2·[ħ/(2meff)]·sin(kx − ωt)
2. Im(∂ψ/∂t) = [ħ/(2meff)]·Re(∇2ψ) ⇔ ω·cos(kx − ωt) = k2·[ħ/(2meff)]·cos(kx − ωt)
Both equations imply the following dispersion relation:
ω = ħ·k2/(2meff)
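We can let sympy do the bookkeeping here. The sketch below verifies that an elementary wavefunction respecting this dispersion relation solves the free-space equation, and it also gives us the group velocity as the ∂ω/∂k derivative:

```python
import sympy as sp

k, x, t = sp.symbols('k x t', real=True)
hbar, m_eff = sp.symbols('hbar m_eff', positive=True)

omega = hbar * k**2 / (2 * m_eff)          # the dispersion relation above
psi = sp.exp(-sp.I * (omega * t - k * x))  # elementary wavefunction (amplitude a = 1)

# Residual of the free-space Schroedinger equation: should vanish identically
residual = sp.diff(psi, t) - sp.I * (hbar / (2 * m_eff)) * sp.diff(psi, x, 2)
print(sp.simplify(residual))               # 0

print(sp.diff(omega, k))                   # group velocity: hbar*k/m_eff = p/m_eff
```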
Of course, we need to think about the subscripts now: we have ωi, ki, but… What about meff or, dropping the subscript, m? Do we write it as mi? If so, what is it? Well… It is the equivalent mass of Ei obviously, and so we get it from the mass-energy equivalence relation: mi = Ei/c2. It is a fine point, but one most people forget about: they usually just write m. However, if there is uncertainty in the energy, then Einstein’s mass-energy relation tells us we must have some uncertainty in the (equivalent) mass too. Here, I should refer back to Section II: Ei varies around some average energy E and, therefore, the Uncertainty Principle kicks in.
# VII. Explaining spin
The elementary wavefunction vector – i.e. the vector sum of the real and imaginary component – rotates around the x-axis, which gives us the direction of propagation of the wave (see Figure 3). Its magnitude remains constant. In contrast, the magnitude of the electromagnetic vector – defined as the vector sum of the electric and magnetic field vectors – oscillates between zero and some maximum (see Figure 5).
We already mentioned that the rotation of the wavefunction vector appears to give some spin to the particle. Of course, a circularly polarized wave would also appear to have spin (think of the E and B vectors rotating around the direction of propagation – as opposed to oscillating up and down or sideways only). In fact, circularly polarized light does carry angular momentum, as the equivalent mass of its energy may be thought of as rotating as well. But so here we are looking at a matter-wave.
The basic idea is the following: if we look at ψ = a·e−i∙E·t/ħ as some real vector – as a two-dimensional oscillation of mass, to be precise – then we may associate its rotation around the direction of propagation with some torque. The illustration below reminds us of the math here.
Figure 7: Torque and angular momentum vectors
A torque on some mass about a fixed axis gives it angular momentum, which we can write as the vector cross-product L = r×p or, perhaps easier for our purposes here, as the product of an angular velocity (ω) and the rotational inertia (I), aka the moment of inertia or the angular mass. We write:
L = I·ω
Note we can write L and ω in boldface here because they are (axial) vectors. If we consider their magnitudes only, we write L = I·ω (no boldface). We can now do some calculations. Let us start with the angular velocity. In our previous posts, we showed that the period of the matter-wave is equal to T = 2π·(ħ/E0). Hence, the angular velocity must be equal to:
ω = 2π/T = 2π/[2π·(ħ/E0)] = E0/ħ
We also know the distance r in the L = r×p vector cross-product: its magnitude is just a, i.e. the amplitude of ψ = a·e−i∙E·t/ħ. Now, the momentum (p) is the product of a linear velocity (v) – in this case, the tangential velocity – and some mass (m): p = m·v. If we switch to scalar instead of vector quantities, then the (tangential) velocity is given by v = r·ω. So now we only need to think about what we should use for m or, if we want to work with the angular velocity (ω), the angular mass (I). Here we need to make some assumption about the mass (or energy) distribution. Now, it may or may not make sense to assume the energy in the oscillation – and, therefore, the mass – is distributed uniformly. In that case, we may use the formula for the angular mass of a solid cylinder: I = m·r2/2. If we keep the analysis non-relativistic, then m = m0. Of course, the energy-mass equivalence tells us that m0 = E0/c2. Hence, this is what we get:
L = I·ω = (m0·r2/2)·(E0/ħ) = (1/2)·a2·(E0/c2)·(E0/ħ) = a2·E02/(2·ħ·c2)
Does it make sense? Maybe. Maybe not. Let us do a dimensional analysis: that won’t check our logic, but it makes sure we made no mistakes when mapping mathematical and physical spaces. We have m2·J2 = m2·N2·m2 in the numerator and N·m·s·m2/s2 in the denominator. Hence, the dimensions work out: we get N·m·s as the dimension for L, which is, effectively, the physical dimension of angular momentum. It is also the action dimension, of course, and that cannot be a coincidence. Also note that the E = mc2 equation allows us to re-write it as:
L = a2·E02/(2·ħ·c2) = a2·m02·c2/(2·ħ)

Of course, in quantum mechanics, we associate spin with the magnetic moment of a charged particle, not with its mass as such. Is there a way to link the formula above to the one we have for the quantum-mechanical angular momentum, which is also measured in N·m·s units, and which can only take on one of two possible values: J = +ħ/2 and −ħ/2? It looks like a long shot, right? How do we go from (1/2)·a2·m02·c2/ħ to ±(1/2)∙ħ? Let us do a numerical example. The energy of an electron is typically 0.511 MeV ≈ 8.1871×10−14 N∙m. And a… What value should we take for a?
We have an obvious trio of candidates here: the Bohr radius, the classical electron radius (aka the Thomson scattering length), and the Compton scattering radius.
Let us start with the Bohr radius, so that is about 0.529×10−10 m. We get L = a2·E02/(2·ħ·c2) = 9.9×10−31 N∙m∙s. Now that is about 1.88×104 times ħ/2. That is a huge factor. The Bohr radius cannot be right: we are not looking at an electron in an orbital here. To show it does not make sense, we may want to double-check the analysis by doing the calculation in another way. We said each oscillation will always pack 6.626070040(81)×10−34 joule in energy. So our electron should pack about 1.24×1020 oscillations. The angular momentum (L) we get when using the Bohr radius for a and the value of 6.626×10−34 joule for E0 is equal to 6.49×10−71 N∙m∙s. So that is the angular momentum per oscillation. When we multiply this with the number of oscillations (1.24×1020), we get about 8.01×10−51 N∙m∙s, so that is a totally different number.
The classical electron radius is about 2.818×10−15 m. We get an L that is equal to about 2.81×10−39 N∙m∙s, so now it is a tiny fraction of ħ/2! Hence, this leads us nowhere. Let us go for our last chance to get a meaningful result! Let us use the Compton scattering length, so that is about 2.42631×10−12 m.
This gives us an L of 2.08×10−33 N∙m∙s, which is about 20 times ħ. This is not so bad, but is it good enough? Let us calculate it the other way around: what value should we take for a so as to ensure L = a2·E02/(2·ħ·c2) = ħ/2? Let us write it out:
a = √(ħ2·c2/E02) = ħ·c/E0 = ħ/(m0·c)

In fact, this is the formula for the so-called reduced Compton wavelength. This is perfect. We found what we wanted to find. Substituting this value for a (you can calculate it: it is about 3.8616×10−13 m), we get what we should find:

L = a2·E02/(2·ħ·c2) = [ħ2·c2/E02]·E02/(2·ħ·c2) = ħ/2
This is a rather spectacular result, and one that would – a priori – support the interpretation of the wavefunction that is being suggested in this paper.
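The reader can double-check all of the numbers in this section with a few lines of Python (the radii below are the usual CODATA values, rounded):

```python
hbar = 1.054571817e-34   # J*s
c = 299792458.0          # m/s
E0 = 8.1871e-14          # electron rest energy (~0.511 MeV), J

def L(a: float) -> float:
    """Angular momentum L = a^2*E0^2/(2*hbar*c^2), as derived above."""
    return a**2 * E0**2 / (2 * hbar * c**2)

for name, a in [("Bohr radius", 0.529e-10),
                ("classical electron radius", 2.818e-15),
                ("Compton wavelength", 2.42631e-12)]:
    print(f"{name}: L = {L(a):.3g} N*m*s = {L(a) / (hbar / 2):.3g} times hbar/2")

# Solving L(a) = hbar/2 for a gives a = hbar*c/E0: the reduced Compton wavelength
a_star = hbar * c / E0
print(f"a* = {a_star:.5g} m")                                # ~3.8616e-13 m
print(f"L(a*) = {L(a_star) / (hbar / 2):.3g} times hbar/2")  # 1
```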
# VIII. The boson-fermion dichotomy
Let us do some more thinking on the boson-fermion dichotomy. Again, we should remind ourselves that an actual particle is localized in space and that it can, therefore, not be represented by the elementary wavefunction ψ = a·ei[E·t − px]/ħ or, for a particle at rest, the ψ = a·ei∙E·t/ħ function. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai, and their own ωi = −Ei/ħ. Each of these wavefunctions will contribute some energy to the total energy of the wave packet. Now, we can have another wild but logical theory about this.
Think of the apparent right-handedness of the elementary wavefunction: surely, Nature can’t be bothered about our convention of measuring phase angles clockwise or counterclockwise. Also, the angular momentum can be positive or negative: J = +ħ/2 or −ħ/2. Hence, we would probably like to think that an actual particle – think of an electron, or whatever other particle you’d think of – may consist of right-handed as well as left-handed elementary waves. To be precise, we may think they either consist of (elementary) right-handed waves or, else, of (elementary) left-handed waves. An elementary right-handed wave would be written as:
ψ(θi) = ai·(cosθi + i·sinθi)
In contrast, an elementary left-handed wave would be written as:
ψ(θi) = ai·(cosθi − i·sinθi)
How does that work out with the E0·t argument of our wavefunction? Position is position, and direction is direction, but time? Time has only one direction, but Nature surely does not care how we count time: counting like 1, 2, 3, etcetera or like −1, −2, −3, etcetera is just the same. If we count like 1, 2, 3, etcetera, then we write our wavefunction like:
ψ = a·cos(E0∙t/ħ) − i·a·sin(E0∙t/ħ)
If we count time like −1, −2, −3, etcetera then we write it as:
ψ = a·cos(−E0∙t/ħ) − i·a·sin(−E0∙t/ħ) = a·cos(E0∙t/ħ) + i·a·sin(E0∙t/ħ)
Hence, it is just like the left- or right-handed circular polarization of an electromagnetic wave: we can have both for the matter-wave too! This, then, should explain why we can have either positive or negative quantum-mechanical spin (+ħ/2 or −ħ/2). It is the usual thing: we have two mathematical possibilities here, and so we must have two physical situations that correspond to it.
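In fact, the equivalence between counting time backward and switching handedness is just complex conjugation, as the following toy check (in natural units, E0/ħ = 1) illustrates:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 101)  # time, in natural units (E0/hbar = 1)
psi = np.exp(-1j * t)            # right-handed: cos(t) - i*sin(t)

# Reversing the time count amounts to taking the complex conjugate,
# i.e. to switching to the left-handed wavefunction:
print(np.allclose(np.exp(-1j * (-t)), np.conj(psi)))  # True
```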
It is only natural. If we have left- and right-handed photons – or, generalizing, left- and right-handed bosons – then we should also have left- and right-handed fermions (electrons, protons, etcetera). Back to the dichotomy. The textbook analysis of the dichotomy between bosons and fermions may be epitomized by Richard Feynman’s Lecture on it (Feynman, III-4), which is confusing and – I would dare to say – even inconsistent: how are photons or electrons supposed to know that they need to interfere with a positive or a negative sign? They are not supposed to know anything: knowledge is part of our interpretation of whatever it is that is going on there.
Hence, it is probably best to keep it simple, and think of the dichotomy in terms of the different physical dimensions of the oscillation: newton per kg versus newton per coulomb. And then, of course, we should also note that matter-particles have a rest mass and, therefore, actually carry charge. Photons do not. But both are two-dimensional oscillations, and the point is: the so-called vacuum – and the rest mass of our particle (which is zero for the photon and non-zero for everything else) – give us the natural frequency for both oscillations, which is beautifully summed up in that remarkable equation for the group and phase velocity of the wavefunction, which applies to photons as well as matter-particles:
(vphase/c)·(vgroup/c) = 1 ⇔ vp·vg = c2
The final question then is: why are photons spin-zero particles? Well… We should first remind ourselves of the fact that they do have spin when circularly polarized.[25] Here we may think of the rotation of the equivalent mass of their energy. However, if they are linearly polarized, then there is no spin. Even for circularly polarized waves, the spin angular momentum of photons is a weird concept. If photons have no (rest) mass, then they cannot carry any charge. They should, therefore, not have any magnetic moment. Indeed, what I wrote above shows that an explanation of quantum-mechanical spin requires both mass and charge.[26]
# IX. Concluding remarks
There are, of course, other ways to look at the matter – literally. For example, we can imagine two-dimensional oscillations as circular rather than linear oscillations. Think of a tiny ball, whose center of mass stays where it is, as depicted below. Any rotation – around any axis – will be some combination of a rotation around the two other axes. Hence, we may want to think of a two-dimensional oscillation as an oscillation of a polar and azimuthal angle.
Figure 8: Two-dimensional circular movement
The point of this paper is not to make any definite statements. That would be foolish. Its objective is just to challenge the simplistic mainstream viewpoint on the reality of the wavefunction. Stating that it is a mathematical construct only without physical significance amounts to saying it has no meaning at all. That is, clearly, a non-sustainable proposition.
The interpretation that is offered here looks at amplitude waves as traveling fields. Their physical dimension may be expressed in force per mass unit, as opposed to electromagnetic waves, whose amplitudes are expressed in force per (electric) charge unit. Also, the amplitudes of matter-waves incorporate a phase factor, but this may actually explain the rather enigmatic dichotomy between fermions and bosons and is, therefore, an added bonus.
The interpretation that is offered here has some advantages over other explanations, as it explains the how of diffraction and interference. However, while it offers a great explanation of the wave nature of matter, it does not explain its particle nature: while we think of the energy as being spread out, we will still observe electrons and photons as pointlike particles once they hit the detector. Why is it that a detector can sort of ‘hook’ the whole blob of energy, so to speak?
The interpretation of the wavefunction that is offered here does not explain this. Hence, the complementarity principle of the Copenhagen interpretation of the wavefunction surely remains relevant.
# Appendix 1: The de Broglie relations and energy
The 1/2 factor in Schrödinger’s equation is related to the concept of the effective mass (meff). It is easy to make the wrong calculations. For example, when playing with the famous de Broglie relations – aka the matter-wave equations – one may be tempted to derive the following energy concept:
1. E = h·f and p = h/λ. Therefore, f = E/h and λ = h/p.
2. v = f·λ = (E/h)∙(h/p) = E/p
3. p = m·v. Therefore, E = v·p = m·v2
E = m·v2? This resembles the E = mc2 equation and, therefore, one may be enthused by the discovery, especially because the m·v2 factor also pops up when working with the Least Action Principle in classical mechanics, which states that the path that is followed by a particle will minimize the following integral:

S = ∫ (KE − PE)·dt

Now, we can choose any reference point for the potential energy but, to reflect the energy conservation law, we can select a reference point that ensures the sum of the kinetic and the potential energy is zero throughout the time interval. If the force field is uniform, then the integrand will, effectively, be equal to KE − PE = m·v2.[27]
However, that is classical mechanics and, therefore, not so relevant in the context of the de Broglie equations, and the apparent paradox should be solved by distinguishing between the group and the phase velocity of the matter wave.
# Appendix 2: The concept of the effective mass
The effective mass – as used in Schrödinger’s equation – is a rather enigmatic concept. To make sure we are making the right analysis here, I should start by noting you will usually see Schrödinger’s equation written as:

i·ħ·∂ψ/∂t = −(1/2)·(ħ2/meff)·∇2ψ + U·ψ

This formulation includes a term with the potential energy (U). In free space (no potential), this term disappears, and the equation can be re-written as:
∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t)
We just moved the i·ħ coefficient to the other side, noting that 1/i = –i. Now, in one-dimensional space, and assuming ψ is just the elementary wavefunction (so we substitute a·e^(i·[E·t − p·x]/ħ) for ψ), this implies the following:
a·i·(E/ħ)·e^(i·[E·t − p·x]/ħ) = −i·(ħ/2meff)·a·(p²/ħ²)·e^(i·[E·t − p·x]/ħ)
⇔ E = p²/(2meff) ⇔ meff = m·(v/c)²/2 = m·β²/2
It is an ugly formula: it resembles the kinetic energy formula (K.E. = m·v²/2) but it is, in fact, something completely different. The β²/2 factor ensures the effective mass is always a fraction of the mass itself. To get rid of the ugly 1/2 factor, we may re-define meff as two times the old meff (hence, meffNEW = 2·meffOLD), as a result of which the formula will look somewhat better:
meff = m·(v/c)² = m·β²
We know β varies between 0 and 1 and, therefore, meff will vary between 0 and m. Feynman drops the subscript, and just writes meff as m in his textbook (see Feynman, III-19). On the other hand, the electron mass he uses there is also the electron mass that is used to calculate the size of an atom (see Feynman, III-2-4). As such, the two mass concepts are, effectively, mutually compatible. That is confusing, because the same mass is usually defined as the mass of a stationary electron (see, for example, the Wikipedia article on the electron rest mass[28]).
In the context of the derivation of the electron orbitals, we do have the potential energy term – which is the equivalent of a source term in a diffusion equation – and that may explain why the above-mentioned meff = m·(v/c)² = m·β² formula does not apply.
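For the sake of illustration, here is a quick numeric sketch of that formula. The β value below is an assumption – roughly the v/c ratio of an electron in a hydrogen atom:

```python
# Numeric sketch of the re-defined effective mass formula m_eff = m * beta^2.
# The beta value is an assumption for illustration: it is roughly the v/c
# ratio of an electron in a hydrogen atom (of the order of 1/137).

m_e  = 9.1093837015e-31   # electron (rest) mass in kg
beta = 0.0073             # v/c, assumed here for illustration only

m_eff = m_e * beta**2
print(f"m_eff       = {m_eff:.4e} kg")   # ~4.9e-35 kg
print(f"m_eff / m_e = {beta**2:.4e}")    # ~5.3e-5: a tiny fraction of m indeed
```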
# References
This paper discusses general principles in physics only. Hence, references can be limited to references to physics textbooks only. For ease of reading, any reference to additional material has been limited to a more popular undergrad textbook that can be consulted online: Feynman’s Lectures on Physics (http://www.feynmanlectures.caltech.edu). References are per volume, per chapter and per section. For example, Feynman III-19-3 refers to Volume III, Chapter 19, Section 3.
# Notes
[1] Of course, an actual particle is localized in space and can, therefore, not be represented by the elementary wavefunction ψ = a·e^(i·θ) = a·e^(i·[E·t − p·x]/ħ) = a·cosθ + i·a·sinθ. We must build a wave packet for that: a sum of wavefunctions, each with its own amplitude ak and its own argument θk = (Ek·t – pk·x)/ħ. This is dealt with in this paper as part of the discussion on the mathematical and physical interpretation of the normalization condition.
[2] The N/kg dimension immediately, and naturally, reduces to the dimension of acceleration (m/s²), thereby facilitating a direct interpretation in terms of Newton’s force law.
[3] In physics, a two-spring metaphor is more common. Hence, the pistons in the author’s perpetuum mobile may be replaced by springs.
[4] The author re-derives the equation for the Compton scattering radius in section VII of the paper.
[5] The magnetic force can be analyzed as a relativistic effect (see Feynman II-13-6). The dichotomy between the electric force as a polar vector and the magnetic force as an axial vector disappears in the relativistic four-vector representation of electromagnetism.
[6] For example, when using Schrödinger’s equation in a central field (think of the electron around a proton), the use of polar coordinates is recommended, as it ensures the symmetry of the Hamiltonian under all rotations (see Feynman III-19-3).
[7] This sentiment is usually summed up in the apocryphal quote: “God does not play dice.” The actual quote comes out of one of Einstein’s private letters to Cornelius Lanczos, another scientist who had also emigrated to the US. The full quote is as follows: “You are the only person I know who has the same attitude towards physics as I have: belief in the comprehension of reality through something basically simple and unified… It seems hard to sneak a look at God’s cards. But that He plays dice and uses ‘telepathic’ methods… is something that I cannot believe for a single moment.” (Helen Dukas and Banesh Hoffman, Albert Einstein, the Human Side: New Glimpses from His Archives, 1979)
[8] Of course, both are different velocities: ω is an angular velocity, while v is a linear velocity: ω is measured in radians per second, while v is measured in meter per second. However, the definition of a radian implies radians are measured in distance units. Hence, the physical dimensions are, effectively, the same. As for the formula for the total energy of an oscillator, we should actually write: E = m·a²·ω²/2. The additional factor (a) is the (maximum) amplitude of the oscillator.
[9] We also have a 1/2 factor in the E = m·v²/2 formula. Two remarks may be made here. First, it may be noted this is a non-relativistic formula and, more importantly, incorporates kinetic energy only. Using the Lorentz factor (γ), we can write the relativistically correct formula for the kinetic energy as K.E. = E − E0 = mv·c² − m0·c² = γ·m0·c² − m0·c² = m0·c²·(γ − 1). As for the exclusion of the potential energy, we may note that we may choose our reference point for the potential energy such that the kinetic and potential energy mirror each other. The energy concept that then emerges is the one that is used in the context of the Principle of Least Action: it equals E = m·v². Appendix 1 provides some notes on that.
[10] Instead of two cylinders with pistons, one may also think of connecting two springs with a crankshaft.
[11] It is interesting to note that we may look at the energy in the rotating flywheel as potential energy because it is energy that is associated with motion, albeit circular motion. In physics, one may associate a rotating object with kinetic energy using the rotational equivalent of mass and linear velocity, i.e. rotational inertia (I) and angular velocity ω. The kinetic energy of a rotating object is then given by K.E. = (1/2)·I·ω².
[12] Because of the sideways motion of the connecting rods, the sinusoidal function will describe the linear motion only approximately, but you can easily imagine the idealized limit situation.
[13] The ω² = 1/LC formula gives us the natural or resonant frequency for an electric circuit consisting of a resistor (R), an inductor (L), and a capacitor (C). Writing the formula as ω² = C⁻¹/L introduces the concept of elastance (i.e. the reciprocal of the capacitance), which is the equivalent of the mechanical stiffness (k) of a spring.
[14] The resistance in an electric circuit introduces a damping factor. When analyzing a mechanical spring, one may also want to introduce a drag coefficient. Both are usually defined as a fraction of the inertia, which is the mass for a spring and the inductance for an electric circuit. Hence, we would write the drag coefficient for a spring as γ·m, and the resistance of the circuit as R = γ·L.
[15] Photons are emitted by atomic oscillators: atoms going from one state (energy level) to another. Feynman (Lectures, I-33-3) shows us how to calculate the Q of these atomic oscillators: it is of the order of 10⁸, which means the wave train will last about 10⁻⁸ seconds (to be precise, that is the time it takes for the radiation to die out by a factor 1/e). For example, for sodium light, the radiation will last about 3.2×10⁻⁸ seconds (this is the so-called decay time τ). Now, because the frequency of sodium light is some 500 THz (500×10¹² oscillations per second), this makes for some 16 million oscillations. There is an interesting paradox here: the speed of light tells us that such a wave train will have a length of about 9.6 m! How is that to be reconciled with the pointlike nature of a photon? The paradox can only be explained by relativistic length contraction: in an analysis like this, one needs to distinguish the reference frame of the photon – riding along the wave as it is being emitted, so to speak – and our stationary reference frame, which is that of the emitting atom.
[16] This is a general result and is reflected in the K.E. = T = (1/2)·m·ω²·a²·sin²(ω·t + Δ) and the P.E. = U = k·x²/2 = (1/2)·m·ω²·a²·cos²(ω·t + Δ) formulas for the linear oscillator.
[17] Feynman further formalizes this in his Lecture on Superconductivity (Feynman, III-21-2), in which he refers to Schrödinger’s equation as the “equation for continuity of probabilities”. The analysis is centered on the local conservation of energy, which confirms the interpretation of Schrödinger’s equation as an energy diffusion equation.
[18] The meff is the effective mass of the particle, which depends on the medium. For example, an electron traveling in a solid (a transistor, for example) will have a different effective mass than in an atom. In free space, we can drop the subscript and just write meff = m. Appendix 2 provides some additional notes on the concept. As for the equations, they are easily derived from noting that two complex numbers a + i·b and c + i·d are equal if, and only if, their real and imaginary parts are the same. Now, the ∂ψ/∂t = i·(ħ/meff)·∇²ψ equation amounts to writing something like this: a + i·b = i·(c + i·d). Now, remembering that i² = −1, you can easily figure out that i·(c + i·d) = i·c + i²·d = −d + i·c.
[19] The dimension of B is usually written as N/(m∙A), using the SI unit for current, i.e. the ampere (A). However, 1 C = 1 A∙s and, hence, 1 N/(m∙A) = 1 (N/C)/(m/s).
[20] Of course, multiplication with i amounts to a counterclockwise rotation. Hence, multiplication by –i also amounts to a rotation by 90 degrees, but clockwise. Now, to uniquely identify the clockwise and counterclockwise directions, we need to establish the equivalent of the right-hand rule for a proper geometric interpretation of Schrödinger’s equation in three-dimensional space: if we look at a clock from the back, then its hand will be moving counterclockwise. When writing B = (1/c)∙iE, we assume we are looking in the negative x-direction. If we are looking in the positive x-direction, we should write: B = -(1/c)∙iE. Of course, Nature does not care about our conventions. Hence, both should give the same results in calculations. We will show in a moment they do.
[21] In fact, when multiplying C²/(N·m²) with N²/C², we get N/m², but we can multiply this with 1 = m/m to get the desired result. It is significant that an energy density (joule per unit volume) can also be measured in newton per square meter (force per unit area).
[22] The illustration shows a linearly polarized wave, but the obtained result is general.
[23] The sine and cosine are essentially the same functions, except for the difference in the phase: sinθ = cos(θ − π/2).
[24] I must thank a physics blogger for re-writing the 1/(ε0·μ0) = c² equation like this. See: http://reciprocal.systems/phpBB3/viewtopic.php?t=236 (retrieved on 29 September 2017).
[25] A circularly polarized electromagnetic wave may be analyzed as consisting of two perpendicular electromagnetic plane waves of equal amplitude and 90° difference in phase.
[26] Of course, the reader will now wonder: what about neutrons? How to explain neutron spin? Neutrons are neutral. That is correct, but neutrons are not elementary: they consist of (charged) quarks. Hence, neutron spin can (or should) be explained by the spin of the underlying quarks.
[27] We detailed the mathematical framework and detailed calculations in the following online article: https://readingfeynman.org/2017/09/15/the-principle-of-least-action-re-visited.
[28] https://en.wikipedia.org/wiki/Electron_rest_mass (retrieved on 29 September 2017).
# The Strange Theory of Light and Matter (III)
Pre-script (dated 26 June 2020): This post has become less relevant (even irrelevant, perhaps) because my views on all things quantum-mechanical have evolved significantly as a result of my progression towards a more complete realist (classical) interpretation of quantum physics. I keep blog posts like these mainly because I want to keep track of where I came from. I might review them one day, but I currently don’t have the time or energy for it. 🙂
Original post:
This is my third and final set of comments on Feynman’s popular little booklet: The Strange Theory of Light and Matter, also known as Feynman’s Lectures on Quantum Electrodynamics (QED).
The origin of this short lecture series is quite moving: the death of Alix G. Mautner, a good friend of Feynman’s. She was always curious about physics but, as her career was in English literature, she never got to grips with the math. Hence, Feynman introduces this 1985 publication by writing: “Here are the lectures I really prepared for Alix, but unfortunately I can’t tell them to her directly, now.”
Alix Mautner died from a brain tumor, and it is her husband, Leonard Mautner, who sponsored the QED lecture series at UCLA, which Ralph Leighton transcribed and published as the booklet that we’re talking about here. Feynman himself died a few years later, at the relatively young age of 69. Tragic coincidence: he died of cancer too. Despite all this weirdness, Feynman’s QED never quite got the same iconic status as, let’s say, Stephen Hawking’s Brief History of Time. I wonder why, but the answer to that question is probably in the realm of chaos theory. 🙂 I actually just saw the movie on Stephen Hawking’s life (The Theory of Everything), and I noted another strange coincidence: Jane Wilde, Hawking’s first wife, also has a PhD in literature. It strikes me that, while the movie documents that Jane Wilde gave Hawking three children, after which he divorced her to marry his nurse, Elaine, the movie does not mention that he separated from Elaine too, and that he has some kind of ‘working relationship’ with Jane again.
Hmm… What to say? I should get back to quantum mechanics here or, to be precise, to quantum electrodynamics.
One reason why Feynman’s Strange Theory of Light and Matter did not sell like Hawking’s Brief History of Time, might well be that, in some places, the text is not entirely accurate. Why? Who knows? It would make for an interesting PhD thesis in History of Science. Unfortunately, I have no time for such a PhD thesis. Hence, I must assume that Richard Feynman simply didn’t have much time or energy left to correct some of the writing of Ralph Leighton, who transcribed and edited these four short lectures a few years before Feynman’s death. Indeed, when everything is said and done, Ralph Leighton is not a physicist and, hence, I think he did compromise – just a little bit – on accuracy for the sake of readability. Ralph Leighton’s father, Robert Leighton, an eminent physicist who worked with Feynman, would probably have done a much better job.
I feel that one should not compromise on accuracy, even when trying to write something reader-friendly. That’s why I am writing this blog, and why I am writing three posts specifically on this little booklet. Indeed, while I’d warmly recommend that little book on QED as an excellent non-mathematical introduction to the weird world of quantum mechanics, I’d also say that, while Ralph Leighton’s story is great, it’s also, in some places, not entirely accurate indeed.
So… Well… I want to do better than Ralph Leighton here. Nothing more. Nothing less. 🙂 Let’s go for it.
I. Probability amplitudes: what are they?
The greatest achievement of that little QED publication is that it manages to avoid any reference to wave functions and other complicated mathematical constructs: all of the complexity of quantum mechanics is reduced to three basic events or actions and, hence, three basic amplitudes which are represented as ‘arrows’—literally.
Now… Well… You may or may not know that a (probability) amplitude is actually a complex number, but it’s not so easy to intuitively understand the concept of a complex number. In contrast, everyone easily ‘gets’ the concept of an ‘arrow’. Hence, from a pedagogical point of view, representing complex numbers by some ‘arrow’ is truly a stroke of genius.
Whatever we call it, a complex number or an ‘arrow’, a probability amplitude is something with (a) a magnitude and (b) a phase. As such, it resembles a vector, but it’s not quite the same, if only because we’ll impose some restrictions on the magnitude. But I shouldn’t get ahead of myself. Let’s start with the basics.
A magnitude is some real positive number, like a length, but you should not associate it with some spatial dimension in physical space: it’s just a number. As for the phase, we could associate that concept with some direction but, again, you should just think of it as a direction in a mathematical space, not in the real (physical) space.
Let me insert a parenthesis here. If I say the ‘real’ or ‘physical’ space, I mean the space in which the electrons and photons and all other real-life objects that we’re looking at exist and move. That’s a non-mathematical definition. In fact, in math, the real space is defined as a coordinate space, with sets of real numbers (vectors) as coordinates, so… Well… That’s a mathematical space only, not the ‘real’ (physical) space. So the real (vector) space is not real. 🙂 The mathematical real space may, or may not, accurately describe the real (physical) space. Indeed, you may have heard that physical space is curved because of the presence of massive objects, which means that the real coordinate space will actually not describe it very accurately. I know that’s a bit confusing but I hope you understand what I mean: if mathematicians talk about the real space, they do not mean the real space. They refer to a vector space, i.e. a mathematical construct. To avoid confusion, I’ll use the term ‘physical space’ rather than ‘real’ space in the future. So I’ll let the mathematicians get away with using the term ‘real space’ for something that isn’t real actually. 🙂
End of digression. Let’s discuss these two mathematical concepts – magnitude and phase – somewhat more in detail.
A. The magnitude
Let’s start with the magnitude or ‘length’ of our arrow. We know that we have to square these lengths to find some probability, i.e. some real number between 0 and 1. Hence, the length of our arrows cannot be larger than one. That’s the restriction I mentioned already, and this ‘normalization’ condition reinforces the point that these ‘arrows’ do not have any spatial dimension (not in any real space anyway): they represent a function. To be specific, they represent a wavefunction.
If we’d be talking complex numbers instead of ‘arrows’, we’d say the absolute value of the complex number cannot be larger than one. We’d also say that, to find the probability, we should take the absolute square of the complex number, so that’s the square of the magnitude or absolute value of the complex number indeed. We cannot just square the complex number: it has to be the square of the absolute value.
Why? Well… Just write it out. [You can skip this section if you’re not interested in complex numbers, but I would recommend you try to understand. It’s not that difficult. Indeed, if you’re reading this, you’re most likely to understand something of complex numbers and, hence, you should be able to work your way through it. Just remember that a complex number is like a two-dimensional number, which is why it’s sometimes written using bold-face (z), rather than regular font (z). However, I should immediately add this convention is usually not followed. I like the boldface though, and so I’ll try to use it in this post.] The square of a complex number z = a + bi is equal to z² = a² + 2abi – b², while the square of its absolute value (i.e. the absolute square) is |z|² = [√(a² + b²)]² = a² + b². So you can immediately see that the square and the absolute square of a complex number are two very different things indeed: it’s not only the 2abi term, but there’s also the minus sign in the first expression, because of the i² = –1 factor. In case of doubt, always remember that the square of a complex number may actually yield a negative number, as evidenced by the definition of the imaginary unit itself: i² = –1.
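A quick numerical example makes the difference tangible – take z = 3 + 4i:

```latex
z^2 = (3 + 4i)^2 = 9 + 24i + 16\,i^2 = -7 + 24i
\qquad\text{versus}\qquad
|z|^2 = 3^2 + 4^2 = 25
```

The square is just another complex number – with a negative real part, even – while the absolute square is the positive real number we need for a probability.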
End of digression. Feynman and Leighton manage to avoid any reference to complex numbers in that short series of four lectures and, hence, all they need to do is explain how one squares a length. Kids learn how to do that when making a square out of rectangular paper: they’ll fold one corner of the paper until it meets the opposite edge, forming a triangle first. They’ll then cut or tear off the extra paper, and then unfold. Done. [I could note that the folding is a 90 degree rotation of the original length (or width, I should say) which, in mathematical terms, is equivalent to multiplying that length with the imaginary unit (i). But I am sure the kids involved would think I am crazy if I’d say this. 🙂] So let me get back to Feynman’s arrows.
B. The phase
Feynman and Leighton’s second pedagogical stroke of genius is the metaphor of the ‘stopwatch’ and the ‘stopwatch hand’ for the variable phase. Indeed, although I think it’s worth explaining why z = a + bi = r·cosφ + i·r·sinφ in the illustration below can be written as z = r·e^(iφ) = |z|·e^(iφ), understanding Euler’s representation of a complex number as a complex exponential requires swallowing a very substantial piece of math (if you’d want to do that, I’ll refer you to one of my posts on complex numbers).
The metaphor of the stopwatch represents a periodic function. To be precise, it represents a sinusoid, i.e. a smooth repetitive oscillation. Now, the stopwatch hand represents the phase of that function, i.e. the φ angle in the illustration above. That angle is a function of time: the speed with which the stopwatch turns is related to some frequency, i.e. the number of oscillations per unit of time (i.e. per second).
You should now wonder: what frequency? What oscillations are we talking about here? Well… As we’re talking photons and electrons here, we should distinguish the two:
1. For photons, the frequency is given by Planck’s energy-frequency relation, which relates the energy (E) of a photon (1.5 to 3.5 eV for visible light) to its frequency (ν). It’s a simple proportional relation, with Planck’s constant (h) as the proportionality constant: E = hν, or ν = E/h.
2. For electrons, we have the de Broglie relation, which looks similar to the Planck relation (E = hf, or f = E/h) but, as you know, it’s something different. Indeed, these so-called matter waves are not so easy to interpret because there actually is no precise frequency f. In fact, the matter wave representing some particle in space will consist of a potentially infinite number of waves, all superimposed one over another, as illustrated below.
For the sake of accuracy, I should mention that the animation above has its limitations: the wavetrain is complex-valued and, hence, has a real as well as an imaginary part, so it’s something like the blob underneath. Two functions in one, so to speak: the imaginary part follows the real part with a phase difference of 90 degrees (or π/2 radians). Indeed, if the wavefunction is a regular complex exponential r·e^(iθ), then r·sinθ = r·cos(θ − π/2), which proves the point: we have two functions in one here. 🙂 I am actually just repeating what I said before already: the probability amplitude, or the wavefunction, is a complex number. You’ll usually see it written as Ψ (psi) or Φ (phi). Here also, using boldface (Ψ or Φ instead of Ψ or Φ) would usefully remind the reader that we’re talking something ‘two-dimensional’ (in mathematical space, that is), but this convention is usually not followed.
In any case… Back to frequencies. The point to note is that, when it comes to analyzing electrons (or any other matter-particle), we’re dealing with a range of frequencies f really (or, what amounts to the same, a range of wavelengths λ) and, hence, we should write Δf = ΔE/h, which is just one of the many expressions of the Uncertainty Principle in quantum mechanics.
Now, that’s just one of the complications. Another difficulty is that matter-particles, such as electrons, have some rest mass, and so that enters the energy equation as well (literally). Last but not least, one should distinguish between the group velocity and the phase velocity of matter waves. As you can imagine, that makes for a very complicated relationship between ‘the’ wavelength and ‘the’ frequency. In fact, what I write above should make it abundantly clear that there’s no such thing as the wavelength, or the frequency: it’s a range really, related to the fundamental uncertainty in quantum physics. I’ll come back to that, and so you shouldn’t worry about it here. Just note that the stopwatch metaphor doesn’t work very well for an electron!
In his postmortem lectures for Alix Mautner, Feynman avoids all these complications. Frankly, I think that’s a missed opportunity because I do not think it’s all that incomprehensible. In fact, I write all that follows because I do want you to understand the basics of waves. It’s not difficult. High-school math is enough here. Let’s go for it.
One turn of the stopwatch corresponds to one cycle, and one cycle (i.e. one full oscillation) covers 360 degrees or, to use a more natural unit, 2π radians. [Why is the radian a more natural unit? Because it measures an angle in terms of the distance unit itself, rather than in arbitrary 1/360 cuts of a full circle. Indeed, remember that the circumference of the unit circle is 2π.] So our frequency ν (expressed in cycles per second) corresponds to a so-called angular frequency ω = 2πν. From this formula, it should be obvious that ω is measured in radians per second.
We can also link this formula to the period of the oscillation, T, i.e. the duration of one cycle. T = 1/ν and, hence, ω = 2π/T. It’s all nicely illustrated below. [And, yes, it’s an animation from Wikipedia: nice and simple.]
The easy math above now allows us to formally write the phase of a wavefunction – let’s denote the wavefunction as φ (phi), and the phase as θ (theta) – as a function of time (t) using the angular frequency ω. So we can write: θ = ωt = 2π·ν·t. Now, the wave travels through space, and the two illustrations above (i.e. the one with the super-imposed waves, and the one with the complex wave train) would usually represent a wave shape at some fixed point in time. Hence, the horizontal axis is not t but x. Hence, we can and should write the phase not only as a function of time but also of space. So how do we do that? Well… If the hypothesis is that the wave travels through space at some fixed speed c, then its frequency ν will also determine its wavelength λ. It’s a simple relationship: c = λν (the number of oscillations per second times the length of one wavelength should give you the distance traveled per second, so that’s, effectively, the wave’s speed).
Now that we’ve expressed the frequency in radians per second, we can also express the wavelength in radians per unit distance too. That’s what the wavenumber does: think of it as the spatial frequency of the wave. We denote the wavenumber by k, and write: k = 2π/λ. [Just do a numerical example when you have difficulty following. For example, if you’d assume the wavelength is 5 units distance (i.e. 5 meter) – that’s a typical VHF radio frequency: ν = (3×10⁸ m/s)/(5 m) = 0.6×10⁸ Hz = 60 MHz – then that would correspond to (2π radians)/(5 m) ≈ 1.2566 radians per meter. Of course, we can also express the wave number in oscillations per unit distance. In that case, we’d have to divide k by 2π, because one cycle corresponds to 2π radians. So we get the reciprocal of the wavelength: 1/λ. In our example, 1/λ is, of course, 1/5 = 0.2, so that’s a fifth of a full cycle. You can also think of it as the number of waves (or wavelengths) per meter: if the wavelength is λ, then one can fit 1/λ waves in a meter.]
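If you want to check those numbers yourself, here’s the same example as a minimal script (the 5 m wavelength is just the illustrative value used above):

```python
import math

c   = 3.0e8   # speed of light in m/s (rounded, as in the text)
lam = 5.0     # wavelength in m: the VHF example from the text

nu    = c / lam             # frequency: 6.0e7 Hz, i.e. 60 MHz
k     = 2 * math.pi / lam   # wavenumber: ~1.2566 radians per meter
omega = 2 * math.pi * nu    # angular frequency in radians per second

print(nu, k, 1 / lam)   # 60000000.0, ~1.2566, 0.2 cycles per meter
print(omega / k)        # recovers the wave speed c = 3.0e8 m/s
```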
Now, from the ω = 2πν, c = λν and k = 2π/λ relations, it’s obvious that k = 2π/λ = 2π/(c/ν) = (2πν)/c = ω/c. To sum it all up, frequencies and wavelengths, in time and in space, are all related through the speed of propagation of the wave c. More specifically, they’re related as follows:
c = λν = ω/k
From that, it’s easy to see that k = ω/c, which we’ll use in a moment. Now, it’s obvious that the periodicity of the wave implies that we can find the same phase by going one oscillation (or a multiple number of oscillations) back or forward in time, or in space. In fact, we can also find the same phase by letting both time and space vary: for a wave traveling in the positive x-direction, a point of constant phase moves forward in space as time moves forward. In other words, when it comes to the phase, time and space sort of substitute for each other: adding a little time amounts to the same as subtracting a little distance. Let me quote Feynman on this: “This is easily seen by considering the mathematical behavior of a(t − r/c). Evidently, if we add a little time Δt, we get the same value for a(t − r/c) as we would have if we had subtracted a little distance: Δr = −cΔt.” The variable a stands for the acceleration of an electric charge here, causing an electromagnetic wave, but the same logic is valid for the phase, with a minor twist though: we’re talking a nice periodic function here, and so we need to put the angular frequency in front. Hence, the rate of change of the phase with respect to time is measured by the angular frequency ω. In short, we write:
θ = ω(t–x/c) = ωt–kx
Hence, we can re-write the wavefunction, in terms of its phase, as follows:
φ(θ) = φ[θ(x, t)] = φ[ωt–kx]
Note that, if the wave would be traveling in the ‘other’ direction (i.e. in the negative x-direction), we’d write φ(θ) = φ[kx+ωt]. Time travels in one direction only, of course, but so one minus sign has to be there because of the logic involved in adding time and subtracting distance. You can work out an example (with a sine or cosine wave, for example) for yourself.
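Or let me do it for you – take a cosine wave traveling in the positive x-direction, and shift the time by a little Δt and the position by the corresponding c·Δt:

```latex
\cos\big(\omega(t + \Delta t) - k(x + c\,\Delta t)\big)
= \cos\big(\omega t - kx + (\omega - kc)\,\Delta t\big)
= \cos(\omega t - kx)
```

The shifts cancel each other out in the phase because ω = kc, which is just another way of saying that a point of constant phase travels in the positive x-direction at speed c.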
So what, you’ll say? Well… Nothing. I just hope you agree that all of this isn’t rocket science: it’s just high-school math. But so it shows you what that stopwatch really is and, hence, I – but who am I? – would have put at least one or two footnotes on this in a text like Feynman’s QED.
Now, let me make a much longer and more serious digression:
Digression 1: on relativity and spacetime
As you can see from the argument (or phase) of that wave function φ(θ) = φ[θ(x, t)] = φ[ωt–kx] = φ[–k(x–ct)], any wave equation establishes a deep relation between the wave itself (i.e. the ‘thing’ we’re describing) and space and time. In fact, that’s what the whole wave equation is all about! So let me say a few things more about that.
Because you know a thing or two about physics, you may ask: when we’re talking time, whose time are we talking about? Indeed, if we’re talking photons going from A to B, these photons will be traveling at or near the speed of light and, hence, their clock, as seen from our (inertial) frame of reference, doesn’t move. Likewise, according to the photon, our clock seems to be standing still.
Let me put the issue to bed immediately: we’re looking at things from our point of view. Hence, we’re obviously using our clock, not theirs. Having said that, the analysis is actually fully consistent with relativity theory. Why? Well… What do you expect? If it wasn’t, the analysis would obviously not be valid. 🙂 To illustrate that it’s consistent with relativity theory, I can mention, for example, that the (probability) amplitude for a photon to travel from point A to B depends on the spacetime interval, which is invariant. Hence, A and B are four-dimensional points in spacetime, involving both spatial as well as time coordinates: A = (xA, yA, zA, tA) and B = (xB, yB, zB, tB). And so the ‘distance’ – as measured through the spacetime interval – is invariant.
Now, having said that, we should draw some attention to the intimate relationship between space and time which, let me remind you, results from the absoluteness of the speed of light. Indeed, one will always measure the speed of light c as being equal to 299,792,458 m/s, always and everywhere. It does not depend on your reference frame (inertial or moving). That’s why the constant c anchors all laws in physics, and why we can write what we write above, i.e. include both distance (x) as well as time (t) in the wave function φ = φ(x, t) = φ[ωt–kx] = φ[–k(x–ct)]. The k and ω are related through the ω/k = c relationship: the speed of light links the frequency in time (ν = ω/2π = 1/T) with the frequency in space (i.e. the wavenumber or spatial frequency k). There is only one degree of freedom here: the frequency—in space or in time, it doesn’t matter: k and ω are not independent. [As noted above, the relationship between the frequency in time and in space is not so obvious for electrons, or for matter waves in general: for those matter-waves, we need to distinguish group and phase velocity, and so we don’t have a unique frequency.]
Let me make another small digression within the digression here. Thinking about travel at the speed of light invariably leads to paradoxes. In previous posts, I explained the mechanism of light emission: a photon is emitted – one photon only – when an electron jumps back to its ground state after being excited. Hence, we may imagine a photon as a transient electromagnetic wave – something like what’s pictured below. Now, the decay time of this transient oscillation (τ) is measured in nanoseconds, i.e. billionths of a second (1 ns = 1×10⁻⁹ s): the decay time for sodium light, for example, is some 30 ns only.
However, because of the tremendous speed of light, that still makes for a wavetrain that’s like ten meters long, at least (30×10⁻⁹ s times 3×10⁸ m/s is nine meters, but you should note that the decay time measures the time for the oscillation to die out by a factor 1/e, so the oscillation itself lasts longer than that). Those nine or ten meters cover like 16 to 17 million oscillations (the wavelength of sodium light is about 600 nm and, hence, 10 meters fit almost 17 million oscillations indeed). Now, how can we reconcile the image of a photon as a ten-meter long wavetrain with the image of a photon as a point particle?
The answer to that question is paradoxical: from our perspective, anything traveling at the speed of light – including this nine or ten meter ‘long’ photon – will have zero length because of the relativistic length contraction effect. Length contraction? Yes. I’ll let you look it up, because… Well… It’s not easy to grasp. Indeed, from the three measurable effects on objects moving at relativistic speeds – i.e. (1) an increase of the mass (the energy needed to further accelerate particles in particle accelerators increases dramatically at speeds nearer to c), (2) time dilation, i.e. a slowing down of the (internal) clock (because of their relativistic speeds when entering the Earth’s atmosphere, the measured half-life of muons is five times that when at rest), and (3) length contraction – length contraction is probably the most paradoxical of all.
Let me end this digression with yet another short note. I said that one will always measure the speed of light c as being equal to 299,792,458 m/s, always and everywhere and, hence, that it does not depend on your reference frame (inertial or moving). Well… That’s true and not true at the same time. I actually need to nuance that statement a bit in light of what follows: an individual photon does have an amplitude to travel faster or slower than c, and when discussing matter waves (such as the wavefunction that’s associated with an electron), we can have phase velocities that are faster than light! However, when calculating those amplitudes, c is a constant.
That doesn’t make sense, you’ll say. Well… What can I say? That’s how it is unfortunately. I need to move on and, hence, I’ll end this digression and get back to the main story line. Part I explained what probability amplitudes are—or at least tried to do so. Now it’s time for part II: the building blocks of all of quantum electrodynamics (QED).
II. The building blocks: P(A to B), E(A to B) and j
The three basic ‘events’ (and, hence, amplitudes) in QED are the following:
1. P(A to B)
P(A to B) is the (probability) amplitude for a photon to travel from point A to B. However, I should immediately note that A and B are points in spacetime. Therefore, we associate them not only with some specific (x, y, z) position in space, but also with some specific time t. Now, quantum-mechanical theory gives us an easy formula for P(A to B): it depends on the so-called (spacetime) interval between the two points A and B, i.e. I = Δr² – Δt² = (x2–x1)² + (y2–y1)² + (z2–z1)² – (t2–t1)². The point to note is that the spacetime interval takes both the distance in space as well as the ‘distance’ in time into account. As I mentioned already, this spacetime interval does not depend on our reference frame and, hence, it’s invariant (as long as we’re talking reference frames that move with constant speed relative to each other). Also note that we should measure time and distance in equivalent units when using that Δr² – Δt² formula for I. So we either measure distance in light-seconds or, else, we measure time in units that correspond to the time that’s needed for light to travel one meter. If no equivalent units are adopted, the formula is I = Δr² – c²·Δt².
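To see that invariance at work numerically, here’s a minimal sketch in natural units (c = 1), with two made-up events and a Lorentz boost along the x-axis (the coordinates are purely illustrative):

```python
import math

def interval(dx, dy, dz, dt):
    """Spacetime interval I = (delta r)^2 - (delta t)^2, with c = 1."""
    return dx**2 + dy**2 + dz**2 - dt**2

def boost_x(x, t, beta):
    """Lorentz boost along the x-axis with velocity beta = v/c."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (x - beta * t), gamma * (t - beta * x)

# Two events, A = (0, 0, 0, 0) and B = (4, 1, 2, 6), in some inertial frame:
dx, dy, dz, dt = 4.0, 1.0, 2.0, 6.0
print(interval(dx, dy, dz, dt))    # -15.0

# The same two events, as seen from a frame moving at 60% of lightspeed:
dx2, dt2 = boost_x(dx, dt, beta=0.6)
print(interval(dx2, dy, dz, dt2))  # -15.0 again (up to rounding)
```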
Now, in quantum theory, anything is possible and, hence, not only do we allow for crooked paths, but we also allow for the difference in time to differ from the time you’d expect a photon to need to travel along some curve (whose length we’ll denote by l), i.e. l/c. Hence, our photon may actually travel slower or faster than the speed of light c! There is one lucky break, however, that makes all come out alright: it’s easy to show that the amplitudes associated with the odd paths and strange timings generally cancel each other out. [That’s what the QED booklet shows.] Hence, what remains are the paths that are equal or, more importantly, very near to the so-called ‘light-like’ intervals in spacetime. The net result is that light – even one single photon – effectively uses a (very) small core of space as it travels, as evidenced by the fact that even one single photon interferes with itself when traveling through a slit or a small hole!
[If you now wonder what it means for a photon to interfere with itself, let me just give you the easy explanation: it may change its path. We assume it was traveling in a straight line – if only because it left the source at some point in time and then arrived at the slit obviously – but so it no longer travels in a straight line after going through the slit. So that’s what we mean here.]
2. E(A to B)
E(A to B) is the (probability) amplitude for an electron to travel from point A to B. The formula for E(A to B) is much more complicated, and it’s the one I want to discuss somewhat more in detail in this post. It depends on some complex number j (see the next remark) and some real number n.
3. j
Finally, an electron could emit or absorb a photon, and the amplitude associated with this event is denoted by j, for junction number. It’s the same number j as the one mentioned when discussing E(A to B) above.
Now, this junction number is often referred to as the coupling constant or the fine-structure constant. However, the truth is, as I pointed out in my previous post, that these numbers are related, but they are not quite the same: α is the square of j, so we have α = j². There is also one more, related, number: the gauge parameter, which is denoted by g (despite the g notation, it has nothing to do with gravitation). The value of g is the square root of 4πε0α, so g² = 4πε0α. I’ll come back to this. Let me first make an awfully long digression on the fine-structure constant. It will be awfully long. So long that it’s actually part of the ‘core’ of this post.
Digression 2: on the fine-structure constant, Planck units and the Bohr radius
The value for j is approximately –0.08542454.
How do we know that?
The easy answer to that question is: physicists measured it. In fact, they usually publish the measured value as the square of the (absolute value of) j, which is that fine-structure constant α. Its value is published (and updated) by the US National Institute of Standards and Technology (NIST). To be precise, the currently accepted value of α is 7.29735257×10⁻³. In case you doubt, just check that square root:
j = –0.08542454 ≈ –√0.00729735257 = –√α
As noted in Feynman’s (or Leighton’s) QED, older and/or more popular books will usually mention 1/α as the ‘magical’ number, so the ‘special’ number you may have seen is the inverse fine-structure constant, which is about 137, but not quite:
1/α = 137.035999074 ± 0.000000044
I am adding the standard uncertainty just to give you an idea of how precise these measurements are. 🙂 About 0.32 parts per billion (just divide the uncertainty by the 137.035999074 number). So that‘s the number that excites popular writers, including Leighton. Indeed, as Leighton puts it:
“Where does this number come from? Nobody knows. It’s one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the “hand of God” wrote that number, and “we don’t know how He pushed his pencil.” We know what kind of a dance to do experimentally to measure this number very accurately, but we don’t know what kind of dance to do on the computer to make this number come out, without putting it in secretly!”
Is it Leighton, or did Feynman really say this? Not sure. While the fine-structure constant is a very special number, it’s not the only ‘special’ number. In fact, we derive it from other ‘magical’ numbers. To be specific, I’ll show you how we derive it from the fundamental properties – as measured, of course – of the electron. So, in fact, I should say that we do know how to make this number come out, which makes me doubt whether Feynman really said what Leighton said he said. 🙂
So we can derive α from some other numbers. That brings me to the more complicated answer to the question as to what the value of j really is: j‘s value is the electron charge expressed in Planck units, which I’ll denote by –eP:
j = –eP
[You may want to reflect on this, and quickly verify on the Web. The Planck unit of electric charge, expressed in coulomb, is about 1.87555×10⁻¹⁸ C. If you multiply that with j = –eP, i.e. with –0.08542454, you get the right answer: the electron charge is about –0.160217×10⁻¹⁸ C.]
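The verification suggested in the bracketed remark takes just a few lines (the numerical values are the ones quoted above):

```python
q_P = 1.87555e-18   # Planck charge in coulomb, as quoted above
j   = -0.08542454   # the junction number, i.e. -sqrt(alpha)

print(j**2)      # ~0.00729735..., i.e. the fine-structure constant alpha
print(j * q_P)   # ~-1.60218e-19 C, i.e. the electron charge
```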
Now that is strange.
Why? Well… For starters, when doing all those quantum-mechanical calculations, we like to think of j as a dimensionless number: a coupling constant. But so here we do have a dimension: electric charge.
Let’s look at the basics. If j is –√α, and it’s also equal to –eP, then the fine-structure constant must also be equal to the square of the electron charge eP, so we can write:
α = eP²
You’ll say: yes, so what? Well… I am pretty sure that, if you’ve ever seen a formula for α, it’s surely not this simple j = –eP or α = eP² formula. What you’ve seen, most likely, is one or more of the expressions below:
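Here is that standard set – five identities, written in the usual notation (RK is the von Klitzing constant and re the classical electron radius, both defined below):

```latex
\alpha
= \frac{e^2}{4\pi\varepsilon_0\,\hbar c}
= \frac{\mu_0\, e^2\, c}{2h}
= \frac{k_e\, e^2}{\hbar c}
= \frac{\mu_0\, c}{2R_K}
= \frac{r_e\, m_e\, c}{\hbar}
```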
That’s a pretty impressive collection of physical constants, isn’t it? 🙂 They’re all different but, somehow, when we combine them in one or the other ratio – we have no less than five different expressions here (each identity is a separate expression), and I could give you a few more! – we get the very same number: α. Now that is what I call strange. Truly strange. Incomprehensibly weird!
You’ll say… Well… Those constants must all be related… Of course! That’s exactly the point I am making here. They are, but look how different they are: me measures mass, re measures distance, e is a charge, and so these are all very different numbers with very different dimensions. Yet, somehow, they are all related through this α number. Frankly, I do not know of any other expression that better illustrates some kind of underlying unity in Nature than the one with those five identities above.
Let’s have a closer look at those constants. You know most of them already. The only constants you may not have seen before are μ0, RK and, perhaps, re as well as me. However, these can easily be defined as some easy function of the constants that you did see before, so let me quickly do that:
1. The μ0 constant is the so-called magnetic constant. It’s something similar to ε0, and it’s referred to as the magnetic permeability of the vacuum. So it’s just like the (electric) permittivity of the vacuum (i.e. the electric constant ε0) and the only reason why this blog hasn’t mentioned this constant before is because I haven’t really discussed magnetic fields so far. I only talked about the electric field vector. In any case, you know that the electric and magnetic force are part and parcel of the same phenomenon (i.e. the electromagnetic interaction between charged particles) and, hence, they are closely related. To be precise, μ0ε0 = 1/c² = c⁻². So that shows the first and second expression for α are, effectively, fully equivalent. [Just in case you’d doubt that μ0ε0 = 1/c², let me give you the values: μ0 = 4π·10⁻⁷ N/A², and ε0 = 10⁷/(4π·c²) C²/(N·m²). Just plug them in, and you’ll see it’s bang on. Moreover, note that the ampere (A) unit is equal to the coulomb per second unit (C/s), so even the units come out alright. 🙂 Of course they do!]
2. The ke constant is the Coulomb constant and, from its definition ke = 1/4πε0, it’s easy to see how those two expressions are, in turn, equivalent with the third expression for α.
3. The RK constant is the so-called von Klitzing constant. Huh? Yes. I know. I am pretty sure you’ve never ever heard of that one before. Don’t worry about it. It’s, quite simply, equal to RK = h/e². Hence, substituting (and don’t forget that h = 2πħ) will demonstrate the equivalence of the fourth expression for α.
4. Finally, the re factor is the classical electron radius, which is usually written as a function of me, i.e. the electron mass: re = e²/(4πε0·me·c²). Also note that this implies that re·me = e²/(4πε0·c²). In words: the product of the electron mass and the electron radius is equal to some constant involving the electron charge (e), the electric constant (ε0), and c (the speed of light).
I am sure you’re under some kind of ‘formula shock’ now. But you should just take a deep breath and read on. The point to note is that all these very different things are all related through α.
So, again, what is that α really? Well… A strange number indeed. It’s dimensionless (so we don’t measure in kg, m/s, eV·s or whatever) and it pops up everywhere. [Of course, you’ll say: “What’s everywhere? This is the first time I‘ve heard of it!” :-)]
Well… Let me start by explaining the term itself. The fine structure in the name refers to the splitting of the spectral lines of atoms. That’s a very fine structure indeed. 🙂 We also have a so-called hyperfine structure. Both are illustrated below for the hydrogen atom. The numbers n, J, I and F are quantum numbers used in the quantum-mechanical explanation of the emission spectrum, which is also depicted below, but note that the illustration gives you the so-called Balmer series only, i.e. the colors in the visible light spectrum (there are many more ‘colors’ in the high-energy ultraviolet and the low-energy infrared range).
To be precise: (1) n is the principal quantum number: here it takes the values 1 or 2, and we could say these are the principal shells; (2) the S, P, D,… orbitals (which are usually written in lower case: s, p, d, f, g, h and i) correspond to the (orbital) angular momentum quantum number l = 0, 1, 2,…, so we could say it’s the subshell; (3) the J values correspond to the so-called magnetic quantum number m, which goes from –l to +l; (4) the fourth quantum number is the spin angular momentum s. I’ve copied another diagram below so you see how it works, more or less, that is.
Now, our fine-structure constant is related to these quantum numbers. How exactly is a bit of a long story, and so I’ll just copy Wikipedia’s summary on this: “The gross structure of line spectra is the line spectra predicted by the quantum mechanics of non-relativistic electrons with no spin. For a hydrogenic atom, the gross structure energy levels only depend on the principal quantum number n. However, a more accurate model takes into account relativistic and spin effects, which break the degeneracy of the energy levels and split the spectral lines. The scale of the fine structure splitting relative to the gross structure energies is on the order of (Zα)², where Z is the atomic number and α is the fine-structure constant.” There you go. You’ll say: so what? Well… Nothing. If you aren’t amazed by that, you should stop reading this.
It is an ‘amazing’ number, indeed, and, hence, it does qualify for being “one of the greatest damn mysteries of physics”, as Feynman and/or Leighton put it. Having said that, I would not go as far as to write that it’s “a magic number that comes to us with no understanding by man.” In fact, I think Feynman/Leighton could have done a much better job when explaining what it’s all about. So, yes, I hope to do better than Leighton here and, as he’s still alive, I actually hope he reads this. 🙂
The point is: α is not the only weird number. What’s particular about it, as a physical constant, is that it’s dimensionless, because it relates a number of other physical constants in such a way that the units fall away. Having said that, the Planck or Boltzmann constant are at least as weird.
So… What is this all about? Well… You’ve probably heard about the so-called fine-tuning problem in physics and, if you’re like me, your first reaction will be to associate fine-tuning with fine-structure. However, the two terms have nothing in common, except for four letters. 🙂 OK. Well… I am exaggerating here. The two terms are actually related, to some extent at least, but let me explain how.
The term fine-tuning refers to the fact that all the parameters or constants in the so-called Standard Model of physics are, indeed, all related to each other in the way they are. We can’t sort of just turn the knob of one and change it, because everything falls apart then. So, in essence, the fine-tuning problem in physics is more like a philosophical question: why is the value of all these physical constants and parameters exactly what it is? So it’s like asking: could we change some of the ‘constants’ and still end up with the world we’re living in? Or, if it would be some different world, how would it look like? What if one of these constants was some other number? What if ke or ε0 was some other number? In short, and in light of those expressions for α, we may rephrase the question as: why is α what it is?
Of course, that’s a question one shouldn’t try to answer before answering some other, more fundamental, question: how many degrees of freedom are there really? Indeed, we just saw that ke and ε0 are intimately related through some equation, and other constants and parameters are related too. So the question is like: what are the ‘dependent’ and the ‘independent’ variables in this so-called Standard Model?
There is no easy answer to that question. In fact, one of the reasons why I find physics so fascinating is that one cannot easily answer such questions. There are the obvious relationships, of course. For example, the ke = 1/4πε0 relationship, and the context in which they are used (Coulomb’s Law) does, indeed, strongly suggest that both constants are actually part and parcel of the same thing. Identical, I’d say. Likewise, the μ0ε0 = 1/c² relation also suggests there’s only one degree of freedom here, just like there’s only one degree of freedom in that ω/k = c relationship (if we set a value for ω, we have k, and vice versa). But… Well… I am not quite sure how to phrase this, but… What physical constants could be ‘variables’ indeed?
It’s pretty obvious that the various formulas for α cannot answer that question: you could stare at them for days and weeks and months and years really, but I’d suggest you use your time to read more of Feynman’s real Lectures instead. 🙂 One point that may help to come to terms with this question – to some extent, at least – is what I casually mentioned above already: the fine-structure constant is equal to the square of the electron charge expressed in Planck units: α = eP².
Now, that’s very remarkable because Planck units are some kind of ‘natural units’ indeed (for the detail, see my previous post: among other things, it explains what these Planck units really are) and, therefore, it is quite tempting to think that we’ve actually got only one degree of freedom here: α itself. All the rest should follow from it.
[…]
It should… But… Does it?
The answer is: yes and no. To be frank, it’s more no than yes because, as I noted a couple of times already, the fine-structure constant relates a lot of stuff but it’s surely not the only significant number in the Universe. For starters, I said that our E(A to B) formula has two ‘variables’:
1. We have that complex number j, which, as mentioned, is equal to the electron charge expressed in Planck units. [In case you wonder why –eP ≈ –0.08542455 is said to be an amplitude, i.e. a complex number or an ‘arrow’… Well… Complex numbers include the real numbers and, hence, –0.08542455 is both real and complex. When combining ‘arrows’ or, to be precise, when multiplying some complex number with –0.08542455, we will (a) shrink the original arrow to about 8.5% of its original value (8.542455% to be precise) and (b) rotate it over an angle of plus or minus 180 degrees. In other words, we’ll reverse its direction. Hence, using Euler’s notation for complex numbers, we can write: –1 = e^(iπ) = e^(−iπ) and, hence, –0.085 = 0.085·e^(iπ) = 0.085·e^(−iπ). So, in short, yes, j is a complex number, or an ‘arrow’, if you prefer that term.]
2. We also have some real number n in the E(A to B) formula. So what’s the n? Well… Believe it or not, it’s the electron mass! Isn’t that amazing?
You’ll say: “Well… Hmm… I suppose so.” But then you may – and actually should – also wonder: the electron mass? In what units? Planck units again? And are we talking relativistic mass (i.e. its total mass, including the equivalent mass of its kinetic energy) or its rest mass only? And we were talking α here, so can we relate it to α too, just like the electron charge?
These are all very good questions. Let’s start with the second one. We’re talking rather slow-moving electrons here, so the relativistic mass (m) and the rest mass (m0) are more or less the same. Indeed, the Lorentz factor γ in the m = γm0 equation is very close to 1 for electrons moving at their typical speed. So… Well… That question doesn’t matter very much. Really? Yes. OK. Because you’re doubting, I’ll quickly show it to you. What is their ‘typical’ speed?
We know we shouldn’t attach too much importance to the concept of an electron in orbit around some nucleus (we know it’s not like some planet orbiting around some star) and, hence, to the concept of speed or velocity (velocity is speed with direction) when discussing an electron in an atom. The concept of momentum (i.e. velocity combined with mass or energy) is much more relevant. There’s a very easy mathematical relationship that gives us some clue here: the Uncertainty Principle. In fact, we’ll use the Uncertainty Principle to relate the momentum of an electron (p) to the so-called Bohr radius r (think of it as the size of a hydrogen atom) as follows: p ≈ ħ/r. [I’ll come back on this in a moment, and show you why this makes sense.]
Now we also know its kinetic energy (K.E.) is m·v²/2, which we can write as p²/2m. Substituting our p ≈ ħ/r conjecture, we get K.E. = m·v²/2 = ħ²/2m·r². This is equivalent to m²·v² = ħ²/r² (just multiply both sides with 2m). From that, we get v = ħ/m·r. Now, one of the many relations we can derive from the formulas for the fine-structure constant is re = α²·r. [I haven’t shown you that yet, but I will shortly. It’s a really amazing expression. However, as for now, just accept it as a simple formula for interim use in this digression.] Hence, r = re/α². The re factor in this expression is the so-called classical electron radius. So we can now write v = ħ·α²/m·re. Let’s now throw c in: v/c = α²·ħ/m·c·re. However, from that fifth expression for α, we know that ħ/(m·c·re) = 1/α, so we get v/c = α. We have another amazing result here: the v/c ratio for an electron (i.e. its speed expressed as a fraction of the speed of light) is equal to that fine-structure constant α. So that’s about 1/137, so that’s less than 1% of the speed of light. Now… I’ll leave it to you to calculate the Lorentz factor γ but… Well… It’s obvious that it will be very close to 1. 🙂 Hence, the electron’s speed – however we want to visualize that – doesn’t matter much indeed, so we should not worry about relativistic corrections in the formulas.
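Numerically, the whole chain checks out – a quick sketch, using the CODATA values for the constants and the standard value for the Bohr radius:

```python
hbar  = 1.054571817e-34    # reduced Planck constant (J*s)
c     = 2.99792458e8       # speed of light (m/s)
m_e   = 9.1093837015e-31   # electron mass (kg)
alpha = 7.2973525693e-3    # fine-structure constant

r_bohr = 5.29177210903e-11   # Bohr radius (m)
r_e    = alpha**2 * r_bohr   # classical electron radius, via r_e = alpha^2 * r
print(r_e)                   # ~2.8179e-15 m, the accepted value indeed

v = hbar / (m_e * r_bohr)    # from p = hbar/r and p = m*v
print(v / c)                 # ~7.297e-3, i.e. alpha
```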
Let’s now look at the question in regard to the Planck units. If you know nothing at all about them, I would advise you to read what I wrote about them in my previous post. Let me just note we get those Planck units by equating not less than five fundamental physical constants to 1, notably (1) the speed of light, (2) Planck’s (reduced) constant, (3) Boltzmann’s constant, (4) Coulomb’s constant and (5) Newton’s constant (i.e. the gravitational constant). Hence, we have a set of five equations here (c = ħ = kB = ke = G = 1), and so we can solve that to get the five base Planck units, i.e. the Planck length unit, the Planck time unit, the Planck mass unit, the Planck charge unit and, finally (oft forgotten), the Planck temperature unit. The Planck energy unit then follows, as all mass and energy units are directly related because of the mass-energy equivalence relation E = mc², which simplifies to E = m if c is equated to 1. [I could also say something about the relation between temperature and (kinetic) energy, but I won’t, as it would only further confuse you.]
Now, you may or may not remember that the Planck time and length units are unimaginably small, but that the Planck mass unit is actually quite sizable—at the atomic scale, that is. Indeed, the Planck mass is something huge, like the mass of an eyebrow hair, or a flea egg. Is that huge? Yes. Because if you’d want to pack it in a Planck-sized particle, it would make for a tiny black hole. 🙂 No kidding. That’s the physical significance of the Planck mass and the Planck length and, yes, it’s weird. 🙂
Let me give you some values. First, the Planck mass itself: it’s about 2.1765×10⁻⁸ kg. Again, if you think that’s tiny, think again. From the E = mc² equivalence relationship, we get that this is equivalent to 2 giga-joule, approximately. Just to give an idea, that’s like the monthly electricity consumption of an average American family. So that’s huge indeed! 🙂 [Many people think that nuclear energy involves the conversion of mass into energy, but the story is actually more complicated than that. In any case… I need to move on.]
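If you want to verify that claim, the conversion is a one-liner (again, just a sketch):

```python
# Converting the Planck mass to energy via E = mc², and then to kWh.
m_P = 2.1765e-8       # Planck mass (kg)
c   = 2.99792458e8    # speed of light (m/s)
E_P = m_P * c**2
print(E_P)            # ≈ 1.96×10⁹ J, i.e. about 2 giga-joule
print(E_P / 3.6e6)    # ≈ 543 kWh (1 kWh = 3.6×10⁶ J)
```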
Let me now give you the electron mass expressed in the Planck mass unit:
1. Measured in our old-fashioned super-sized SI kilogram unit, the electron mass is me = 9.1×10⁻³¹ kg.
2. The Planck mass is mP = 2.1765×10⁻⁸ kg.
3. Hence, the electron mass expressed in Planck units is meP = me/mP = (9.1×10⁻³¹ kg)/(2.1765×10⁻⁸ kg) = 4.181×10⁻²³.
We can, once again, write that as some function of the fine-structure constant. More specifically, we can write:
meP = α/reP = α/(α²·rP) = 1/(α·rP)
So… Well… Yes: yet another amazing formula involving α.
In this formula, we have reP and rP, which are the (classical) electron radius and the Bohr radius expressed in Planck (length) units respectively. So you can see what’s going on here: we have all kinds of numbers here expressed in Planck units: a charge, a radius, a mass… And we can relate all of them to the fine-structure constant.
Why? Who knows? I don’t. As Leighton puts it: that’s just the way “God pushed His pencil.” 🙂
Note that the beauty of natural units ensures that we get the same number for the (equivalent) energy of an electron. Indeed, from the E = mc² relation, we know the mass of an electron can also be written as 0.511 MeV/c². Hence, the equivalent energy is 0.511 MeV (so that’s, quite simply, the same number but without the 1/c² factor). Now, the Planck energy EP (in eV) is 1.22×10²⁸ eV, so we get EeP = Ee/EP = (0.511×10⁶ eV)/(1.22×10²⁸ eV) ≈ 4.19×10⁻²³, which is, within rounding, exactly the same as the electron mass expressed in Planck units. Isn’t that nice? 🙂
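Again, you can verify that both ratios come out as the same dimensionless number. A sketch with rounded inputs, so expect agreement in the first few digits only:

```python
# The electron mass and energy, both expressed in Planck units.
m_e, m_P = 9.1e-31, 2.1765e-8    # electron and Planck mass (kg)
E_e, E_P = 0.511e6, 1.22e28      # electron and Planck energy (eV)
print(m_e / m_P)                 # ≈ 4.18×10⁻²³
print(E_e / E_P)                 # ≈ 4.19×10⁻²³ (same number, up to rounding)
```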
Now, are all these numbers dimensionless, just like α? The answer to that question is complicated. Yes, and… Well… No:
1. Yes. They’re dimensionless because they measure something in natural units, i.e. Planck units, and, hence, that’s some kind of relative measure indeed so… Well… Yes, dimensionless.
2. No. They’re not dimensionless because they do measure something, like a charge, a length, or a mass, and when you choose some kind of relative measure, you still need to define some gauge, i.e. some kind of standard measure. So there’s some ‘dimension’ involved there.
So what’s the final answer? Well… The Planck units are not dimensionless. All we can say is that they are closely related, physically. I should also add that we’ll use the electron charge and mass (expressed in Planck units) in our amplitude calculations as a simple (dimensionless) number between zero and one. So the correct answer to the question as to whether these numbers have any dimension is: expressing some quantities in Planck units sort of normalizes them, so we can use them directly in dimensionless calculations, like when we multiply and add amplitudes.
Hmm… Well… I can imagine you’re not very happy with this answer but it’s the best I can do. Sorry. I’ll let you further ponder that question. I need to move on.
Note that that 4.181×10⁻²³ is still a very small number (22 zeroes after the decimal point before the first significant digit!), even if it’s like 46 million times larger than the electron mass measured in our conventional SI unit (i.e. 9.1×10⁻³¹ kg). Does such a small number make any sense? The answer is: yes, it does. When we finally start discussing that E(A to B) formula (I’ll give it to you in a moment), you’ll see that a very small number for n makes a lot of sense.
Before diving into it all, let’s first see if that formula for that alpha, that fine-structure constant, still makes sense with me expressed in Planck units. Just to make sure. 🙂 To do that, we need to use the fifth (last) expression for α, i.e. the one with re in it. Now, in my previous post, I also gave some formula for re: re = e²/4πε0mec², which we can re-write as reme = e²/4πε0c². If we substitute that expression for reme in the formula for α, we can calculate α from the electron charge, which indicates both the electron radius and its mass are not some random God-given variables, or “some magic number that comes to us with no understanding by man“, as Feynman – well… Leighton, I guess – puts it. No. They are magic numbers alright, one related to another through the equally ‘magic’ number α, but so I do feel we actually can create some understanding here.
At this point, I’ll digress once again, and insert some quick back-of-the-envelope argument from Feynman’s very serious Caltech Lectures on Physics, in which, as part of the introduction to quantum mechanics, he calculates the so-called Bohr radius from Planck’s constant h. Let me quickly explain: the Bohr radius is, roughly speaking, the size of the simplest atom, i.e. an atom with one electron (so that’s hydrogen really). So it’s not the classical electron radius re. However, both are also related to that ‘magical number’ α. To be precise, if we write the Bohr radius as r, then re = α²r ≈ 0.000053… times r, which we can re-write as:
α = √(re/r)
So that’s yet another amazing formula involving the fine-structure constant. In fact, it’s the formula I used as an ‘interim’ expression to calculate the relative speed of electrons. I just used it without any explanation there, but I am coming back to it here. Alpha again…
Just think about it for a while. In case you’d still doubt the magic of that number, let me write what we’ve discovered so far:
(1) α is the square of the electron charge expressed in Planck units: α = eP².
(2) α is the square root of the ratio of (a) the classical electron radius and (b) the Bohr radius: α = √(re/r). You’ll see this more often written as re = α²r. Also note that this is an equation that does not depend on the units, in contrast to equations 1 (above), and 4 and 5 (below), which require you to switch to Planck units. It’s the square of a ratio and, hence, the units don’t matter. They fall away.
(3) α is the (relative) speed of an electron: α = v/c. [The relative speed is the speed as measured against the speed of light. Note that the ‘natural’ unit of speed in the Planck system of units is equal to c. Indeed, if you divide one Planck length by one Planck time unit, you get (1.616×10⁻³⁵ m)/(5.391×10⁻⁴⁴ s) = 2.998×10⁸ m/s = c. However, this is another equation, just like (2), that does not depend on the units: we can express v and c in whatever unit we want, as long as we’re consistent and express both in the same units.]
(4) Finally – I’ll show you in a moment – α is also equal to the product of (a) the electron mass (which I’ll simply write as me here) and (b) the classical electron radius re (if both are expressed in Planck units): α = me·re. Now I think that’s, perhaps, the most amazing of all of the expressions for α. If you don’t think that’s amazing, I’d really suggest you stop trying to study physics. 🙂
Note that, from (2) and (4), we find that:
(5) The electron mass (in Planck units) is equal to me = α/re = α/(α²r) = 1/(αr). So that gives us an expression, using α once again, for the electron mass as a function of the Bohr radius r expressed in Planck units.
Finally, we can also substitute (1) in (5) to get:
(6) The electron mass (in Planck units) is equal to me = α/re = eP²/re. Using the Bohr radius, we get me = 1/(αr) = 1/(eP²·r).
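Don’t take my word for any of this: the relations above are easy to verify numerically. Here’s a little script that does it (a sketch; I use CODATA values for the constants and the standard values for the Planck units):

```python
# Verifying the α relations numerically. All lengths are divided by the
# Planck length and all masses by the Planck mass, so everything below
# is a dimensionless number in Planck units.
l_P, m_P = 1.616255e-35, 2.176434e-8      # Planck length (m) and mass (kg)
q_P      = 1.875545956e-18                # Planck charge (C)
alpha    = 7.2973525693e-3

e_P = 1.602176634e-19 / q_P               # electron charge (Planck units)
r_e = 2.8179403262e-15 / l_P              # classical electron radius (Planck units)
r   = 5.29177210903e-11 / l_P             # Bohr radius (Planck units)
m_e = 9.1093837015e-31 / m_P              # electron mass (Planck units)

print(alpha, e_P**2)                      # (1): α = eP²
print(alpha, (r_e / r)**0.5)              # (2): α = √(re/r)
print(alpha, m_e * r_e)                   # (4): α = me·re
print(m_e, alpha / r_e, 1 / (alpha * r))  # (5): me = α/re = 1/αr
```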
So… As you can see, this fine-structure constant really links ALL of the fundamental properties of the electron: its charge, its radius, its distance to the nucleus (i.e. the Bohr radius), its velocity, its mass (and, hence, its energy),… In short,
IT IS ALL IN ALPHA!
Now that should answer the question in regard to the degrees of freedom we have here, doesn’t it? It looks like we’ve got only one degree of freedom here. Indeed, if we’ve got some value for α, then we have the electron charge, and from the electron charge, we can calculate the Bohr radius r (as I will show below), and if we have r, we have me and re. And then we can also calculate v, which gives us its momentum (mv) and its kinetic energy (mv²/2). In short,
ALPHA GIVES US EVERYTHING!
Isn’t that amazing? Hmm… You should reserve your judgment as for now, and carefully go over all of the formulas above and verify my statement. If you do that, you’ll probably struggle to find the Bohr radius from the charge (i.e. from α). So let me show you how you do that, because it will also show you why you should, indeed, reserve your judgment. In other words, I’ll show you why alpha does NOT give us everything! The argument below will, finally, prove some of the formulas that I didn’t prove above. Let’s go for it:
1. If we assume that (a) an electron takes some space – which I’ll denote by r 🙂 – and (b) that it has some momentum p because of its mass m and its velocity v, then the ΔxΔp = ħ relation (i.e. the Uncertainty Principle in its roughest form) suggests that the order of magnitude of r and p should be related in the very same way. Hence, let’s just boldly write r ≈ ħ/p and see what we can do with that. So we equate Δx with r and Δp with p. As Feynman notes, this is really more like a ‘dimensional analysis’ (he obviously means something very ‘rough’ with that) and so we don’t care about factors like 2 or 1/2. [Indeed, note that the more precise formulation of the Uncertainty Principle is σx·σp ≥ ħ/2.] In fact, we didn’t even bother to define r very rigorously. We just don’t care about precise statements at this point. We’re only concerned about orders of magnitude. [If you’re appalled by the rather rude approach, I am sorry for that, but just try to go along with it.]
2. From our discussions on energy, we know that the kinetic energy is mv²/2, which we can write as p²/2m so we get rid of the velocity factor. [Why? Because we can’t really imagine what it is anyway. As I said a couple of times already, we shouldn’t think of electrons as planets orbiting around some star. That model doesn’t work.] So… What’s next? Well… Substituting our p ≈ ħ/r conjecture, we get K.E. = ħ²/2mr². So that’s a formula for the kinetic energy. Next is potential.
3. Unfortunately, the discussion on potential energy is a bit more complicated. You’ll probably remember that we had an easy and very comprehensible formula for the energy that’s needed (i.e. the work that needs to be done) to bring two charges together from a large distance (i.e. infinity). Indeed, we derived that formula directly from Coulomb’s Law (and Newton’s law of force) and it’s U = q1q2/4πε0r12. [If you think I am going too fast, sorry, please check for yourself by reading my other posts.] Now, we’re talking about the size of an atom here, so one charge is the proton (+e) and the other is the electron (–e), so the potential energy is U = P.E. = –e²/4πε0r, with r the ‘distance’ between the proton and the electron—so that’s the Bohr radius we’re looking for!
[In case you’re struggling a bit with those minus signs when talking potential energy – I am not ashamed to admit I did! – let me quickly help you here. It has to do with our reference point: the reference point for measuring potential energy is at infinity, and it’s zero there (that’s just our convention). Now, to separate the proton and the electron, we’d have to do quite a lot of work. To use an analogy: imagine we’re somewhere deep down in a cave, and we have to climb back to the zero level. You’ll agree that’s likely to involve some sweat, don’t you? Hence, the potential energy associated with us being down in the cave is negative. Likewise, if we write the potential energy between the proton and the electron as U(r), and the potential energy at the reference point as U(∞) = 0, then the work to be done to separate the charges, i.e. the potential difference U(∞) – U(r), will be positive. So U(∞) – U(r) = 0 – U(r) > 0 and, hence, U(r) < 0. If you still don’t ‘get’ this, think of the electron being in some (potential) well, i.e. below the zero level, and so its potential energy is less than zero. Huh? Sorry. I have to move on. :-)]
4. We can now write the total energy (which I’ll denote by E, but don’t confuse it with the electric field vector!) as
E = K.E. + P.E. = ħ²/2mr² – e²/4πε0r
Now, the electron (whatever it is) is, obviously, in some kind of equilibrium state. Why is that obvious? Well… Otherwise our hydrogen atom wouldn’t or couldn’t exist. 🙂 Hence, it’s in some kind of energy ‘well’ indeed, at the bottom. Such an equilibrium point ‘at the bottom’ is characterized by its derivative (with respect to whatever variable) being equal to zero. Now, the only ‘variable’ here is r (all the other symbols are physical constants), so we have to solve for dE/dr = 0. Writing it all out yields:
dE/dr = –ħ²/mr³ + e²/4πε0r² = 0 ⇔ r = 4πε0ħ²/me²
You’ll say: so what? Well… We’ve got a nice formula for the Bohr radius here, and we got it in no time! 🙂 But the analysis was rough, so let’s check if it’s any good by putting the values in:
r = 4πε0ħ²/me²
= [(1/(9×10⁹)) C²/N·m²·(1.055×10⁻³⁴ J·s)²]/[(9.1×10⁻³¹ kg)·(1.6×10⁻¹⁹ C)²]
= 53×10⁻¹² m = 53 pico-meter (pm)
So what? Well… Double-check it on the Internet: the Bohr radius is, effectively, about 53 trillionths of a meter indeed! So we’re right on the spot!
[In case you wonder about the units, note that mass is a measure of inertia: one kg is the mass of an object which, subject to a force of 1 newton, will accelerate at the rate of 1 m/s per second. Hence, we write F = m·a, which is equivalent to m = F/a. Hence, the kg, as a unit, is equivalent to 1 N/(m/s²). If you make this substitution, we get r in the unit we want to see: [(C²/N·m²)·(N²·m²·s²)]/[(N·s²/m)·C²] = m.]
Moreover, if we take that value for r and put it in the (total) energy formula above, we’d find that the energy of the electron is –13.6 eV. [Don’t forget to convert from joule to electronvolt when doing the calculation!] Now you can check that on the Internet too: 13.6 eV is exactly the amount of energy that’s needed to ionize a hydrogen atom (i.e. the energy that’s needed to kick the electron out of that energy well)!
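By the way, you don’t have to trust my arithmetic here either: both the 53 pm radius and the –13.6 eV energy are easy to reproduce (a sketch, using CODATA values):

```python
# Reproducing the rough Bohr-radius calculation: r = 4πε0ħ²/me² and the
# total energy E(r) = ħ²/2mr² − e²/4πε0r at that radius.
import math
eps0 = 8.8541878128e-12    # electric constant (C²/N·m²)
hbar = 1.054571817e-34     # reduced Planck constant (J·s)
m    = 9.1093837015e-31    # electron mass (kg)
e    = 1.602176634e-19     # elementary charge (C)

r = 4 * math.pi * eps0 * hbar**2 / (m * e**2)
E = hbar**2 / (2 * m * r**2) - e**2 / (4 * math.pi * eps0 * r)
print(r)        # ≈ 5.29×10⁻¹¹ m, i.e. ≈ 53 pm
print(E / e)    # ≈ −13.6 eV (dividing by e converts joule to eV)
```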
Wow! Isn’t it great that such simple calculations yield such great results? 🙂 [Of course, you’ll note that the omission of the 1/2 factor in the Uncertainty Principle was quite strategic. :-)] Using the r = 4πε0ħ²/me² formula for the Bohr radius, you can now easily check the re = α²r formula. You should find what we jotted down already: the classical electron radius is equal to re = e²/4πε0mec². To be precise, re = α²r = (53×10⁻⁶)·(53×10⁻¹² m) ≈ 2.8×10⁻¹⁵ m. Now that’s again something you should check on the Internet. Guess what? […] It’s right on the spot again. 🙂
We can now also check that α = me·re formula: α = me·re = 4.181×10⁻²³ times… Hey! Wait! We have to express re in Planck units as well, of course! Now, (2.81794×10⁻¹⁵ m)/(1.616×10⁻³⁵ m) ≈ 1.7438×10²⁰. So now we get 4.181×10⁻²³ times 1.7438×10²⁰ = 7.29×10⁻³ = 0.00729 ≈ 1/137. Bingo! We got the magic number once again. 🙂
So… Well… Doesn’t that confirm we actually do have it all with α?
Well… Yes and no… First, you should note that I had to use ħ in that calculation of the Bohr radius. Moreover, the other physical constants (most notably c and the Coulomb constant) were actually there as well, ‘in the background’ so to speak, because one needs them to derive the formulas we used above. And then we have the equations themselves, of course, most notably that Uncertainty Principle… So… Well…
It’s not like God gave us one number only (α) and that all the rest flows out of it. We have a whole bunch of ‘fundamental’ relations and ‘fundamental’ constants here.
Having said that, that caveat still does not diminish the magic of alpha.
Hmm… Now you’ll wonder: how many? How many constants do we need in all of physics?
Well… I’d say, you should not only ask about the constants: you should also ask about the equations: how many equations do we need in all of physics? [Just for the record, I had to smile when the Hawking of the movie says that he’s actually looking for one formula that sums up all of physics. Frankly, that’s a nonsensical statement. Hence, I think the real Hawking never said anything like that. Or, if he did, that it was one of those statements one needs to interpret very carefully.]
But let’s look at a few constants indeed. For example, if we have c, h and α, then we can calculate the electric charge e and, hence, the electric constant ε0 = e²/2αhc. From that, we get Coulomb’s constant ke, because ke is defined as 1/4πε0… But…
Hey! Wait a minute! How do we know that ke = 1/4πε0? Well… From experiment. But… Yes? That means 1/4π is some fundamental proportionality coefficient too, isn’t it?
Wow! You’re smart. That’s a good and valid remark. In fact, we use the so-called reduced Planck constant ħ in a number of calculations, and so that involves a 2π factor too (ħ = h/2π). Hence… Well… Yes, perhaps we should consider 2π as some fundamental constant too! And, then, well… Now that I think of it, there’s a few other mathematical constants out there, like Euler’s number e, for example, which we use in complex exponentials.
# ?!?
I am joking, right? I am not saying that 2π and Euler’s number are fundamental ‘physical’ constants, am I? [Note that it’s a bit of a nuisance we’re also using the symbol for Euler’s number, but so we’re not talking the electron charge here: we’re talking that 2.71828…etc number that’s used in so-called ‘natural’ exponentials and logarithms.]
Well… Yes and no. They’re mathematical constants indeed, rather than physical, but… Well… I hope you get my point. What I want to show here, is that it’s quite hard to say what’s fundamental and what isn’t. We can actually pick and choose a bit among all those constants and all those equations. As one physicist puts it: it depends on how we slice it. The one thing we know for sure is that a great many things are related, in a physical way (α connects all of the fundamental properties of the electron, for example) and/or in a mathematical way (2π connects not only the circumference of the unit circle with the radius but quite a few other constants as well!), but… Well… What to say? It’s a tough discussion and I am not smart enough to give you an unambiguous answer. From what I gather on the Internet, when looking at the whole Standard Model (including the strong force, the weak force and the Higgs field), we’ve got a few dozen physical ‘fundamental’ constants, and then a few mathematical ones as well.
That’s a lot, you’ll say. Yes. At the same time, it’s not an awful lot. Whatever number it is, it does raise a very fundamental question: why are they what they are? That brings us back to that ‘fine-tuning’ problem. Now, I can’t make this post too long (it’s way too long already), so let me just conclude this discussion by copying Wikipedia on that question, because what it has on this topic is not so bad:
“Some physicists have explored the notion that if the physical constants had sufficiently different values, our Universe would be so radically different that intelligent life would probably not have emerged, and that our Universe therefore seems to be fine-tuned for intelligent life. The anthropic principle states a logical truism: the fact of our existence as intelligent beings who can measure physical constants requires those constants to be such that beings like us can exist.”
I like this. But the article then adds the following, which I do not like so much, because I think it’s a bit too ‘frivolous’:
“There are a variety of interpretations of the constants’ values, including that of a divine creator (the apparent fine-tuning is actual and intentional), or that ours is one universe of many in a multiverse (e.g. the many-worlds interpretation of quantum mechanics), or even that, if information is an innate property of the universe and logically inseparable from consciousness, a universe without the capacity for conscious beings cannot exist.”
Hmm… As said, I am quite happy with the logical truism: we are there because alpha (and a whole range of other stuff) is what it is, and we can measure alpha (and a whole range of other stuff) as what it is, because… Well… Because we’re here. Full stop. As for the ‘interpretations’, I’ll let you think about that for yourself. 🙂
I need to get back to the lesson. Indeed, this was just a ‘digression’. My post was about the three fundamental events or actions in quantum electrodynamics, and so I was talking about that E(A to B) formula. However, I had to do that digression on alpha to ensure you understand what I want to write about that. So let me now get back to it. End of digression. 🙂
The E(A to B) formula
Indeed, I must assume that, with all these digressions, you are truly despairing now. Don’t. We’re there! We’re finally ready for the E(A to B) formula! Let’s go for it.
We’ve now got those two numbers measuring the electron charge and the electron mass in Planck units respectively. They’re fundamental indeed and so let’s loosen up on notation and just write them as e and m respectively. Let me recap:
1. The value of e is approximately –0.08542455, and it corresponds to the so-called junction number j, which is the amplitude for an electron-photon coupling. When multiplying it with another amplitude (to find the amplitude for an event consisting of two sub-events, for example), it corresponds to a ‘shrink’ to less than one-tenth (something like 8.5% indeed, corresponding to the magnitude of e) and a ‘rotation’ (or a ‘turn’) over 180 degrees, as mentioned above.
Please note what’s going on here: we have a physical quantity, the electron charge (expressed in Planck units), and we use it in a quantum-mechanical calculation as a dimensionless (complex) number, i.e. as an amplitude. So… Well… That’s what physicists mean when they say that the charge of some particle (usually the electric charge but, in quantum chromodynamics, it will be the ‘color’ charge of a quark) is a ‘coupling constant’.
2. We also have m, the electron mass, and we’ll use it in the same way, i.e. as some dimensionless amplitude. As compared to j, it is a very tiny number: approximately 4.181×10⁻²³. So if you look at it as an amplitude, indeed, then it corresponds to an enormous ‘shrink’ (but no turn) of the amplitude(s) that we’ll be combining it with.
So… Well… How do we do it?
Well… At this point, Leighton goes a bit off-track. Just a little bit. 🙂 From what he writes, it’s obvious that he assumes the frequency (or, what amounts to the same, the de Broglie wavelength) of an electron is just like the frequency of a photon. Frankly, I just can’t imagine why and how Feynman let this happen. It’s wrong. Plain wrong. As I mentioned in my introduction already, an electron traveling through space is not like a photon traveling through space.
For starters, an electron is much slower (because it’s a matter-particle: hence, it’s got mass). Secondly, the de Broglie wavelength and/or frequency of an electron is not like that of a photon. For example, if we take an electron and a photon having the same energy, let’s say 1 eV (that corresponds to infrared light), then the de Broglie wavelength of the electron will be 1.23 nano-meter (i.e. 1.23 billionths of a meter). Now that’s about one thousand times smaller than the wavelength of our 1 eV photon, which is about 1240 nm. You’ll say: how is that possible? If they have the same energy, then the ν = E/h relation should give the same frequency and, hence, the same wavelength, no?
Well… No! Not at all! Because an electron, unlike the photon, has a rest mass indeed – measured at 0.511 MeV/c², to be precise (note the rather particular MeV/c² unit: it comes from the E = mc² formula) – one should use a different energy value! Indeed, we should include the rest mass energy, which is 0.511 MeV. So, almost all of the energy here is rest mass energy! There’s also another complication. For the photon, there is an easy relationship between the wavelength and the frequency: it has no mass and, hence, all its energy is kinetic, or movement so to say, and so we can use that ν = E/h relationship to calculate its frequency ν: it’s equal to ν = E/h = (1 eV)/(4.13567×10⁻¹⁵ eV·s) ≈ 0.242×10¹⁵ Hz = 242 tera-hertz (1 THz = 10¹² oscillations per second). Now, knowing that light travels at the speed of light, we can check the result by calculating the wavelength using the λ = c/ν relation. Let’s do it: (2.998×10⁸ m/s)/(242×10¹² Hz) ≈ 1240 nm. So… Yes, done!
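Here’s that photon calculation in a couple of lines of Python, in case you want to play with the numbers yourself (a sketch; I use h in eV·s because the energy is in eV):

```python
# Frequency and wavelength of a 1 eV photon: ν = E/h and λ = c/ν.
h = 4.135667696e-15    # Planck constant (eV·s)
c = 2.99792458e8       # speed of light (m/s)
E = 1.0                # photon energy (eV)

nu  = E / h
lam = c / nu
print(nu)    # ≈ 2.42×10¹⁴ Hz, i.e. 242 THz
print(lam)   # ≈ 1.24×10⁻⁶ m, i.e. ≈ 1240 nm
```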
But so we’re talking photons here. For the electron, the story is much more complicated. That wavelength I mentioned was calculated using the other of the two de Broglie relations: λ = h/p. So that uses the momentum of the electron which, as you know, is the product of its mass (m) and its velocity (v): p = mv. You can amuse yourself and check if you find the same wavelength (1.23 nm): you should! From the other de Broglie relation, f = E/h, you can also calculate its frequency: for an electron moving at non-relativistic speeds, it’s about 0.123×10²¹ Hz, so that’s like 500,000 times the frequency of the photon we were looking at! When multiplying the frequency and the wavelength, we should get its speed. However, that’s where we get in trouble. Here’s the problem with matter waves: they have a so-called group velocity and a so-called phase velocity. The idea is illustrated below: the green dot travels with the wave packet – and, hence, its velocity corresponds to the group velocity – while the red dot travels with the oscillation itself, and so that’s the phase velocity. [You should also remember, of course, that the matter wave is some complex-valued wavefunction, so we have both a real as well as an imaginary part oscillating and traveling through space.]
To be precise, the phase velocity will be superluminal. Indeed, using the usual relativistic formulas, we can write that p = γm0v and E = γm0c², with v the (classical) velocity of the electron and c what it always is, i.e. the speed of light. Hence, λ = h/γm0v and f = γm0c²/h, and so λf = c²/v. Because v is (much) smaller than c, we get a superluminal velocity. However, that’s the phase velocity indeed, not the group velocity, which corresponds to v. OK… I need to end this digression.
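Before I do, here is the electron side of the comparison in code, so you can see both the 1.23 nm wavelength and that superluminal phase velocity come out (a sketch; I use the non-relativistic momentum, which is fine at these energies):

```python
# De Broglie wavelength of a 1 eV (kinetic energy) electron, λ = h/p,
# and the phase velocity λf = c²/v.
import math
h  = 6.62607015e-34     # Planck constant (J·s)
m  = 9.1093837015e-31   # electron mass (kg)
eV = 1.602176634e-19    # joule per electronvolt
c  = 2.99792458e8       # speed of light (m/s)

p = math.sqrt(2 * m * 1.0 * eV)   # p from K.E. = p²/2m = 1 eV
v = p / m                         # classical (group) velocity
print(h / p)       # ≈ 1.23×10⁻⁹ m, i.e. 1.23 nm
print(v)           # ≈ 5.9×10⁵ m/s, so much slower than light
print(c**2 / v)    # phase velocity ≈ 1.5×10¹¹ m/s, i.e. superluminal
```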
So what? Well, to make a long story short, the ‘amplitude framework’ for electrons is different. Hence, the story that I’ll be telling here is different from what you’ll read in Feynman’s QED. I will use his drawings, though, and his concepts. Indeed, despite my misgivings above, the conceptual framework is sound, and so the corrections to be made are relatively minor.
So… We’re looking at E(A to B), i.e. the amplitude for an electron to go from point A to B in spacetime, and I said the conceptual framework is exactly the same as that for a photon. Hence, the electron can follow any path really. It may go in a straight line and travel at a speed that’s consistent with what we know of its momentum (p), but it may also follow other paths. So, just like the photon, we’ll have some so-called propagator function, which gives you amplitudes based on the distance in space as well as in the distance in ‘time’ between two points. Now, Ralph Leighton identifies that propagator function with the propagator function for the photon, i.e. P(A to B), but that’s wrong: it’s not the same.
The propagator function for an electron depends on its mass and its velocity, and/or on the combination of both (like its momentum p = mv and/or its kinetic energy: K.E. = mv²/2 = p²/2m). So we have a different propagator function here. However, I’ll use the same symbol for it: P(A to B).
So, the bottom line is that, because of the electron’s mass (which, remember, is a measure for inertia), momentum and/or kinetic energy (which, remember, are conserved in physics), the straight line is definitely the most likely path, but (big but!), just like the photon, the electron may follow some other path as well.
So how do we formalize that? Let’s first associate an amplitude P(A to B) with an electron traveling from point A to B in a straight line and in a time that’s consistent with its velocity. Now, as mentioned above, the P here stands for propagator function, not for photon, so we’re talking a different P(A to B) here than that P(A to B) function we used for the photon. Sorry for the confusion. 🙂 The left-hand diagram below then shows what we’re talking about: it’s the so-called ‘one-hop flight’, and so that’s what the P(A to B) amplitude is associated with.
Now, the electron can follow other paths. For photons, we said the amplitude depended on the spacetime interval I: when negative or positive (i.e. paths that are not associated with the photon traveling in a straight line and/or at the speed of light), the contribution of those paths to the final amplitudes (or ‘final arrow’, as it was called) was smaller.
For an electron, we have something similar, but it’s modeled differently. We say the electron could take a ‘two-hop flight’ (via point C or C’), or a ‘three-hop flight’ (via D and E) from point A to B. Now, it makes sense that these paths should be associated with amplitudes that are much smaller. Now that’s where that n-factor comes in. We just put some real number n in the formula for the amplitude for an electron to go from A to B via C, which we write as:
P(A to C)∗n²∗P(C to B)
Note what’s going on here. We multiply two amplitudes, P(A to C) and P(C to B), which is OK, because that’s what the rules of quantum mechanics tell us: if an ‘event’ consists of two sub-events, we need to multiply the amplitudes (not the probabilities) in order to get the amplitude that’s associated with both sub-events happening. However, we add an extra factor: n². Note that it must be some very small number because we have lots of alternative paths and, hence, they should not be very likely! So what’s the n? And why n² instead of just n?
Well… Frankly, I don’t know. Ralph Leighton boldly equates n to the mass of the electron. Now, because he obviously means the mass expressed in Planck units, that’s the same as saying n is the electron’s energy (again, expressed in Planck’s ‘natural’ units), so n should be that number m = meP = EeP = 4.181×10−23. However, I couldn’t find any confirmation on the Internet, or elsewhere, of the suggested n = m identity, so I’ll assume n = m indeed, but… Well… Please check for yourself. It seems the answer is to be found in a mathematical theory that helps physicists to actually calculate j and n from experiment. It’s referred to as perturbation theory, and it’s the next thing on my study list. As for now, however, I can’t help you much. I can only note that the equation makes sense.
Of course, it does: inserting a tiny little number n, close to zero, ensures that those other amplitudes don’t contribute too much to the final ‘arrow’. And it also makes a lot of sense to associate it with the electron’s mass: if mass is a measure of inertia, then it should be some factor reducing the amplitude that’s associated with the electron following such crooked path. So let’s go along with it, and see what comes out of it.
A three-hop flight is even weirder and uses that n² factor two times:
P(A to E)∗n²∗P(E to D)∗n²∗P(D to B)
So we have an (n²)² = n⁴ factor here, which is good, because two extra hops should be much less likely than one. So what do we get? Well… (4.181×10⁻²³)⁴ ≈ 305×10⁻⁹². Pretty tiny, huh? 🙂 Of course, any point in space is a potential hop for the electron’s flight from point A to B and, hence, there’s a lot of paths and a lot of amplitudes (or ‘arrows’ if you want), which, again, is consistent with a very tiny value for n indeed.
So, to make a long story short, E(A to B) will be a giant sum (i.e. some kind of integral indeed) of a lot of different ways an electron can go from point A to B. It will be a series of terms P(A to B) + P(A to C)∗n²∗P(C to B) + P(A to E)∗n²∗P(E to D)∗n²∗P(D to B) + … for all possible intermediate points C, D, E, and so on.
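Just to make the structure of that giant sum concrete, here’s a toy version in Python. I must stress this is only an illustration of the bookkeeping: the P() below is a made-up placeholder, not the actual propagator function.

```python
# A toy E(A to B) sum: the one-hop flight plus a few two-hop flights.
# P() is a hypothetical stand-in amplitude; the real propagator depends
# on the spacetime interval between the two points.
n = 4.181e-23    # the electron mass in Planck units, used as the n factor

def P(a, b):
    return complex(0.1, 0.05)    # placeholder amplitude ('arrow')

intermediates = ["C", "C'", "D", "E"]

E_AB = P("A", "B")                        # one-hop flight
for X in intermediates:                   # two-hop flights via X
    E_AB += P("A", X) * n**2 * P(X, "B")
print(E_AB)    # the n² factor makes the extra terms utterly negligible
```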
What about the j? The junction number or coupling constant. How does that show up in the E(A to B) formula? Well… Those alternative paths with hops here and there are actually the easiest bit of the whole calculation. Apart from taking some strange path, electrons can also emit and/or absorb photons during the trip. In fact, they’re doing that constantly actually. Indeed, the image of an electron ‘in orbit’ around the nucleus is that of an electron exchanging so-called ‘virtual’ photons constantly, as illustrated below. So our image of an electron absorbing and then emitting a photon (see the diagram on the right-hand side) is really like the tiny tip of a giant iceberg: most of what’s going on is underneath! So that’s where our junction number j comes in, i.e. the charge (e) of the electron.
So, when you hear that a coupling constant is actually equal to the charge, then this is what it means: you should just note it’s the charge expressed in Planck units. But it’s a deep connection, isn’t it? When everything is said and done, a charge is something physical, but so here, in these amplitude calculations, it just shows up as some dimensionless negative number, used in multiplications and additions of amplitudes. Isn’t that remarkable?
The situation becomes even more complicated when more than one electron is involved. For example, two electrons can go in a straight line from points 1 and 2 to points 3 and 4 respectively, but there are two ways in which this can happen, and they might exchange photons along the way, as shown below. If there are two alternative ways in which one event can happen, you know we have to add amplitudes, rather than multiply them. Hence, the formula for E(A to B) becomes even more complicated.
Moreover, a single electron may first emit and then absorb a photon itself, so there’s no need for other particles to be there to have lots of j factors in our calculation. In addition, that photon may briefly disintegrate into an electron and a positron, which then annihilate each other to again produce a photon: in case you wondered, that’s what those little loops in those diagrams depicting the exchange of virtual photons are supposed to represent. So, every single junction (i.e. every emission and/or absorption of a photon) involves a multiplication with that junction number j, so if there are two couplings involved, we have a j² factor, and so that’s 0.08542455² = α ≈ 0.0073. Four couplings implies a factor of 0.08542455⁴ ≈ 0.000053.
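In other words, every extra pair of couplings shrinks the contribution by another factor of α. You can check the numbers directly:

```python
# Powers of the junction number j: two couplings give α, four give α².
j = -0.08542455
print(j**2)    # ≈ 0.0073 = α
print(j**4)    # ≈ 0.000053 = α²
```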
Just as an example, I copy two diagrams involving four, five or six couplings indeed. They all have some ‘incoming’ photon, because Feynman uses them to explain something else (the so-called magnetic moment of an electron), but it doesn’t matter: the same illustrations can serve multiple purposes.
Now, it’s obvious that the contributions of the alternatives with many couplings add almost nothing to the final amplitude – just like the ‘many-hop’ flights add almost nothing – but… Well… As tiny as these contributions are, they are all there, and so they all have to be accounted for. So… Yes. You can easily appreciate how messy it all gets, especially in light of the fact that there are so many points that can serve as a ‘hop’ or a ‘coupling’ point!
So… Well… Nothing. That’s it! I am done! I realize this has been another long and difficult story, but I hope you appreciated it, and that it shed some light on what’s really behind those simplified stories of what quantum mechanics is all about. It’s all weird and, admittedly, not so easy to understand, but I wouldn’t say an understanding is really beyond the reach of us, common mortals. 🙂
Post scriptum: When you’ve reached here, you may wonder: so where’s the final formula then for E(A to B)? Well… I have no easy formula for you. From what I wrote above, it should be obvious that we’re talking some really awful-looking integral and, because it’s so awful, I’ll let you find it yourself. 🙂
I should also note another reason why I am reluctant to identify n with m. The formulas in Feynman’s QED are definitely not the standard ones. The more standard formulations will use the gauge coupling parameter about which I talked already. I sort of discussed it, indirectly, in my first comments on Feynman’s QED, when I criticized some other part of the book, notably its explanation of the phenomenon of diffraction of light, which basically boiled down to: “When you try to squeeze light too much [by forcing it to go through a small hole], it refuses to cooperate and begins to spread out”, because “there are not enough arrows representing alternative paths.”
Now that raises a lot of questions, and very sensible ones, because that simplification is nonsensical. Not enough arrows? That statement doesn’t make sense. We can subdivide space in as many paths as we want, and probability amplitudes don’t take up any physical space. We can cut up space in smaller and smaller pieces (so we analyze more paths within the same space). The consequence – in terms of arrows – is that the directions of our arrows won’t change but their length will be much, much smaller as we’re analyzing many more paths. That’s because of the normalization constraint. However, when adding them all up – a lot of very tiny ones, or a smaller bunch of bigger ones – we’ll still get the same ‘final’ arrow. That’s because the direction of those arrows depends on the length of the path, and the length of the path doesn’t change simply because we suddenly decide to use some other ‘gauge’.
Indeed, the real question is: what’s a ‘small’ hole? What’s ‘small’ and what’s ‘large’ in quantum electrodynamics? Now, I gave an intuitive answer to that question in that post of mine, and I think it’s more accurate than Feynman’s, or Leighton’s. The answer to that question is: there’s some kind of natural ‘gauge’, and it’s related to the wavelength. So the wavelength of a photon, or an electron, in this case, comes with some kind of scale indeed. That’s why the fine-structure constant is often written in yet another form:
α = 2πre/λe = re·ke
λe and ke are the Compton wavelength and wavenumber of the electron respectively (so ke is not the Coulomb constant here). The Compton wavelength λe = h/mec is the de Broglie wavelength λ = h/p of an electron whose momentum would equal mec. [You’ll find that Wikipedia defines it as “the wavelength that’s equivalent to the wavelength of a photon whose energy is the same as the rest-mass energy of the electron”, but that’s a very confusing definition, I think.]
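You can verify that α = 2πre/λe relation numerically too (a sketch, using CODATA values):

```python
# Checking α = 2πre/λe, with λe = h/(me·c) the electron's Compton wavelength.
import math
h   = 6.62607015e-34      # Planck constant (J·s)
m_e = 9.1093837015e-31    # electron mass (kg)
c   = 2.99792458e8        # speed of light (m/s)
r_e = 2.8179403262e-15    # classical electron radius (m)

lam_e = h / (m_e * c)              # Compton wavelength ≈ 2.43×10⁻¹² m
print(2 * math.pi * r_e / lam_e)   # ≈ 0.0073 ≈ α
```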
The point to note is that the spatial dimension in both the analysis of photons as well as of matter waves, especially in regard to studying diffraction and/or interference phenomena, is related to the frequencies, wavelengths and/or wavenumbers of the wavefunctions involved. There’s a certain ‘gauge’ involved indeed, i.e. some measure that is relative, like the gauge pressure illustrated below. So that’s where that gauge parameter g comes in. And the fact that it’s yet another number that’s closely related to that fine-structure constant is… Well… Again… That alpha number is a very magic number indeed… 🙂
Post scriptum (5 October 2015):
Much stuff in physics is quite ‘magical’, but it’s never ‘too magical’. I mean: there’s always an explanation. So there is a very logical explanation for the above-mentioned deep connection between the charge of an electron, its energy and/or mass, its various radii (or physical dimensions) and the coupling constant too. I wrote a piece about that, much later than when I wrote the piece above. I would recommend you read that piece too. It’s a piece in which I do take the magic out of ‘God’s number’. Understanding it involves a deep understanding of electromagnetism, however, and that requires some effort. It’s surely worth the effort, though.
# The Strange Theory of Light and Matter (II)
If we limit our attention to the interaction between light and matter (i.e. the behavior of photons and electrons only—so we’re not talking quarks and gluons here), then the ‘crazy ideas’ of quantum mechanics can be summarized as follows:
1. At the atomic or sub-atomic scale, we can no longer look at light as an electromagnetic wave. It consists of photons, and photons come in blobs. Hence, to some extent, photons are ‘particle-like’.
2. At the atomic or sub-atomic scale, electrons don’t behave like particles. For example, if we send them through a slit that’s small enough, we’ll observe a diffraction pattern. Hence, to some extent, electrons are ‘wave-like’.
In short, photons aren’t waves, but they aren’t particles either. Likewise, electrons aren’t particles, but they aren’t waves either. They are neither. The weirdest thing of all, perhaps, is that, while light and matter are two very different things in our daily experience – light and matter are opposite concepts, I’d say, just like particles and waves are opposite concepts – they look pretty much the same in quantum physics: they are both represented by a wavefunction.
Let me immediately make a little note on terminology here. The term ‘wavefunction’ is a bit ambiguous, in my view, because it makes one think of a real wave, like a water wave, or an electromagnetic wave. Real waves are described by real-valued wave functions describing, for example, the motion of a ball on a spring, or the displacement of a gas (e.g. air) as a sound wave propagates through it, or – in the case of an electromagnetic wave – the strength of the electric and magnetic field.
You may have questions about the ‘reality’ of fields, but electromagnetic waves – i.e. the classical description of light – are quite ‘real’ too, even if:
1. Light doesn’t travel in a medium (like water or air: there is no aether), and
2. The magnitude of the electric and magnetic field (they are usually denoted by E and B) depend on your reference frame: if you calculate the fields using a moving coordinate system, you will get a different mixture of E and B. Therefore, E and B may not feel very ‘real’ when you look at them separately, but they are very real when we think of them as representing one physical phenomenon: the electromagnetic interaction between particles. So the E and B mix is, indeed, a dual representation of one reality. I won’t dwell on that, as I’ve done that in another post of mine.
How ‘real’ is the quantum-mechanical wavefunction?
The quantum-mechanical wavefunction is not like any of these real waves. In fact, I’d rather use the term ‘probability wave’ but, apparently, that’s used only by bloggers like me 🙂 and so it’s not very scientific. That’s for a good reason, because it’s not quite accurate either: the wavefunction in quantum mechanics represents probability amplitudes, not probabilities. So we should, perhaps, be consistent and term it a ‘probability amplitude wave’ – but then that’s too cumbersome obviously, so the term ‘probability wave’ may be confusing, but it’s not so bad, I think.
Amplitudes and probabilities are related as follows:
1. Probabilities are real numbers between 0 and 1: they represent the probability of something happening, e.g. a photon moves from point A to B, or a photon is absorbed (and emitted) by an electron (i.e. a ‘junction’ or ‘coupling’, as you know).
2. Amplitudes are complex numbers, or ‘arrows’ as Feynman calls them: they have a length (or magnitude) and a direction.
3. We get the probabilities by taking the (absolute) square of the amplitudes, as the little sketch below illustrates.
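In Python, the built-in complex type gives you these ‘arrows’ for free, so the rule is a one-liner (just an illustration of the arithmetic, of course):

```python
# An amplitude as a complex number, and the probability as its absolute square.
a = complex(0.3, 0.4)    # an 'arrow' of length 0.5
print(abs(a))            # 0.5: the length of the arrow
print(abs(a)**2)         # 0.25: the associated probability
```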
So photons aren’t waves, but they aren’t particles either. Likewise, electrons aren’t particles, but they aren’t waves either. They are neither. So what are they? We don’t have words to describe what they are. Some use the term ‘wavicle’ but that doesn’t answer the question, because who knows what a ‘wavicle’ is? So we don’t know what they are. But we do know how they behave. As Feynman puts it, when comparing the behavior of light and then of electrons in the double-slit experiment—struggling to find language to describe what’s going on: “There is one lucky break: electrons behave just like light.”
He says so because of that wave function: the mathematical formalism is the same, for photons and for electrons. Exactly the same? […] But that’s such a weird thing to say, isn’t it? We can’t help thinking of light as waves, and of electrons as particles. They can’t be the same. They’re different, aren’t they? They are.
Scales and senses
To some extent, the weirdness can be explained because the scale of our world is not atomic or sub-atomic. Therefore, we ‘see’ things differently. Let me say a few words about the instrument we use to look at the world: our eye.
Our eye is particular. The retina has two types of receptors: the so-called cones are used in bright light, and distinguish color, but when we are in a dark room, the so-called rods become sensitive, and it is believed that they actually can detect a single photon of light. However, neural filters only allow a signal to pass to the brain when at least five photons arrive within less than a tenth of a second. A tenth of a second is, roughly, the averaging time of our eye. So, as Feynman puts it: “If we were evolved a little further so we could see ten times more sensitively, we wouldn’t have this discussion—we would all have seen very dim light of one color as a series of intermittent little flashes of equal intensity.” In other words, the ‘particle-like’ character of light would have been obvious to us.
Let me make a few more remarks here, which you may or may not find useful. The sense of ‘color’ is not something ‘out there’: colors, like red or brown, are experiences in our eye and our brain. There are ‘pigments’ in the cones (cones are the receptors that work only if the intensity of the light is high enough) and these pigments absorb the light spectrum somewhat differently, as a result of which we ‘see’ color. Different animals see different things. For example, a bee can distinguish between white paper using zinc white versus lead white, because they reflect light differently in the ultraviolet spectrum, which the bee can see but we can’t. Bees can also tell the direction of the sun without seeing the sun itself, because they are sensitive to polarized light, and the scattered light of the sky (i.e. the blue sky as we see it) is polarized. The bee can also notice flicker up to 200 oscillations per second, while we see it only up to 20, because our averaging time is like a tenth of a second, which is short for us, but the averaging time of the bee is much shorter. So we cannot see the quick leg movements and/or wing vibrations of bees, but the bee can!
Sometimes we can’t see any color. For example, we see the night sky in ‘black and white’ because the light intensity is very low, and so it’s our rods, not the cones, that process the signal, and so these rods can’t ‘see’ color. So those beautiful color pictures of nebulae are not artificial (although the pictures are often enhanced). It’s just that the camera that is used to take those pictures (film or, nowadays, digital) is much more sensitive than our eye.
Regardless, color is a quality which we add to our experience of the outside world ourselves. What’s out there are electromagnetic waves with this or that wavelength (or, what amounts to the same, this or that frequency). So when critics of the exact sciences say so much is lost when looking at (visible) light as an electromagnetic wave in the range of 430 to 790 terahertz, they’re wrong. Those critics will say that physics reduces reality. That is not the case.
What’s going on is that our senses process the signal that they are receiving, especially when it comes to vision. As Feynman puts it: “None of the other senses involves such a large amount of calculation, so to speak, before the signal gets into a nerve that one can make measurements on. The calculations for all the rest of the senses usually happen in the brain itself, where it is very difficult to get at specific places to make measurements, because there are so many interconnections. Here, with the visual sense, we have the light, three layers of cells making calculations, and the results of the calculations being transmitted through the optic nerve.”
Hence, things like color and all of the other sensations that we have are the object of study of other sciences, including biochemistry and neurobiology, or physiology. For all we know, what’s ‘out there’ is, effectively, just ‘boring’ stuff, like electromagnetic radiation, energy and ‘elementary particles’—whatever they are. No colors. Just frequencies. 🙂
Light versus matter
If we accept the crazy ideas of quantum mechanics, then the what and the how become one and the same. Hence we can say that photons and electrons are a wavefunction somewhere in space. Photons, of course, are always traveling, because they have energy but no rest mass. Hence, all their energy is in the movement: it’s kinetic, not potential. Electrons, on the other hand, usually stick around some nucleus. And, let’s not forget, they have an electric charge, so their energy is not only kinetic but also potential.
But, otherwise, it’s the same type of ‘thing’ in quantum mechanics: a wavefunction, like those below.
Why diagram A and B? It’s just to emphasize the difference between a real-valued wave function and those ‘probability waves’ we’re looking at here (diagram C to H). A and B represent a mass on a spring, oscillating at more or less the same frequency but a different amplitude. The amplitude here means the displacement of the mass. The function describing the displacement of a mass on a spring (so that’s diagram A and B) is an example of a real-valued wave function: it’s a simple sine or cosine function, as depicted below. [Note that a sine and a cosine are the same function really, except for a phase difference of 90°.]
Let’s now go back to our ‘probability waves’. Photons and electrons, light and matter… The same wavefunction? Really? How can the sunlight that warms us up in the morning and makes trees grow be the same as our body, or the tree? The light-matter duality that we experience must be rooted in very different realities, mustn’t it?
Well… Yes and no. If we’re looking at one photon or one electron only, it’s the same type of wavefunction indeed. The same type… OK, you’ll say. So they are the same family or genus perhaps, as they say in biology. Indeed, both of them are, obviously, being referred to as ‘elementary particles’ in the so-called Standard Model of physics. But so what makes an electron and a photon specific as a species? What are the differences?
There’re quite a few, obviously:
1. First, as mentioned above, a photon is a traveling wave function and, because it has no rest mass, it travels at the ultimate speed, i.e. the speed of light (c). An electron usually sticks around or, if it travels through a wire, it travels at very low speeds. Indeed, you may find it hard to believe, but the drift velocity of the free electrons in a standard copper wire is measured in cm per hour, so that’s very slow indeed—and while the electrons in an electron microscope beam may be accelerated up to 70% of the speed of light, and close to c in those huge accelerators, you’re not likely to find an electron microscope or accelerator in Nature. In fact, you may want to remember that a simple thing like electricity going through copper wires in our houses is a relatively modern invention. 🙂
So, yes, those oscillating wave functions in those diagrams above are likely to represent some electron, rather than a photon. To be precise, the wave functions above are examples of standing (or stationary) waves, while a photon is a traveling wave: just extend that sine and cosine function in both directions if you’d want to visualize it or, even better, think of a sine and cosine function in an envelope traveling through space, such as the one depicted below.
Indeed, while the wave function of our photon is traveling through space, it is likely to be limited in space because, when everything is said and done, our photon is not everywhere: it must be somewhere.
At this point, it’s good to pause and think about what is traveling through space. It’s the oscillation. But what’s the oscillation? There is no medium here, and even if there were some medium (like water or air or something like aether—which, let me remind you, isn’t there!), the medium itself would not be moving, or – I should be precise here – it would only move up and down as the wave propagates through space, as illustrated below. To be fully complete, I should add we also have longitudinal waves, like sound waves (pressure waves): in that case, the particles oscillate back and forth along the direction of wave propagation. But you get the point: the medium does not travel with the wave.
When talking electromagnetic waves, we have no medium. These E and B vectors oscillate, but it is very wrong to assume they take up ‘some core of nearby space’, as Feynman puts it. They don’t. Those field vectors represent a condition at one specific point (admittedly, a point along the direction of travel) in space but, for all we know, an electromagnetic wave travels in a straight line and, hence, we can’t talk about its diameter or so.
Still, as mentioned above, we can imagine, more or less, what E and B stand for (we can use field lines to visualize them, for instance), even if we have to take into account their relativity (calculating their values from a moving reference frame results in different mixtures of E and B). But what are those amplitudes? How should we visualize them?
The honest answer is: we can’t. They are what they are: two mathematical quantities which, taken together, form a two-dimensional vector, which we square to find a value for a real-life probability, which is something that – unlike the amplitude concept – does make sense to us. Still, that representation of a photon above (i.e. the traveling envelope with a sine and cosine inside) may help us to ‘understand’ it somehow. Again, you absolutely have to get rid of the idea that these ‘oscillations’ would somehow occupy some physical space. They don’t. The wave itself has some definite length, for sure, but that’s a measurement in the direction of travel, which is often denoted as x when discussing uncertainty in its position, for example—as in the famous Uncertainty Principle (ΔxΔp > h).
You’ll say: Oh!—but then, at the very least, we can talk about the ‘length’ of a photon, can’t we? So then a photon is one-dimensional at least, not zero-dimensional! The answer is yes and no. I’ve talked about this before and so I’ll be short(er) on it now. A photon is emitted by an atom when an electron jumps from one energy level to another. It thereby emits a wave train that lasts about 10⁻⁸ seconds. That’s not very long but, taking into account the rather spectacular speed of light (3×10⁸ m/s), that still makes for a wave train with a length of not less than 3 meter. […] That’s quite a length, you’ll say. You’re right. But you forget that light travels at the speed of light and, hence, we will see this length as zero because of the relativistic length contraction effect. So… Well… Let me get back to the question: if photons and electrons are both represented by a wavefunction, what makes them different?
2. A more fundamental difference between photons and electrons is how they interact with each other.
From what I’ve written above, you understand that probability amplitudes are complex numbers, or ‘arrows’, or ‘two-dimensional vectors’. [Note that all of these terms have precise mathematical definitions and so they’re actually not the same, but the difference is too subtle to matter here.] Now, there are two ways of combining amplitudes, which are referred to as ‘positive’ and ‘negative’ interference respectively. I should immediately note that there’s actually nothing ‘positive’ or ‘negative’ about the interaction: we’re just putting two arrows together, and there are two ways to do that. That’s all.
The diagrams below show you these two ways. You’ll say: there are four! However, remember that we square an arrow to get a probability. Hence, the direction of the final arrow doesn’t matter when we’re taking the square: we get the same probability. It’s the direction of the individual amplitudes that matters when combining them. So the square of A+B is the same as the square of –(A+B) = –A–B. Likewise, the square of A–B is the same as the square of –(A–B) = –A+B.
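A two-line check makes the point about the overall sign (again, just illustrating the arithmetic):

```python
# Combining two 'arrows' in the two possible ways: the overall sign of the
# final arrow doesn't matter, but the relative direction of A and B does.
A = complex(0.3, 0.4)
B = complex(0.2, 0.1)
print(abs(A + B)**2, abs(-(A + B))**2)   # 0.5 and 0.5: same probability
print(abs(A - B)**2, abs(-(A - B))**2)   # 0.1 and 0.1: same probability
```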
These are the only two logical possibilities for combining arrows. I’ve written ad nauseam about this elsewhere: see my post on amplitudes and statistics, and so I won’t go into too much detail here. Or, in case you’d want something less than a full mathematical treatment, I can refer you to my previous post also, where I talked about the ‘stopwatch’ and the ‘phase’: the convention for the stopwatch is to have its hand turn clockwise (obviously!) while, in quantum physics, the phase of a wave function will turn counterclockwise. But so that’s just convention and it doesn’t matter, because it’s the phase difference between two amplitudes that counts. To use plain language: it’s the difference in the angles of the arrows, and so that difference is just the same if we reverse the direction of both arrows (which is equivalent to putting a minus sign in front of the final arrow).
OK. Let me get back to the lesson. The point is: this logical or mathematical dichotomy distinguishes bosons (i.e. force-carrying ‘particles’, like photons, which carry the electromagnetic force) from fermions (i.e. ‘matter-particles’, such as electrons and quarks, which make up protons and neutrons). Indeed, the so-called ‘positive’ and ‘negative’ interference leads to two very different behaviors:
1. The probability of getting a boson where there are n already present is n+1 times stronger than it would be if there were none before.
2. In contrast, the probability of getting two electrons into exactly the same state is zero.
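Here’s how that dichotomy looks in a few lines of Python – the length and angle of the amplitude are arbitrary values of mine, but the arithmetic is the general rule: with two identical amplitudes, adding the arrows makes the event four times as likely as for one particle alone, while subtracting them gives exactly zero.

```python
import cmath

# Two identical probability amplitudes ('arrows'), as complex numbers.
a = cmath.rect(0.4, cmath.pi / 3)   # length 0.4, angle 60 degrees (my values)
b = a                               # a second particle in the very same state

p_single  = abs(a) ** 2             # probability for one particle alone: 0.16
p_boson   = abs(a + b) ** 2         # bosons: add the arrows -> 0.64, 4x stronger
p_fermion = abs(a - b) ** 2         # fermions: subtract them -> 0.0, excluded
print(p_single, p_boson, p_fermion)
```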
The behavior of photons makes lasers possible: we can pile zillions of photons on top of each other, and then release all of them in one powerful burst. [The ‘flickering’ of a laser beam is due to the quick succession of such light bursts. If you want to know how it works in detail, check my post on lasers.]
The behavior of electrons is referred to as the Pauli exclusion principle: it is only because real-life electrons can have one of two spin polarizations (i.e. two opposite directions of angular momentum, which are referred to as ‘up’ or ‘down’, but they might as well have been referred to as ‘left’ or ‘right’) that we find two electrons (instead of just one) in any atomic or molecular orbital.
So, yes, while both photons and electrons can be described by a similar-looking wave function, their behavior is fundamentally different indeed. How is that possible? Adding and subtracting ‘arrows’ is a very similar operation, isn’t it?
It is and it isn’t. From a mathematical point of view, I’d say: yes. From a physics point of view, it’s obviously not very ‘similar’, as it does lead to these two very different behaviors: the behavior of photons allows for laser shows, while the behavior of electrons explains (almost) all the peculiarities of the material world, including why we walk into doors instead of through them. 🙂 If you want to check it out for yourself, just check Feynman’s Lectures for more details on this or, else, re-read my posts on it indeed.
3. Of course, there are even more differences between photons and electrons than the two key differences I mentioned above. Indeed, I’ve simplified a lot when I wrote what I wrote above. The wavefunctions of electrons in orbit around a nucleus can take very weird shapes, as shown in the illustration below—and please do google a few others if you’re not convinced. As mentioned above, they’re so-called standing waves, because they occupy a well-defined position in space only, but standing waves can look very weird. In contrast, traveling plane waves, or envelope curves like the one above, are much simpler.
In short: yes, the mathematical representation of photons and electrons (i.e. the wavefunction) is very similar, but photons and electrons are very different animals indeed.
Potentiality and interconnectedness
I guess that, by now, you agree that quantum theory is weird but, as you know, quantum theory does explain all of the stuff that couldn’t be explained before: “It works like a charm”, as Feynman puts it. In fact, he’s often quoted as having said the following:
“It is often stated that of all the theories proposed in this century, the silliest is quantum theory. Some say that the only thing that quantum theory has going for it, in fact, is that it is unquestionably correct.”
Silly? Crazy? Uncommon-sensy? Truth be told, you do get used to thinking in terms of amplitudes after a while. And, when you get used to them, those ‘complex’ numbers are no longer complicated. 🙂 Most importantly, when one thinks long and hard enough about it (as I am trying to do), it somehow all starts making sense.
For example, we’ve done away with dualism by adopting a unified mathematical framework, but the distinction between bosons and fermions still stands: an ‘elementary particle’ is either this or that. There are no ‘split personalities’ here. So the dualism just pops up at a different level of description, I’d say. In fact, I’d go one step further and say it pops up at a deeper level of understanding.
But what about the other assumptions in quantum mechanics? Some of them don’t make sense, do they? Well… I struggled for quite a while with the assumption that, in quantum mechanics, anything is possible really. For example, a photon (or an electron) can take any path in space, and it can travel at any speed (including speeds that are lower or higher than that of light). The probability may be extremely low, but it’s possible.
Now that is a very weird assumption. Why? Well… Think about it. If you enjoy watching soccer, you’ll agree that flying objects (I am talking about the soccer ball here) can have amazing trajectories. Spin, lift, drag, whatever—the result is a weird trajectory, like the one below:
But, frankly, a photon taking the ‘southern’ route in the illustration below? What are the ‘wheels and gears’ there? There’s nothing sensible about that route, is there?
In fact, there are at least three issues here:
1. First, you should note that strange curved paths in the real world (such as the trajectories of billiard or soccer balls) are possible only because there’s friction involved—between the felt of the pool table cloth and the ball, or between the balls, or, in the case of soccer, between the ball and the air. There’s no friction in the vacuum. Hence, in empty space, all things should go in a straight line only.
2. While it’s quite amazing what’s possible, in the real world that is, in terms of ‘weird trajectories’, even the weirdest trajectories of a billiard or soccer ball can be described by a ‘nice’ mathematical function. We obviously can’t say the same of that ‘southern route’ which a photon could follow, in theory that is. Indeed, you’ll agree the function describing that trajectory cannot be ‘nice’. So even if we’d allow all kinds of ‘weird’ trajectories, shouldn’t we limit ourselves to ‘nice’ trajectories only? I mean: it doesn’t make sense to allow the photons traveling from your computer screen to your retina to take some trajectory to the Sun and back, does it?
3. Finally, and most fundamentally perhaps, even when we would assume that there’s some mechanism combining (a) internal ‘wheels and gears’ (such as spin or angular momentum) with (b) felt or air or whatever medium to push against, what would be the mechanism determining the choice of the photon in regard to these various paths? In Feynman’s words: How does the photon ‘make up its mind’?
Feynman answers these questions, fully or partially (I’ll let you judge), when discussing the double-slit experiment with photons:
“Saying that a photon goes this or that way is false. I still catch myself saying, “Well, it goes either this way or that way,” but when I say that, I have to keep in mind that I mean in the sense of adding amplitudes: the photon has an amplitude to go one way, and an amplitude to go the other way. If the amplitudes oppose each other, the light won’t get there—even though both holes are open.”
It’s probably worth recalling the results of that experiment here—if only to help you judge whether or not Feynman fully answers those questions above!
The set-up is shown below. We have a source S, two slits (A and B), and a detector D. The source sends photons out, one by one. In addition, we have two special detectors near the slits, which may or may not detect a photon, depending on whether or not they’re switched on as well as on their accuracy.
First, we close one of the slits, and we find that 1% of the photons go through the other (so that’s one photon for every 100 photons that leave S). Now, we open both slits to study interference. You know the results already:
1. If we switch the detectors off (so we have no way of knowing where the photon went), we get interference. The interference pattern depends on the distance between A and B and varies from 0% to 4%, as shown in diagram (a) below. That’s pretty standard. As you know, classical theory can explain that too, assuming light is an electromagnetic wave. But so we have blobs of energy – photons – traveling one by one. So it’s really like that double-slit experiment with electrons, or whatever other microscopic particles (as you know, they’ve done these interference experiments with large molecules as well—and they get the same result!). We get the interference pattern by using those quantum-mechanical rules to calculate probabilities: we first add the amplitudes, and it’s only when we’re finished adding those amplitudes that we square the resulting arrow to get the final probability.
2. If we switch those special detectors on, and if they are 100% reliable (i.e. all photons going through are being detected), then our photon suddenly behaves like a particle, instead of as a wave: it will go through one of the slits only, i.e. either through A or, alternatively, through B. So the two special detectors never go off together. Hence, as Feynman puts it: we shouldn’t think there is some “sneaky way that the photon divides in two and then comes back together again.” It’s one way or the other, and there’s no interference: the detector at D goes off 2% of the time, which is the simple sum of the probabilities for A and B (i.e. 1% + 1%).
3. When the special detectors near A and B are not 100% reliable (and, hence, do not detect all photons going through), we have three possible final conditions: (i) A and D go off, (ii) B and D go off, and (iii) D goes off alone (none of the special detectors went off). In that case, we have a final curve that’s a mixture, as shown in diagrams (c) and (d) below. We get it using the same quantum-mechanical rules: we add amplitudes first, and then we square to get the probabilities.
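To see the arithmetic behind those three cases, here’s a little sketch with toy numbers of my own: each slit alone lets 1% of the photons through, so each amplitude is an arrow of length 0.1, and the phase difference δ varies with the position of the detector.

```python
import cmath

r = 0.1   # each slit alone lets 1% through, so each arrow has length 0.1

def p_no_detectors(delta):
    """Detectors off: add the two arrows first, then square."""
    a = cmath.rect(r, 0.0)      # amplitude via slit A
    b = cmath.rect(r, delta)    # amplitude via slit B, phase-shifted by delta
    return abs(a + b) ** 2

def p_with_detectors():
    """Perfect detectors on: square each amplitude first, then add."""
    return r ** 2 + r ** 2

print(p_no_detectors(0.0))       # 0.04 -> the 4% maximum
print(p_no_detectors(cmath.pi))  # ~0.0 -> the 0% minimum
print(p_with_detectors())        # 0.02 -> a flat 2%: no interference
```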
Now, I think you’ll agree with me that Feynman doesn’t answer my (our) question in regard to the ‘weird paths’. In fact, all of the diagrams he uses assume straight or nearby paths. Let me re-insert two of those diagrams below, to show you what I mean.
So where are all the strange non-linear paths here? Let me, in order to make sure you get what I am saying here, insert that illustration with the three crazy routes once again. What we’ve got above (Figure 33 and 34) is not like that. Not at all: we’ve got only straight lines there! Why? The answer to that question is easy: the crazy paths don’t matter because their amplitudes cancel each other out, and so that allows Feynman to simplify the whole situation and show all the relevant paths as straight lines only.
Now, I struggled with that for quite a while. Not because I can’t see the math or the geometry involved. No. Feynman does a great job showing why those amplitudes cancel each other out indeed (if you want a summary, see my previous post once again). My ‘problem’ is something else. It’s hard to phrase it, but let me try: why would we even allow for the logical or mathematical possibility of ‘weird paths’ (and let me again insert that stupid diagram below) if our ‘set of rules’ ensures that the truly ‘weird’ paths (like that photon traveling from your computer screen to your eye doing a detour taking it to the Sun and back) cancel each other out anyway? Does that respect Occam’s Razor? Can’t we devise some theory including ‘sensible’ paths only?
Of course, I am just an autodidact with limited time, and I know hundreds (if not thousands) of the best scientists have thought long and hard about this question and, hence, I readily accept the answer is quite simply: no. There is no better theory. I accept that answer, ungrudgingly, not only because I think I am not so smart as those scientists but also because, as I pointed out above, one can’t explain any path that deviates from a straight line really, as there is no medium, so there are no ‘wheels and gears’. The only path that makes sense is the straight line, and that’s only because…
Well… Thinking about it… We think the straight path makes sense because we have no good theory for any of the other paths. Hmm… So, from a logical point of view, assuming that the straight line is the only reasonable path is actually pretty random too. When push comes to shove, we have no good theory for the straight line either!
You’ll say I’ve just gone crazy. […] Well… Perhaps you’re right. 🙂 But… Somehow, it starts to make sense to me. We allow for everything and then, indeed, weed out the crazy paths using our interference theory, and so we do end up with what we end up with: some kind of vague idea of “light not really traveling in a straight line but ‘smelling’ all of the neighboring paths around it and, hence, using a small core of nearby space”—as Feynman puts it.
Hmm… It brings me back to Richard Feynman’s introduction to his wonderful little book, in which he says we should just be happy to know how Nature works and not aspire to know why it works that way. In fact, he’s basically saying that, when it comes to quantum mechanics, the ‘how’ and the ‘why’ are one and the same, so asking ‘why’ doesn’t make sense, because we know ‘how’. He compares quantum theory with the system of calculation used by the Maya priests, which was based on a system of bars and dots, which helped them to do complex multiplications and divisions, for example. He writes the following about it: “The rules were tricky, but they were a much more efficient way of getting an answer to complicated questions (such as when Venus would rise again) than by counting beans.”
When I first read this, I thought the comparison was flawed: if a common Maya Indian did not want to use the ‘tricky’ rules of multiplication and what have you (or, more likely, if he didn’t understand them), he or she could still resort to counting beans. But how do we count beans in quantum mechanics? We have no ‘simpler’ rules than those weird rules about adding amplitudes and taking the (absolute) square of complex numbers so… Well… We actually are counting beans here then:
1. We allow for any possibility—any path: straight, curved or crooked. Anything is possible.
2. But all those possibilities are inter-connected. Also note that every path has a mirror image: for every route ‘south’, there is a similar route ‘north’, so to say, except for the straight line, which is a mirror image of itself.
3. And then we have some clock ticking. Time goes by. It ensures that the paths that are too far removed from the straight line cancel each other. [Of course, you’ll ask: what is too far? But I answered that question – convincingly, I hope – in my previous post: it’s not about the ‘number of arrows’ (as suggested in the caption under that Figure 34 above), but about the frequency and, hence, the ‘wavelength’ of our photon.]
4. And so… Finally, what’s left is a limited number of possibilities that interfere with each other, which results in what we ‘see’: light seems to use a small core of space indeed–a limited number of nearby paths.
You’ll say… Well… That still doesn’t ‘explain’ why the interference pattern disappears with those special detectors or – what amounts to the same – why the special detectors at the slits never click simultaneously.
You’re right. How do we make sense of that? I don’t know. You should try to imagine what happens for yourself. Everyone has his or her own way of ‘conceptualizing’ stuff, I’d say, and you may well be content and just accept all of the above without trying to ‘imagine’ what’s happening really when a ‘photon’ goes through one or both of those slits. In fact, that’s the most sensible thing to do. You should not try to imagine what happens and just follow the crazy calculus rules.
However, when I think about it, I do have some image in my head. The image is of one of those ‘touch-me-not’ weeds. I quickly googled one of these images, but I couldn’t quite find what I was looking for: it would be more like something that, when you touch it, curls up in a little ball. In any case… You know what I mean, I hope.
You’ll shake your head now and solemnly confirm that I’ve gone mad. Touch-me-not weeds? What’s that got to do with photons?
Well… It’s obvious you and I cannot really imagine what a photon looks like. But I think of it as a blob of energy indeed, which is inseparable, and which effectively occupies some space (in three dimensions that is). I also think that, whatever it is, it actually does travel through both slits, because, as it interferes with itself, the interference pattern does depend on the space between the two slits as well as the width of those slits. In short, the whole ‘geometry’ of the situation matters, and so the ‘interaction’ is some kind of ‘spatial’ thing. [Sorry for my awfully imprecise language here.]
Having said that, I think it’s being detected by one detector only because only one of them can sort of ‘hook’ it, somehow. Indeed, because it’s interconnected and inseparable, it’s the whole blob that gets hooked, not just one part of it. [You may or may not imagine that the detector that’s got the best hold of it gets it, but I think that’s pushing the description too much.] In any case, the point is that a photon is surely not like a lizard dropping its tail while trying to escape. Perhaps it’s some kind of unbreakable ‘string’ indeed – and sorry for summarizing string theory so unscientifically here – but then a string oscillating in dimensions we can’t imagine (or in some dimension we can’t observe, like the Kaluza-Klein theory suggests). It’s something, for sure, and something that stores energy in some kind of oscillation, I think.
What it is, exactly, we can’t imagine, and we’ll probably never find out—unless we accept that the how of quantum mechanics is not only the why, but also the what. 🙂
Does this make sense? Probably not but, if anything, I hope it fired your imagination at least. 🙂
# The Strange Theory of Light and Matter (I)
I am of the opinion that Richard Feynman’s wonderful little common-sense introduction to the ‘uncommon-sensy‘ theory of quantum electrodynamics (The Strange Theory of Light and Matter), which was published just a few years before his death, should be mandatory reading for high school students.
I actually mean that: it should just be part of the general education of the first 21st century generation. Either that or, else, the Education Board should include a full-fledged introduction to complex analysis and quantum physics in the curriculum. 🙂
Having praised it (just now, as well as in previous posts), I re-read it recently during a trek in Nepal with my kids – I just grabbed the smallest book I could find the morning we left 🙂 – and, frankly, I now think Ralph Leighton, who transcribed and edited these four short lectures, could have cross-referenced it better. Moreover, there are two or three points where Feynman (or Leighton?) may have sacrificed accuracy for readability. Let me recapitulate the key points and try to improve here and there.
Amplitudes and arrows
The booklet avoids scary mathematical terms and formulas but doesn’t avoid the fundamental concepts behind them, and it doesn’t avoid the kind of ‘deep’ analysis one needs to get some kind of ‘feel’ for quantum mechanics either. So what are the simplifications?
A probability amplitude (i.e. a complex number) is, quite simply, an arrow, with a direction and a length. Thus Feynman writes: “Arrows representing probabilities from 0% to 16% [as measured by the surface of the square which has the arrow as its side] have lengths from 0 to 0.4.” That makes sense: such a geometrical approach does away, for example, with the need to talk about the absolute square (i.e. the square of the absolute value, or the squared norm) of a complex number – which is what we need to calculate probabilities from probability amplitudes. So, yes, it’s a wonderful metaphor. We have arrows and surfaces now, instead of wave functions and absolute squares of complex numbers.
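In fact, the whole arrow-to-probability dictionary fits in one line of code – the lengths below are mine, picked to match Feynman’s 0-to-0.4 range:

```python
# The arrow-to-probability dictionary in one line: probability = length squared.
for length in (0.0, 0.2, 0.4):
    print(length, "->", length ** 2)   # 0 -> 0%, 0.2 -> 4%, 0.4 -> 16%
```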
The way he combines these arrows makes sense too. He even notes the difference between photons (bosons) and electrons (fermions): for bosons, we just add arrows; for fermions, we need to subtract them (see my post on amplitudes and statistics in this regard).
There is also the metaphor for the phase of a wave function, which is a stroke of genius really (I mean it): the direction of the ‘arrow’ is determined by a stopwatch hand, which starts turning when a photon leaves the light source, and stops when it arrives, as shown below.
OK. Enough praise. What are the drawbacks?
The illustration above accompanies an analysis of how light is either reflected from the front surface of a sheet of a glass or, else, from the back surface. Because it takes more time to bounce off the back surface (the path is associated with a greater distance), the front and back reflection arrows point in different directions indeed (the stopwatch is stopped somewhat later when the photon reflects from the back surface). Hence, the difference in phase (but that’s a term that Feynman also avoids) is determined by the thickness of the glass. Just look at it. In the upper part of the illustration above, the thickness is such that the chance of a photon reflecting off the front or back surface is 5%: we add two arrows, each with a length of 0.2, and then we square the resulting (aka final) arrow. Bingo! We get a surface measuring 0.05, or 5%.
Huh? Yes. Just look at it: if the two arrows were at right angles exactly, we would get 0.04 + 0.04 = 0.08, or 8%. To get 5%, the arrows have to point partly against each other: their directions differ by about 112°. In the lower part of the illustration, the thickness of the glass is such that the two arrows ‘line up’ and, hence, they form an arrow that’s twice the length of either arrow alone (0.2 + 0.2 = 0.4), with a square four times as large (0.16 = 16%). So… It all works like a charm, as Feynman puts it.
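You can check those numbers with a few lines of Python – the sweep over angles is mine, but the 0.2 arrow length is the one from the book:

```python
import cmath, math

r = 0.2   # each reflection arrow has length 0.2, as in the book

def p(theta_deg):
    """Probability when the two arrows' directions differ by theta."""
    a = cmath.rect(r, 0.0)
    b = cmath.rect(r, math.radians(theta_deg))
    return abs(a + b) ** 2

print(p(0))    # 0.16 -> the arrows line up: 16%
print(p(90))   # 0.08 -> right angles: 8%
print(p(112))  # ~0.05 -> about 5%
print(p(180))  # 0.0  -> the arrows cancel: no reflection at all
```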
[…]
But… Hey! Look at the stopwatch for the front reflection arrows in the upper and lower diagram: they point in the opposite direction of the stopwatch hand! Well… Hmm… You’re right. At this point, Feynman just notes that we need an extra rule: “When we are considering the path of a photon bouncing off the front surface of the glass, we reverse the direction of the arrow.”
He doesn’t say why. He just adds this random rule to the other rules – which most readers who read this book already know. But why this new rule? Frankly, this inconsistency – or lack of clarity – would wake me up at night. This is Feynman: there must be a reason. Why?
Initially, I suspected it had something to do with the two types of ‘statistics’ in quantum mechanics (i.e. those different rules for combining amplitudes of bosons and fermions respectively, which I mentioned above). But… No. Photons are bosons anyway, so we surely need to add, not subtract. So what is it?
[…] Feynman explains it later, much later – in the third of the four chapters of this little book, to be precise. It’s, quite simply, the result of the simplified model he uses in that first chapter. The photon can do anything really, and so there are many more arrows than just two. We actually should look at an infinite number of arrows, representing all possible paths in spacetime, and, hence, the two arrows (i.e. the one for the reflection from the front and back surface respectively) are combinations of many other arrows themselves. So how does that work?
An analysis of partial reflection (I)
The analysis in Chapter 3 of the same phenomenon (i.e. partial reflection by glass) is a simplified analysis too, but it’s much better – because there are no ‘random’ rules here. It is what Leighton promises to the reader in his introduction: “A complete description, accurate in every detail, of a framework onto which more advanced concepts can be attached without modification. Nothing has to be ‘unlearned’ later.”
Well… Accurate in every detail? Perhaps not. But it’s good, and I still warmly recommend a reading of this delightful little book to anyone who’d ask me what to read as a non-mathematical introduction to quantum mechanics. I’ll limit myself here to just some annotations.
The first drawing (a) depicts the situation:
1. A photon from a light source is being reflected by the glass. Note that it may also go straight through, but that’s a possibility we’ll analyze separately. We first assume that the photon is effectively being reflected by the glass, and so we want to calculate the probability of that event using all these ‘arrows’, i.e. the underlying probability amplitudes.
2. As for the geometry of the situation: while the light source and the detector seem to be positioned at some angle from the normal, that is not the case: the photon travels straight down (and up again when reflected). It’s just a limitation of the drawing. It doesn’t really matter much for the analysis: we could look at a light beam coming in at some angle, but so we’re not doing that. It’s the simplest situation possible, in terms of experimental set-up that is. I just want to be clear on that.
Now, rather than looking at the front and back surface only (as Feynman does in Chapter 1), the glass sheet is now divided into a number of very thin sections: five, in this case, so we have six points from which the photon can be scattered into the detector at A: X1 to X6. So that makes six possible paths. That’s quite a simplification but it’s easy to see it doesn’t matter: adding more sections would result in many more arrows, but these arrows would also be much smaller, and so the final arrow would be the same.
The more significant simplification is that the paths are all straight paths, and that the photon is assumed to travel at the speed of light, always. If you haven’t read the booklet, you’ll say that’s obvious, but it’s not: a photon has an amplitude to go faster or slower than c but, as Feynman points out, these amplitudes cancel out over longer distances. Likewise, a photon can follow any path in space really, including terribly crooked paths, but these paths also cancel out. As Feynman puts it: “Only the paths near the straight-line path have arrows pointing in nearly the same direction, because their timings are nearly the same, and only these arrows are important, because it is from them that we accumulate a large final arrow.” That makes perfect sense, so there’s no problem with the analysis here either.
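To make that ‘accumulation’ a bit more tangible, here’s a toy path sum – all numbers are mine, and it’s a sketch, not the real propagator: we send a photon from a source S to a detector D via an intermediate point offset sideways by y, give each path a unit arrow whose angle is proportional to the path length, and add the arrows for two bands of paths.

```python
import cmath, math

# Toy path sum (numbers all mine): a photon goes from a source S to a
# detector D, 2*L apart, via a point halfway in between that is offset
# sideways by y. Each path gets a unit arrow whose angle is k times the
# path length; we then add the arrows for a band of neighboring paths.
L, k = 100.0, 5.0   # half the S-to-D distance and the wave number

def arrow(y):
    path = 2.0 * math.sqrt(L ** 2 + y ** 2)   # S -> offset point -> D
    return cmath.exp(1j * k * path)

near = sum(arrow(0.1 * i) for i in range(-50, 51))         # y in [-5, 5]
far  = sum(arrow(50.0 + 0.1 * i) for i in range(-50, 51))  # y in [45, 55]
print(abs(near))   # large: the near-straight arrows all point the same way
print(abs(far))    # small: the far-out arrows turn fast and cancel out
```

Run it and you’ll see the first sum is large while the second is tiny: that’s the ‘large final arrow’ accumulating from the nearby paths only.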
So let’s have a look at those six arrows in illustration (b). They point in a slightly different direction because the paths are slightly different and, hence, the distances (and, therefore, the timings) are different too. Now, Feynman (but I think it’s Leighton really) loses himself here in a digression on monochromatic light sources. A photon is a photon: it will have some wave function with a phase that varies in time and in space and, hence, illustration (b) makes perfect sense. [I won’t quote what he writes on a ‘monochromatic light source’ because it’s quite confusing and, IMHO, not correct.]
The stopwatch metaphor has only one minor shortcoming: the hand of a stopwatch rotates clockwise (obviously!), while the phase of an actual wave function goes counterclockwise with time. That’s just convention, and I’ll come back to it when I discuss the mathematical representation of the so-called wave function, which gives you these amplitudes. However, it doesn’t change the analysis, because it’s the difference in the phase that matters when combining amplitudes, so the clock can turn in either way indeed, as long as we’re agreed on it.
At this point, I can’t resist: I’ll just throw the math in. If you don’t like it, you can just skip the section that follows.
Feynman’s arrows and the wave function
The mathematical representation of Feynman’s ‘arrows’ is the wave function:
f = f(x–ct)
Is that the wave function? Yes. It is: it’s a function whose argument is x – ct, with x the position in space, and t the time variable. As for c, that’s the speed of light. We throw it in to make the units in which we measure time and position compatible.
Really? Yes: f is just a regular wave function. To make it look somewhat more impressive, I could use the Greek symbol Φ (phi) or Ψ (psi) for it, but it’s just what it is: a function whose value depends on position and time indeed, so we write f = f(x–ct). Let me explain the minus sign and the c in the argument.
Time and space are interchangeable in the argument, provided we measure time in the ‘right’ units, and so that’s why we multiply the time in seconds with c, so the new unit of time becomes the time that light needs to travel a distance of one meter. That also explains the minus sign in front of ct: if we add one distance unit (i.e. one meter) to the argument, we have to subtract one time unit from it – the new time unit of course, so that’s the time that light needs to travel one meter – in order to get the same value for f. [If you don’t get that x–ct thing, just think a while about this, or make some drawing of a wave function. Also note that the spacetime diagram in illustration (b) above assumes the same: time is measured in an equivalent unit as distance, so the 45° line from the south-west to the north-east, that bounces back to the north-west, represents a photon traveling at speed c in space indeed: one unit of time corresponds to one meter of travel.]
Now I want to be a bit more aggressive. I said f is a simple function. That’s true and not true at the same time. It’s a simple function, but it gives you probability amplitudes, which are complex numbers – and you may think that complex numbers are, perhaps, not so simple. However, you shouldn’t be put off. Complex numbers are really like Feynman’s ‘arrows’ and, hence, fairly simple things indeed. They have two dimensions, so to say: an a- and a b-coordinate. [I’d say an x- and y-coordinate, because that’s what you usually see, but then I used the x symbol already for the position variable in the argument of the function, so you have to switch to a and b for a while now.]
These a- and b-coordinates are referred to as the real and imaginary part of a complex number respectively. The terms ‘real’ and ‘imaginary’ are confusing because both parts are ‘real’ – well… As real as numbers can be, I’d say. 🙂 They’re just two different directions in space: the real axis is the a-axis in coordinate space, and the imaginary axis is the b-axis. So we could write it as an ordered pair of numbers (a, b). However, we usually write it as a number itself, and we distinguish the b-coordinate from the a-coordinate by writing an i in front: (a, b) = a + ib. So our function f = f(x–ct) is a complex-valued function: it will give you two numbers (an a and a b) instead of just one when you ‘feed’ it with specific values for x and t. So we write:
f = f(x–ct) = (a, b) = a + ib
So what’s the shape of this function? Is it linear or irregular or what? We’re talking a very regular wave function here, so its shape is ‘regular’ indeed. It’s a periodic function, so it repeats itself again and again. The animations below give you some idea of such ‘regular’ wave functions. Animations A and B show a real-valued ‘wave’: a ball on a string that goes up and down, for ever and ever. Animations C to H are – believe it or not – basically the same thing, but so we have two numbers going up and down. That’s all.
The wave functions above are, obviously, confined in space, and so the horizontal axis represents the position in space. What we see, then, is how the real and imaginary part of these wave functions varies as time goes by. [Think of the blue graph as the real part, and the imaginary part as the pinkish thing – or the other way around. It doesn’t matter.] Now, our wave function – i.e. the one that Feynman uses to calculate all those probabilities – is even more regular than those shown above: its real part is an ordinary cosine function, and its imaginary part is a sine. Let me write this in math:
f = f(x–ct) = a + ib = r·(cosφ + i·sinφ)
It’s really the most regular wave function in the world: the very simple illustration below shows how the two components of f vary as a function in space (i.e. the horizontal axis) while we keep the time fixed, or vice versa: it could also show how the function varies in time at one particular point in space, in which case the horizontal axis would represent the time variable. It is what it is: a sine and a cosine function, with the angle φ as its argument.
Note that a sine function is the same as a cosine function, but it just lags a bit. To be precise, the phase difference is 90°, or π/2 in radians (the radian (i.e. the length of the arc on the unit circle) is a much more natural unit to express angles, as it’s fully compatible with our distance unit and, hence, most – if not all – of our other units). Indeed, you may or may not remember the following trigonometric identities: sinφ = cos(π/2–φ) = cos(φ–π/2).
In any case, now we have some r and φ here, instead of a and b. You probably wonder where I am going with all of this. Where are the x and t variables? Be patient! You’re right. We’ll get there. I have to explain that r and φ first. Together, they are the so-called polar coordinates of Feynman’s ‘arrow’ (i.e. the amplitude). Polar coordinates are just as good as the Cartesian coordinates we’re used to (i.e. a and b). It’s just a different coordinate system. The illustration below shows how they are related to each other. If you remember anything from your high school trigonometry course, you’ll immediately agree that a is, obviously, equal to r·cosφ, and b is r·sinφ, which is what I wrote above. Just as good? Well… The polar coordinate system has some disadvantages (all of those expressions and rules we learned in vector analysis assume rectangular coordinates, and so we should watch out!) but, for our purpose here, polar coordinates are actually easier to work with, so they’re better.
Feynman’s wave function is extremely simple because his ‘arrows’ have a fixed length, just like the stopwatch hand. They’re just turning around and around and around as time goes by. In other words, r is constant and does not depend on position and time. It’s the angle φ that’s turning and turning and turning as the stopwatch ticks while our photon is covering larger and larger distances. Hence, we need to find a formula for φ that makes it explicit how φ changes as a function of position and time. That φ variable is referred to as the phase of the wave function. That’s a term you’ll encounter frequently and so I had better mention it. In fact, it’s generally used as a synonym for any angle, as you can see from my remark on the phase difference between a sine and cosine function.
So how do we express φ as a function of x and t? That’s where Euler’s formula comes in. Feynman calls it the most remarkable formula in mathematics – our jewel! And he’s probably right: of all the theorems and formulas, I guess this is the one we can’t do without when studying physics. I’ve written about this in another post, and repeating what I wrote there would eat up too much space, so I won’t do it and just give you that formula. A regular complex-valued wave function can be represented as a complex (natural) exponential function, i.e. an exponential function with Euler’s number e (i.e. 2.718…) as the base, and the complex number iφ as the (variable) exponent. Indeed, according to Euler’s formula, we can write:
f = f(x–ct) = a + ib = r·(cosφ + i·sinφ) = r·e^(iφ)
As I haven’t explained Euler’s formula (you should really have a look at my posts on it), you should just believe me when I say that r·e^(iφ) is an ‘arrow’ indeed, with length r and angle φ (phi), as illustrated above, with coordinates a = r·cosφ and b = r·sinφ. What you should be able to do now is to imagine how that φ angle goes round and round as time goes by, just like Feynman’s ‘arrow’ goes round and round – just like a stopwatch hand indeed, except that our φ angle turns counterclockwise.
Fine, you’ll say – but so we need a mathematical expression, don’t we? Yes, we do. We need to know how that φ angle (i.e. the variable in our r·e^(iφ) function) changes as a function of x and t indeed. It turns out that the φ in r·e^(iφ) can be substituted as follows:
r·e^(iφ) = r·e^(i(ωt–kx)) = r·e^(–ik(x–ct))
Huh? Yes. The phase (φ) of the probability amplitude (i.e. the ‘arrow’) is a simple linear function of x and t indeed: φ = ωt–kx = –k(x–ct). What about all these new symbols, k and ω? The ω and k in this equation are the so-called angular frequency and the wave number of the wave. The angular frequency is just the frequency expressed in radians per second, and you should think of the wave number as the frequency in space. [I could write some more here, but I can’t make it too long, and you can easily look up stuff like this on the Web.] Now, the propagation speed c of the wave is, quite simply, the ratio of these two numbers: c = ω/k. [Again, it’s easy to show how that works, but I won’t do it here.]
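If you want to play with this, here’s a minimal sketch – the numbers are mine, chosen so that c = ω/k = 1: it builds the arrow r·e^(i(ωt–kx)) and checks that the arrow is unchanged when you move along with the wave at speed c, while its length r never changes.

```python
import cmath

r, omega, k = 1.0, 3.0, 3.0     # arbitrary units, so that c = omega/k = 1

def f(x, t):
    """One of Feynman's 'arrows' as a function of position and time."""
    return r * cmath.exp(1j * (omega * t - k * x))

c = omega / k
print(f(0.0, 0.0))         # (1+0j)
print(f(c * 2.0, 2.0))     # (1+0j) again: moving along at speed c, the
                           # phase - and hence the arrow - stays the same
print(abs(f(1.23, 4.56)))  # 1.0: the length r never changes
```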
Now you know it all, and so it’s time to get back to the lesson.
An analysis of partial reflection (II)
Why did I digress? Well… I think that what I write above makes much more sense than Leighton’s rather convoluted description of a monochromatic light source as he tries to explain those arrows in diagram (b) above. Whatever it is, a monochromatic light source is surely not “a device that has been carefully arranged so that the amplitude for a photon to be emitted at a certain time can be easily calculated.” That’s plain nonsense. Monochromatic light is light of a specific color, so all photons have the same frequency (or, to be precise, their wave functions all have the same well-defined frequency), but these photons are not in phase. Photons are emitted by atoms, as an electron moves from one energy level to the other. Now, when a photon is emitted, what actually happens is that the atom radiates a train of waves only for about 10⁻⁸ sec, so that’s about 10 billionths of a second. After 10⁻⁸ sec, some other atom takes over, and then another atom, and so on. Each atom emits one photon, whose energy is the difference between the two energy levels that the electron is jumping to or from. So the phase of the light that is being emitted can really only stay the same for about 10⁻⁸ sec. Full stop.
Now, what I write above on how atoms actually emit photons is a paraphrase of Feynman’s own words in his much more serious series of Lectures on Mechanics, Radiation and Heat. Therefore, I am pretty sure it’s Leighton who gets somewhat lost when trying to explain what’s happening. It’s not photons that interfere. It’s the probability amplitudes associated with the various paths that a photon can take. To be fully precise, we’re talking about the photon here, i.e. the one that ends up in the detector, and so what’s going on is that the photon is interfering with itself. Indeed, that’s exactly what the ‘craziness’ of quantum mechanics is all about: we send electrons, one by one, through two slits, and we observe an interference pattern. Likewise, we’ve got one photon here, which can go various ways, and it’s those amplitudes that interfere, so… Yes: the photon interferes with itself.
OK. Let’s get back to the lesson and look at diagram (c) now, in which the six arrows are added. As mentioned above, it would not make any difference if we’d divide the glass in 10 or 20 or 1000 or a zillion ‘very thin’ sections: there would be many more arrows, but they would be much smaller ones, and they would cover the same circular segment: its two endpoints would define the same arc, and the same chord on the circle that we can draw when extending that circular segment. Indeed, the six little arrows define a circle, and that’s the key to understanding what happens in the first chapter of Feynman’s QED, where he adds two arrows only, but with a reversal of the direction of the ‘front reflection’ arrow. Here there’s no confusion – Feynman (or Leighton) eloquently describes what’s being done:
“There is a mathematical trick we can use to get the same answer [i.e. the same final arrow]: Connecting the arrows in order from 1 to 6, we get something like an arc, or part of a circle. The final arrow forms the chord of this arc. If we draw arrows from the center of the ‘circle’ to the tail of arrow 1 and to the head of arrow 6, we get two radii. If the radius arrow from the center to arrow 1 is turned 180° (“subtracted”), then it can be combined with the other radius arrow to give us the same final arrow! That’s what I was doing in the first lecture: these two radii are the two arrows I said represented the ‘front surface’ and ‘back surface’ reflections. They each have the famous length of 0.2.”
That’s what’s shown in part (d) of the illustration above and, in case you’re still wondering what’s going on, the illustration below should help you to make your own drawings now.
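Here’s that ‘mathematical trick’ numerically – the arrow length and turning angle are made-up values of mine: six equal arrows, each turned a fixed angle further than the last, trace an arc, and the final arrow (the chord) is exactly what you get by reversing one radius arrow and adding it to the other.

```python
import cmath

# Six equal arrows, each turned a fixed angle further than the last, so
# that, laid tip to tail, they trace an arc (lengths and angles are mine).
n, ell, step, start = 6, 0.1, 0.3, 1.0   # count, length, turn (rad), start angle
arrows = [ell * cmath.exp(1j * (start + i * step)) for i in range(n)]
final = sum(arrows)                      # Feynman's 'final arrow' = the chord

# Center of the circle on which the tip-to-tail points lie (geometric series).
center = -ell * cmath.exp(1j * start) / (cmath.exp(1j * step) - 1)
radius_to_tail = 0 - center              # radius arrow to the tail of arrow 1
radius_to_head = final - center          # radius arrow to the head of arrow 6

# The trick: reverse one radius ('front surface') and add it to the other.
print(final)
print(radius_to_head + (-radius_to_tail))        # the very same final arrow
print(abs(radius_to_tail), abs(radius_to_head))  # equal lengths: both are radii
```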
So… That explains the phenomenon Feynman wanted to explain, which is a phenomenon that cannot be explained in classical physics. Let me copy the original here:
Partial reflection by glass—a phenomenon that cannot be explained in classical physics? Really?
You’re right to raise an objection: partial reflection by glass can, in fact, be explained by the classical theory of light as an electromagnetic wave. The assumption then is that light is effectively being reflected by both the front and back surface and the reflected waves combine or cancel out (depending on the thickness of the glass and the angle of reflection indeed) to match the observed pattern. In fact, that’s how the phenomenon was explained for hundreds of years! The point to note is that the wave theory of light collapsed as technology advanced, and experiments could be made with very weak light hitting photomultipliers. As Feynman writes: “As the light got dimmer and dimmer, the photomultipliers kept making full-sized clicks—there were just fewer of them. Light behaved as particles!”
The point is that a photon behaves like an electron when going through two slits: it interferes with itself! As Feynman notes, we do not have any ‘common-sense’ theory to explain what’s going on here. We only have quantum mechanics, and quantum mechanics is an “uncommon-sensy” theory: a “strange” or even “absurd” theory, that looks “cockeyed” and incorporates “crazy ideas”. But… It works.
Now that we’re here, I might just as well add a few more paragraphs to fully summarize this lovely publication – if only because summarizing stuff like this helps me to come to terms with understanding things better myself!
Calculating amplitudes: the basic actions
So it all boils down to calculating amplitudes: an event is divided into alternative ways of how the event can happen, and the arrows for each way are ‘added’. Now, every way an event can happen can be further subdivided into successive steps. The amplitudes for these steps are then ‘multiplied’. For example, the amplitude for a photon to go from A to C via B is the ‘product’ of the amplitude to go from A to B and the amplitude to go from B to C.
I marked the terms ‘multiplied’ and ‘product’ with apostrophes, as if to say it’s not a ‘real’ product. But it is an actual multiplication: it’s the product of two complex numbers. Feynman does not explicitly compare this product to other products, such as the dot (•) or cross (×) product of two vectors, but he uses the ∗ symbol for multiplication here, which clearly distinguishes V∗W from V•W or V×W indeed or, more simply, from the product of two ordinary numbers. [Ordinary numbers? Well… With ‘ordinary’ numbers, I mean real numbers, of course, but once you get used to complex numbers, you won’t like that term anymore, because complex numbers start feeling just as ‘real’ as other numbers – especially when you get used to the idea of those complex-valued wave functions underneath reality.]
Now, multiplying complex numbers, or ‘arrows’ to use QED’s simpler language, consists of adding their angles and multiplying their lengths. And because the arrows here all have a length smaller than one (their square cannot be larger than one, because that square is a probability, i.e. a (real) number between 0 and 1), Feynman defines successive multiplication as successive ‘shrinks and turns’ of the unit arrow. That all makes sense – very much sense.
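If you want to see a ‘shrink and turn’ in action, here’s a two-step check – the amplitudes are toy values of my own choosing:

```python
import cmath, math

# 'Shrink and turn': multiplying two arrows multiplies their lengths
# and adds their angles. The two amplitudes below are toy values of mine.
step1 = cmath.rect(0.8, math.radians(30))   # first step of the event
step2 = cmath.rect(0.5, math.radians(45))   # second step of the event

both = step1 * step2                  # amplitude for the two steps in succession
print(abs(both))                        # 0.4  = 0.8 * 0.5  (the shrink)
print(math.degrees(cmath.phase(both)))  # 75.0 = 30 + 45    (the turn)
```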
But what’s the basic action? As Feynman puts the question: “How far can we push this process of splitting events into simpler and simpler subevents? What are the smallest possible bits and pieces? Is there a limit?” He immediately answers his own question. There are three ‘basic actions’:
1. A photon goes from one point (in spacetime) to another: this amplitude is denoted by P(A to B).
2. An electron goes from one point to another: E(A to B).
3. An electron emits and/or absorbs a photon: this is referred to as a ‘junction’ or a ‘coupling’, and the amplitude for this is denoted by the symbol j, i.e. the so-called junction number.
How do we find the amplitudes for these?
The amplitudes for (1) and (2) are given by so-called propagator functions, which give you the probability amplitude for a particle to travel from one place to another in a given time indeed, or to travel with a certain energy and momentum. Judging from the Wikipedia article on these functions, the subject-matter is horrendously complicated, and the formulas are too, even if Feynman says it’s ‘very simple’ – for a photon, that is. The key point to note is that any path is possible. Moreover, there are also amplitudes for photons to go faster or slower than the speed of light (c)! However, these amplitudes make smaller contributions, and cancel out over longer distances. The same goes for the crooked paths: the amplitudes cancel each other out as well.
What remains are the ‘nearby paths’. In my previous post (check the section on electromagnetic radiation), I noted that, according to classical wave theory, a light wave does not occupy any physical space: we have electric and magnetic field vectors that oscillate in a direction that’s perpendicular to the direction of propagation, but these do not take up any space. In quantum mechanics, the situation is quite different. As Feynman puts it: “When you try to squeeze light too much [by forcing it to go through a small hole, for example, as illustrated below], it refuses to cooperate and begins to spread out.” He explains this in the text below the second drawing: “There are not enough arrows representing the paths to Q to cancel each other out.”
Not enough arrows? We can subdivide space in as many paths as we want, can’t we? Do probability amplitudes take up space? And now that we’re asking the tougher questions, what’s a ‘small’ hole? What’s ‘small’ and what’s ‘large’ in this funny business?
Unfortunately, there’s not much of an attempt in the booklet to try to answer these questions. One can begin to formulate some kind of answer when doing some more thinking about these wave functions. To be precise, we need to start looking at their wavelength. The frequency of a typical photon (and, hence, of the wave function representing that photon) is astronomically high. For visible light, it’s in the range of 430 to 790 terahertz, i.e. 430–790×10¹² Hz. We can’t imagine such incredible numbers. Because the frequency is so high, the wavelength is unimaginably small. There’s a very simple and straightforward relation between wavelength (λ) and frequency (ν) indeed: c = λν. In words: the speed of a wave is the wavelength (i.e. the distance (in space) of one cycle) times the frequency (i.e. the number of cycles per second). So visible light has a wavelength in the range of 390 to 700 nanometer, i.e. 390–700 billionths of a meter. A meter is a rather large unit, you’ll say, so let me express it differently: it’s less than one micrometer, and a micrometer itself is only one thousandth of a millimeter. So, no, we can’t imagine that distance either.
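The arithmetic is a one-liner – with rounded constants, so the endpoints come out at roughly 380 and 698 nm rather than the 390–700 I quoted:

```python
# c = wavelength * frequency, so wavelength = c / frequency.
c = 3e8                                     # speed of light, m/s (rounded)
for nu in (430e12, 790e12):                 # edges of the visible range, Hz
    print(nu, "Hz ->", c / nu * 1e9, "nm")  # roughly 698 nm and 380 nm
```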
That being said, that wavelength is there, and it does imply that some kind of scale is involved. A wavelength covers one full cycle of the oscillation: it means that, if we travel one wavelength in space, our ‘arrow’ will point in the same direction again. Both drawings above (Figure 33 and 34) suggest the space between the two blocks is less than one wavelength. It’s a bit hard to make sense of the direction of the arrows but note the following:
1. The phase difference between (a) the ‘arrow’ associated with the straight route (i.e. the ‘middle’ path) and (b) the ‘arrow’ associated with the ‘northern’ or ‘southern’ route (i.e. the ‘highest’ and ‘lowest’ path) in Figure 33 is like a quarter of a full turn, i.e. 90°. [Note that the arrows for the northern and southern route to P point in the same direction, because they are associated with the same timing. The same is true for the two arrows in-between the northern/southern route and the middle path.]
2. In Figure 34, the phase difference between the longer routes and the straight route is much less, like 10° only.
Now, the calculations involved in these analyses are quite complicated but you can see the explanation makes sense: the gap between the two blocks is much narrower in Figure 34 and, hence, the geometry of the situation does imply that the phase difference between the amplitudes associated with the ‘northern’ and ‘southern’ routes to Q is much smaller than the phase difference between those amplitudes in Figure 33. To be precise,
1. The phase difference between (a) the ‘arrow’ associated with the ‘northern route’ to Q and (b) the ‘arrow’ associated with the ‘southern’ route to Q (i.e. the ‘highest’ and ‘lowest’ path) in Figure 33 is like three quarters of a full turn, i.e. 270°. Hence, the final arrow is very short indeed, which means that the probability of the photon going to Q is very low indeed. [Note that the arrows for the northern and southern route no longer point in the same direction, because they are associated with very different timings: the ‘southern route’ is shorter and, hence, faster.]
2. In Figure 34, we have a phase difference between the shortest and the longest route that is like 60° only. Hence, the final arrow is quite sizable, and the probability of the photon going to Q is, accordingly, quite substantial.
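The rule behind those two numbers is simply that the angle between two arrows is the extra path length, measured in wavelengths, times 360°. Here it is with toy numbers of mine that reproduce the 270° and 60° above:

```python
# The angle between two arrows is the extra path length, measured in
# wavelengths, times a full turn of 360 degrees.
def phase_diff_deg(extra_path_nm, wavelength_nm):
    return 360.0 * extra_path_nm / wavelength_nm

# Toy numbers of my own that reproduce the two cases above:
print(phase_diff_deg(375.0, 500.0))   # 270.0 -> the arrows nearly cancel
print(phase_diff_deg(83.3, 500.0))    # ~60.0 -> a sizable final arrow
```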
OK… What did I say here about P(A to B)? Nothing much. I basically complained about the way Feynman (or Leighton, more probably) explained the interference or diffraction phenomenon and tried to do a better job before tackling the subject indeed: how do we get that P(A to B)?
A photon can follow any path from A to B, including the craziest ones (as shown below). Any path? Good players give a billiard ball extra spin that may make the ball move in a curved trajectory, and will also affect its collision with any other ball – but a trajectory like the one below? Why would a photon suddenly take a sharp turn left, or right, or up, or down? What’s the mechanism here? What are the ‘wheels and gears inside’ of the photon that (a) make a photon choose this path in the first place and (b) allow it to whirl, swirl and twirl like that?
We don’t know. In fact, the question may make no sense, because we don’t know what actually happens when a photon travels through space. We know it leaves as a lump of energy, and we know it arrives as a similar lump of energy. When we actually put a detector to check which path is followed – by putting special detectors at the slits in the famous double-slit experiment, for example – the interference pattern disappears. So… Well… We don’t know how to describe what’s going on: a photon is not a billiard ball, and it’s not a classical electromagnetic wave either. It is neither. The only thing that we know is that we get probabilities that match the results of experiment if we accept these nonsensical assumptions and do all of the crazy arithmetic involved. Let me get back to the lesson.
Photons can also travel faster or slower than the speed of light (c is some 3×10⁸ meters per second but, in our special time unit, it’s equal to one). Does that violate relativity? It doesn’t, apparently, but for the reasoning behind that I must, once again, refer you to more sophisticated writing.
In any case, if the mathematicians and physicists have to take into account both of these assumptions (any path is possible, and speeds higher or lower than c are possible too!), they must be looking at some kind of horrendous integral, mustn’t they?
They are. When everything is said and done, that propagator function is some monstrous integral indeed, and I can’t explain it to you in a couple of words – if only because I am struggling with it myself. 🙂 So I will just believe Feynman when he says that, when the mathematicians and physicists are finished with that integral, we do get some simple formula which depends on the value of the so-called spacetime interval between two ‘points’ – let’s just call them 1 and 2 – in space and time. You’ve surely heard about it before: it’s denoted by s² or I (or whatever) and it’s zero if an object moves at the speed of light, which is what light is supposed to do – but so we’re dealing with a different situation here. 🙂 To be precise, I consists of two parts:
1. The distance d between the two points (1 and 2), i.e. Δr, which is just the square root of d² = Δr² = (x₂–x₁)² + (y₂–y₁)² + (z₂–z₁)². [This formula is just a three-dimensional version of the Pythagorean Theorem.]
2. The ‘distance’ (or difference) in time, which is usually expressed in those ‘equivalent’ time units that we introduced above already, i.e. the time that light – traveling at the speed of light 🙂 – needs to travel one meter. We will usually see that component of I in a squared version too: Δt² = (t₂–t₁)², or, if time is expressed in the ‘old’ unit (i.e. seconds), then we write c²Δt² = c²(t₂–t₁)².
Now, the spacetime interval itself is defined as the excess of the squared distance (in space) over the squared time difference:
s² = I = Δr² – Δt² = (x₂–x₁)² + (y₂–y₁)² + (z₂–z₁)² – (t₂–t₁)²
You know we can then define time-like, space-like and light-like intervals, and these, in turn, define the so-called light cone. The spacetime interval can be negative, for example. In that case, Δt² will be greater than Δr², so there is no ‘excess’ of distance over time: it means that the time difference is large enough to allow for a cause–effect relation between the two events, and the interval is said to be time-like. In any case, that’s not the topic of this post, and I am sorry I keep digressing.
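That classification takes just a few lines – the helper names are mine, time is measured in meters (i.e. we use c·t), and the sign convention is the one above (distance squared minus time squared):

```python
def interval(dx, dy, dz, dt):
    """Squared spacetime interval; time already in meters (t -> c*t)."""
    return dx**2 + dy**2 + dz**2 - dt**2

def classify(i):
    if i == 0:
        return "light-like"
    return "space-like" if i > 0 else "time-like"

print(classify(interval(3, 0, 0, 3)))  # light-like: a photon's path
print(classify(interval(1, 0, 0, 5)))  # time-like: cause and effect possible
print(classify(interval(5, 0, 0, 1)))  # space-like: no signal can bridge it
```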
The point to note is that the formula for the propagator favors light-like intervals: they are associated with large arrows. Space- and time-like intervals, on the other hand, will contribute much smaller arrows. In addition, the arrows for space- and time-like intervals point in opposite directions, so they will cancel each other out. So, when everything is said and done, over longer distances, light does tend to travel in a straight line and at the speed of light. At least, that’s what Feynman tells us, and I tend to believe him. 🙂
But so where’s the formula? Feynman doesn’t give it, probably because it would indeed confuse us. Just google ‘propagator for a photon’ and you’ll see what I mean. He does integrate the above conclusions in that illustration (b) though. What illustration?
Oh… Sorry. You probably forgot what I am trying to do here, but so we’re looking at that analysis of partial reflection of light by glass. Let me insert it once again so you don’t have to scroll all the way up.
You’ll remember that Feynman divided the glass sheet into five sections and, hence, there are six points from which the photon can be scattered into the detector at A: X1 to X6. So that makes six possible paths: these paths are all straight (so Feynman makes abstraction of all of the crooked paths indeed), and the other assumption is that the photon effectively traveled at the speed of light, whatever path it took (so Feynman also assumes the amplitudes for speeds higher or lower than c cancel each other out). So that explains the difference in time at emission from the light source. The longest path is the path to point X6 and then back up to the detector. If the photon had taken that path, it would have had to be emitted earlier in time – earlier as compared to the other possibilities, which take less time. So it would have been emitted at T = T6. The direction of the ‘arrow’ is like one o’clock. The shorter paths are associated with shorter times (the difference between the time of arrival and departure is shorter), and so T5 is associated with an arrow in the 12 o’clock direction, T4 with an arrow in the 11 o’clock direction, and so on, down to T1, which points in the 9 o’clock direction.
But… What? These arrows also include the reflection, i.e. the interaction between the photon and some electron in the glass, don’t they? […] Right you are. Sorry. So… Yes. The event above involves four ‘basic actions’:
1. A photon is emitted by the source at a time T = T1, T2, T3, T4, T5 or T6: we don’t know. Quantum-mechanical uncertainty. 🙂
2. It goes from the source to one of the points X = X1, X2, X3, X4, X5 or X6 in the glass: we don’t know which one, because we don’t have a detector there.
3. The photon interacts with an electron at that point.
4. It makes its way back up to the detector at A.
Step 1 does not have any amplitude. It’s just the start of the event. Well… We start with the unit arrow pointing north actually, so its length is one and its direction is 12 o’clock. And so we’ll shrink and turn it, i.e. multiply it with other arrows, in the next steps.
Steps 2 and 4 are straightforward and are associated with arrows of the same length. Their direction depends on the distance traveled and/or the time of emission: it amounts to the same because we assume the speed is constant and exactly the same for the six possibilities (that speed is c = 1 obviously). But what length? Well… Some length according to that formula which Feynman didn’t give us. 🙂
So now we need to analyze the third of those four basic actions: a ‘junction’ or ‘coupling’ between an electron and a photon. At this point, Feynman embarks on a delightful story highlighting the difficulties involved in calculating that amplitude. A photon can travel following crooked paths and at devious speeds, but an electron is even worse: it can take what Feynman refers to as ‘one-hop flights’, ‘two-hop flights’, ‘three-hop flights’,… any ‘n-hop flight’ really. Each stop involves an additional amplitude, which is represented by n², with n some number that has been determined from experiment. The formula for E(A to B) then becomes a series of terms: P(A to B) + P(A to C)∗n²∗P(C to B) + P(A to D)∗n²∗P(D to E)∗n²∗P(E to B) + …
P(A to B) is the ‘one-hop flight’ here, while C, D and E are intermediate points, and P(A to C)∗n²∗P(C to B) and P(A to D)∗n²∗P(D to E)∗n²∗P(E to B) are the ‘two-hop’ and ‘three-hop’ flights respectively. Note that this calculation has to be made for all possible intermediate points C, D, E and so on. To make matters worse, the theory assumes that electrons can emit and absorb photons along the way, and then there’s a host of other problems, which Feynman tries to explain in the last and final chapter of his little book. […]
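Just to make the structure of that series tangible – and nothing more than that – here is a toy version in Python. The P function below is a made-up stand-in (remember: Feynman never gives us the real propagator), and the n² value is equally made up; the point is only to show how the one-hop term and the two-hop corrections get added up.

```python
import cmath

n2 = 0.05   # toy stand-in for Feynman's experimentally determined n² factor

def P(a, b):
    """Made-up 'free propagation' amplitude between points a and b:
    NOT the real propagator - it just shrinks and turns with distance,
    the way an arrow should."""
    d = abs(b - a)
    return cmath.exp(1j * d) / (1 + d)

def E(a, b, stops):
    """One-hop amplitude plus all two-hop corrections (series truncated)."""
    amp = P(a, b)                         # the 'one-hop flight'
    for c in stops:                       # the 'two-hop flights' via each stop C
        amp += P(a, c) * n2 * P(c, b)
    return amp

stops = [0.5 * k for k in range(1, 10)]   # a coarse grid of intermediate points
print(E(0.0, 5.0, stops))
```

The real thing sums over all possible intermediate points in spacetime (and over the three-hop terms and beyond), but the bookkeeping is exactly this.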
Hey! Stop it!
What?
You’re talking about E(A to B) here. You’re supposed to be talking about that junction number j.
Oh… Sorry. You’re right. Well… That junction number j is about –0.1. I know that looks like an ordinary number, but it’s an amplitude, so you should interpret it as an arrow. When you multiply it with another arrow, it amounts to a shrink to one-tenth, and half a turn. Feynman also entertains us with the difficulties of calculating this number but, you’re right, I shouldn’t be trying to copy him here – if only because it’s about time I finish this post. 🙂
So let me conclude it indeed. We can apply the same transformation (i.e. we multiply with j) to each of the six arrows we’ve got so far, and the result is those six arrows next to the time axis in illustration (b). And then we combine them to get that arc, and then we apply that mathematical trick to show we get the same result as in a classical wave-theoretical analysis of partial reflection.
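If the ‘shrink and turn’ talk sounds abstract, complex numbers make it concrete. In the sketch below, the six arrows are just complex numbers pointing in the clock directions mentioned above; their length (0.2) is my own placeholder, since we don’t have the propagator formula. Multiplying each by j = –0.1 shrinks it to a tenth and flips it half a turn, and the probability is the squared length of the combined arrow.

```python
import cmath, math

def clock_arrow(hour, length=1.0):
    """An 'arrow' pointing in a clock direction
    (12 o'clock = straight up; each hour = 30 degrees clockwise)."""
    angle = math.pi / 2 - hour * math.pi / 6
    return length * cmath.exp(1j * angle)

j = -0.1   # the junction number: multiplying by it = shrink to 1/10 + half a turn

# Placeholder lengths (0.2) for the six arrows at the clock directions above:
arrows = [clock_arrow(h, 0.2) for h in (1, 12, 11, 10, 9, 8)]

final = sum(a * j for a in arrows)   # shrink-and-turn each arrow, then add them up
print(abs(final) ** 2)               # the probability is the squared final length
```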
Done. […] Are you happy now?
[…] You shouldn’t be. There are so many questions that have been left unanswered. For starters, Feynman never gives that formula for the length of P(A to B), so we have no clue about the length of these arrows and, hence, about that arc. If physicists do know their length, it seems to have been calculated backwards – from those 0.2 arrows used in the classical wave theory of light. Feynman is actually quite honest about that, and simply writes:
“The radius of the arc [i.e. the arc that determines the final arrow] evidently depends on the length of the arrow for each section, which is ultimately determined by the amplitude S that an electron in an atom of glass scatters a photon. This radius can be calculated using the formulas for the three basic actions. […] It must be said, however, that no direct calculation from first principles for a substance as complex as glass has actually been done. In such cases, the radius is determined by experiment. For glass, it has been determined from experiment that the radius is approximately 0.2 (when the light shines directly onto the glass at right angles).”
Well… OK. I think that says enough. So we have a theory – or first principles at least – but we can’t use them to calculate. That actually sounds a bit like metaphysics to me. 🙂 In any case… Well… Bye for now!
But… Hey! You said you’d analyze how light goes straight through the glass as well?
Yes. I did. But I don’t feel like doing that right now. I think we’ve got enough stuff to think about right now, don’t we? 🙂
# End of the Road to Reality?
Pre-scriptum (dated 26 June 2020): This post did not suffer from the DMCA take-down of some material. It is, therefore, still quite readable—even if my views on these matters have evolved quite a bit as part of my realist interpretation of QM. I now think the idea of force-carrying particles (bosons) is quite medieval. Moreover, I think the Higgs particle and other bosons (except for the photon and the neutrino) are just short-lived transients or resonances. Disequilibrium states, in other words. One should not refer to them as particles.
Original post:
Or the end of theoretical physics?
In my previous post, I mentioned the Goliath of science and engineering: the Large Hadron Collider (LHC), built by the European Organization for Nuclear Research (CERN) under the Franco-Swiss border near Geneva. I actually started uploading some pictures, but then I realized I should write a separate post about it. So here we go.
The first image (see below) shows the LHC tunnel, while the other shows (a part of) one of the two large general-purpose particle detectors that are part of this Large Hadron Collider. A detector is the thing that’s used to look at those collisions. This is actually the smaller of the two general-purpose detectors: it’s the so-called CMS detector (the other one is the ATLAS detector), and it’s ‘only’ 21.6 meter long and 15 meter in diameter – and it weighs about 12,500 tons. But so it did detect a Higgs particle – just like the ATLAS detector. [That’s actually not 100% sure but it was sure enough for the Nobel Prize committee – so I guess that should be good enough for us common mortals :-)]
The picture above shows one of these collisions in the CMS detector. It’s not the one with the trace of the Higgs particle though. In fact, I have not found any image that actually shows the Higgs particle: the closest things to such an image are some impressionistic images on the ATLAS site. See http://atlas.ch/news/2013/higgs-into-fermions.html
In case you wonder what’s being scattered here… Well… All kinds of things – but so the original collision is usually between protons (so these are hydrogen nuclei, i.e. H⁺ ions), although the LHC can produce other nucleon beams as well (collectively referred to as hadrons). These protons have energy levels of 4 TeV (tera-electronVolt: 1 TeV = 1000 GeV = 1 trillion eV = 1×10¹² eV).
Now, let’s think about scale once again. Remember (from that same previous post) that we calculated a wavelength of 0.33 nanometer (1 nm = 1×10⁻⁹ m, so that’s a billionth of a meter) for an electron. Well, this LHC is actually exploring the sub-femtometer (fm) frontier. One femtometer (fm) is 1×10⁻¹⁵ m so that’s another million times smaller. Yes: so we are talking a millionth of a billionth of a meter. The size of a proton is an estimated 1.7 femtometer indeed and, as you surely know, a proton is a compact little thing occupying a very tiny space, so it’s not like an electron ‘cloud’ swirling around: it’s much smaller. In fact, quarks – three of them make up a proton (or a neutron) – are usually thought of as being just a little bit less than half that size – so that’s about 0.7 fm.
It may also help you to use the value I mentioned for high-energy electrons when I was discussing the LEP (the Large Electron-Positron Collider, which preceded the LHC) – so that was 104.5 GeV – and calculate the associated de Broglie wavelength using E = hf and λ = v/f. The velocity is close to c and, hence, if we plug everything in, we get a value close to 1.2×10⁻¹⁵ m indeed, so that’s the femtometer scale indeed. [If you don’t want to calculate anything, then just note we’re going from eV to giga-eV energy levels here, and so our wavelength decreases accordingly: one billion times smaller. Also remember (from the previous posts) that we calculated a wavelength of 0.33×10⁻⁹ m and an associated energy level of 70 eV for a slow-moving electron – i.e. one going at 2200 km per second ‘only’, i.e. less than 1% of the speed of light.] Also note that, at these energy levels, it doesn’t matter whether or not we include the rest mass of the electron: 0.511 MeV is nothing as compared to the GeV realm. In short, we are talking very very tiny stuff here.
But so that’s the LEP scale. I wrote that the LHC is probing things at the sub-femtometer scale. So how much sub-something is that? Well… Quite a lot: the LHC is looking at stuff at a scale that’s more than a thousand times smaller. Indeed, if collision experiments in the giga-electronvolt (GeV) energy range correspond to probing stuff at the femtometer scale, then tera-electronvolt (TeV) energy levels correspond to probing stuff that’s, once again, another thousand times smaller, so we’re looking at distances of less than a thousandth of a millionth of a billionth of a meter. Now, you can try to ‘imagine’ that, but you can’t really.
So what do we actually ‘see’ then? Well… ‘Nothing much’, one could say: all we can ‘see’ are traces of point-like ‘things’ being scattered, which then disintegrate or just vanish from the scene – as shown in the image above. In fact, as mentioned above, we do not even have such a clear-cut ‘trace’ of a Higgs particle: we’ve got a ‘kinda signal’ only. So that’s it? Yes. But then these images are beautiful, aren’t they? If only to remind ourselves that particle physics is about more than just a bunch of formulas. It’s about… Well… The essence of reality: its intrinsic nature so to say. So… Well…
Let me be skeptical. So we know all of that now, don’t we? The so-called Standard Model has been confirmed by experiment. We now know how Nature works, don’t we? We observe light (or, to be precise, radiation: most notably that cosmic background radiation that reaches us from everywhere) that originated nearly 14 billion years ago (to be precise: 380,000 years after the Big Bang – but what’s 380,000 years on this scale?) and so we can ‘see’ things that are 14 billion light-years away. In fact, things that were 14 billion light-years away: indeed, because of the expansion of the universe, they are further away now and so that’s why the so-called observable universe is actually larger. So we can ‘see’ everything we need to ‘see’ at the cosmic distance scale and now we can also ‘see’ all of the particles that make up matter, i.e. quarks and electrons mainly (we also have some other so-called leptons, like neutrinos and muons), and also all of the particles that make up anti-matter of course (i.e. antiquarks, positrons etcetera). As importantly – or even more – we can also ‘see’ all of the ‘particles’ carrying the forces governing the interactions between the ‘matter particles’ – which are collectively referred to as fermions, as opposed to the ‘force carrying’ particles, which are collectively referred to as bosons (see my previous post on Bose and Fermi). Let me quickly list them – just to make sure we’re on the same page:
1. Photons for the electromagnetic force.
2. Gluons for the so-called strong force, which explains why positively charged protons ‘stick’ together in nuclei – in spite of their electric charge, which should push them away from each other. [You might think it’s the neutrons that ‘glue’ them together but so, no, it’s the gluons.]
3. W⁺, W⁻, and Z bosons for the so-called ‘weak’ interactions (aka Fermi’s interaction), which explain how one type of quark can change into another, thereby explaining phenomena such as beta decay. [For example, carbon-14 will – through beta decay – spontaneously decay into nitrogen-14. Indeed, carbon-12 is the stable isotope, while carbon-14 has a half-life of 5,730 ± 40 years ‘only’ 🙂 and, hence, measuring how much carbon-14 is left in some organic substance allows us to date it (that’s what (radio)carbon-dating is about). As for the name, a beta particle can refer to an electron or a positron, so we can have β⁻ decay (e.g. the above-mentioned carbon-14 decay) as well as β⁺ decay (e.g. magnesium-23 into sodium-23). There’s also alpha and gamma decay but that involves different things. In any case… Let me end this digression within the digression.]
4. Finally, the existence of the Higgs particle – and, hence, of the associated Higgs field – has been predicted since 1964 already, but so it was only experimentally confirmed (i.e. we saw it, in the LHC) last year, so Peter Higgs – and a few others of course – got their well-deserved Nobel prize only 50 years later. The Higgs field gives fermions, and also the W⁺, W⁻, and Z bosons, mass (but not photons and gluons, and so that’s why the weak force has such short range – as compared to the electromagnetic and strong forces).
So there we are. We know it all. Sort of. Of course, there are many questions left – so it is said. For example, the Higgs particle does actually not explain the gravitational force, so it’s not the (theoretical) graviton, and so we do not have a quantum field theory for the gravitational force. [Just Google it and you’ll see why: there are theoretical as well as practical (experimental) reasons for that.] Secondly, while we do have a quantum field theory for all of the forces (or ‘interactions’ as physicists prefer to call them), there are a lot of constants in them (many more than just that Planck constant I introduced in my posts!) that seem to be ‘unrelated and arbitrary.’ I am obviously just quoting Wikipedia here – but it’s true.
Just look at it: three ‘generations’ of matter with various strange properties, four force fields (and some ‘gauge theory’ to provide some uniformity), bosons that have mass (the W⁺, W⁻, and Z bosons, and then the Higgs particle itself) but then photons and gluons don’t… It just doesn’t look good, and then Feynman himself wrote, just a few years before his death (QED, 1985, p. 128), that the math behind calculating some of these constants (the coupling constant j for instance, or the rest mass n of an electron), which he actually invented (it makes use of a mathematical approximation method called perturbation theory) and for which he got a Nobel Prize, is a “dippy process” and that “having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent“. He adds: “It’s surprising that the theory still hasn’t been proved self-consistent one way or the other by now; I suspect that renormalization [“the shell game that we play to find n and j” as he calls it] is not mathematically legitimate.” And so he writes this about quantum electrodynamics, not about “the rest of physics” (and so that’s quantum chromodynamics (QCD) – the theory of the strong interactions – and quantum flavordynamics (QFD) – the theory of weak interactions) which, he adds, “has not been checked anywhere near as well as electrodynamics.”
Wow! That’s a pretty damning statement, isn’t it? In short, all of the celebrations around the experimental confirmation of the Higgs particle cannot hide the fact that it all looks a bit messy. There are other questions as well – most of which I don’t understand so I won’t mention them. To make a long story short, physicists and mathematicians alike seem to think there must be some ‘more fundamental’ theory behind it all. But – Hey! – you can’t have it all, can you? And, of course, all these theoretical physicists and mathematicians out there do need to justify their academic budget, don’t they? And so all that talk about a Grand Unification Theory (GUT) is probably just what it is: talk. Isn’t it? Maybe.
The key question is probably easy to formulate: what’s beyond this scale of a thousandth of a proton diameter (0.001×10⁻¹⁵ m) – a thousandth of a millionth of a billionth of a meter that is. Well… Let’s first note that this so-called ‘beyond’ is a ‘universe’ which mankind (or let’s just say ‘we’) will never see. Never ever. Why? Because there is no way to go substantially beyond the 4 TeV energy levels that were reached last year – at great cost – in the world’s largest particle collider (the LHC). Indeed, the LHC is widely regarded not only as “the most complex and ambitious scientific project ever accomplished by humanity” (I am quoting a CERN scientist here) but – with a cost of more than 7.5 billion Euro – also as one of the most expensive ones. Taking into account inflation and all that, it was like the Manhattan project indeed (although scientists loathe that comparison). So we should not have any illusions: there will be no new super-duper LHC any time soon, and surely not during our lifetime: the current LHC is the super-duper thing!
Indeed, when I write ‘substantially‘ above, I really mean substantially. Just to put things in perspective: the LHC is currently being upgraded to produce 7 TeV beams (it was shut down for this upgrade, and it should come back on stream in 2015). That sounds like an awful lot (from 4 to 7 is +75%), and it is: it amounts to packing the kinetic energy of seven flying mosquitos (instead of four previously :-)) into each and every particle that makes up the beam. But that’s not substantial, in the sense that it is very much below the so-called GUT energy scale, which is the energy level above which, it is believed (by all those GUT theorists at least), the electromagnetic force, the weak force and the strong force will all be part and parcel of one and the same unified force. Don’t ask me why (I’ll know when I finished reading Penrose, I hope) but that’s what it is (if I should believe what I am reading currently that is). In any case, the thing to remember is that the GUT energy levels are in the 10¹⁶ GeV range, so that’s – sorry for all these numbers – ten trillion TeV. That amounts to pumping some 1.6 million Joule into each of those tiny point-like particles that make up our beam. So… No. Don’t even try to dream about it. It won’t happen. That’s science fiction – with the emphasis on fiction. [Also don’t dream about ten trillion flying mosquitos packed into one proton-sized super-mosquito either. :-)]
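The arithmetic is easy enough to check yourself. The snippet below just converts electronvolts to Joule; the mosquito comparison assumes the usual back-of-the-envelope figure of roughly 1.6×10⁻⁷ J (i.e. about 1 TeV) for the kinetic energy of a flying mosquito.

```python
eV = 1.602e-19     # one electronvolt, in Joule
TeV = 1e12 * eV    # ~1.6e-7 J: roughly one flying mosquito
GeV = 1e9 * eV

print(7 * TeV)     # ~1.1e-6 J: the energy of one 7 TeV proton
print(1e16 * GeV)  # ~1.6e6 J: the GUT scale, per particle(!)
```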
So what?
Well… I don’t know. Physicists refer to the zone beyond the above-mentioned scale (so things smaller than 0.001×10–15 m) as the Great Desert. That’s a very appropriate name I think – for more than one reason. And so it’s this ‘desert’ that Roger Penrose is actually trying to explore in his ‘Road to Reality’. As for me, well… I must admit I have great trouble following Penrose on this road. I’ve actually started to doubt that Penrose’s Road leads to Reality. Maybe it takes us away from it. Huh? Well… I mean… Perhaps the road just stops at that 0.001×10–15 m frontier?
In fact, that’s a view which one of the early physicists specialized in high-energy physics, Raoul Gatto, referred to as the zeroth scenario. I am actually not quoting Gatto here, but another theoretical physicist: Gerard ‘t Hooft, another Nobel prize winner (you may know him better because he’s a rather fervent Mars One supporter, but so here I am referring to his popular 1996 book In Search of the Ultimate Building Blocks). In any case, Gatto, and most other physicists, including ‘t Hooft (despite the fact that ‘t Hooft got his Nobel prize for his contribution to gauge theory – which, together with Feynman’s application of perturbation theory to QED, is actually the backbone of the Standard Model), firmly reject this zeroth scenario. ‘t Hooft himself thinks superstring theory (i.e. supersymmetric string theory – which has now been folded into M-theory or – back to the original term – just string theory: the terminology is quite confusing) holds the key to exploring this desert.
But who knows? In fact, we can’t – because of the above-mentioned practical problem of experimental confirmation. So I am likely to stay on this side of the frontier for quite a while – if only because there’s still so much to see here and, of course, also because I am just at the beginning of this road. 🙂 And then I also realize I’ll need to understand gauge theory and all that to continue on this road – which is likely to take me another six months or so (if not more) and then, only then, I might try to look at those little strings, even if we’ll never see them because… Well… Their theoretical diameter is the so-called Planck length. So what? Well… That’s equal to 1.6×10−35 m. So what? Well… Nothing. It’s just that 1.6×10−35 m is 1/10 000 000 000 000 000 of that sub-femtometer scale. I don’t even want to write this in trillionths of trillionths of trillionths etcetera because I feel that’s just not making any sense. And perhaps it doesn’t. One thing is for sure: that ‘desert’ that GUT theorists want us to cross is not just ‘Great’: it’s ENORMOUS!
Richard Feynman – another Nobel Prize scientist whom I obviously respect a lot – surely thought trying to cross a desert like that amounts to certain death. Indeed, he’s supposed to have said the following about string theorists, about a year or two before he died (way too young): “I don’t like that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation–a fix-up to say, ‘Well, it might be true.’ For example, the theory requires ten dimensions. Well, maybe there’s a way of wrapping up six of the dimensions. Yes, that’s all possible mathematically, but why not seven? When they write their equation, the equation should decide how many of these things get wrapped up, not the desire to agree with experiment. In other words, there’s no reason whatsoever in superstring theory that it isn’t eight out of the ten dimensions that get wrapped up and that the result is only two dimensions, which would be completely in disagreement with experience. So the fact that it might disagree with experience is very tenuous, it doesn’t produce anything; it has to be excused most of the time. It doesn’t look right.”
Hmm… Feynman and ‘t Hooft… Two giants in science. Two Nobel Prize winners – and for stuff that truly revolutionized physics. The amazing thing is that those two giants – who are clearly at loggerheads on this one – actually worked closely together on a number of other topics – most notably on the so-called Feynman–’t Hooft gauge, which – as far as I understand – is the one that is most widely used in quantum field calculations. But I’ll leave it at that – and just make a mental note of the terminology. The Great Desert… Probably an appropriate term. ‘t Hooft says that most physicists think that desert is full of tiny flowers. I am not so sure – but then I am not half as smart as ‘t Hooft. Much less actually. So I’ll just see where the road I am currently following leads me. With Feynman’s warning in mind, I should probably expect the road condition to deteriorate quickly.
Post scriptum: You will not be surprised to hear that there’s a word for 1×10⁻¹⁸ m: it’s called an attometer (with two t’s, and abbreviated as am). And beyond that we have zeptometer (1 zm = 1×10⁻²¹ m) and yoctometer (1 ym = 1×10⁻²⁴ m). In fact, these measures actually represent something: 20 yoctometer is the estimated radius of a 1 MeV neutrino – or, to be precise, it’s the radius of the cross section, which is “the effective area that governs the probability of some scattering or absorption event.” But so then there are no words anymore. The next measure is the Planck length: 1.62×10⁻³⁵ m – but so that’s still some 10¹¹ times smaller than a yoctometer. Unimaginable, isn’t it? Literally.
Note: A 1 MeV neutrino? Well… Yes. The estimated rest mass of an (electron) neutrino is tiny: at least 50,000 times smaller than the mass of the electron and, therefore, neutrinos are often assumed to be massless, for all practical purposes that is. However, just like the massless photon, they can carry high energy. High-energy gamma ray photons, for example, are also associated with MeV energy levels. Neutrinos are one of the many particles produced in high-energy particle collisions in particle accelerators, but they are present everywhere: they’re produced by stars (which, as you know, are nuclear fusion reactors). In fact, most neutrinos passing through Earth are produced by our Sun. The largest neutrino detector on Earth is called IceCube. It sits on the South Pole – or under it, as it’s suspended under the Antarctic ice, and it regularly captures high-energy neutrinos in the range of 1 to 10 TeV. Last year (in November 2013), it captured two with energy levels around 1000 TeV – so that’s the peta-electronvolt level (1 PeV = 1×10¹⁵ eV). If you think that’s amazing, it is. But also remember that 1 eV is only 1.6×10⁻¹⁹ Joule and, hence, 1 PeV is ‘only’ a ten-thousandth of a Joule. In other words, you would need at least ten thousand of them to briefly light up an LED. The PeV pair was dubbed Bert and Ernie and the illustration below (from IceCube’s website) conveys how the detectors sort of lit up when they passed. It was obviously a pretty clear ‘signal’ – but so the illustration also makes it clear that we don’t really ‘see’ at such small scale: we just know ‘something’ happened.
# An easy piece: introducing quantum mechanics and the wave function
Pre-scriptum (dated 26 June 2020): A quick glance at this piece – so many years after I wrote it – tells me it is basically OK. However, it is quite obvious that, in terms of interpreting the math, I have come a very long way. Still, I would recommend you go through the piece so as to get the basic math, indeed, and then you may or may not be ready for the full development of my realist or classical interpretation of QM. My manuscript may also be a fun read for you.
Original post:
After all those boring pieces on math, it is about time I got back to physics. Indeed, what’s all that stuff on differential equations and complex numbers good for? This blog was supposed to be a journey into physics, wasn’t it? Yes. But wave functions – functions describing physical waves (in classical mechanics) or probability amplitudes (in quantum mechanics) – are the solution to some differential equation, and they will usually involve complex-number notation. However, I agree we have had enough of that now. Let’s see how it works. By the way, the title of this post – An Easy Piece – is an obvious reference to (some of) Feynman’s 1965 Lectures on Physics, some of which were re-packaged in 1994 (six years after his death that is) in ‘Six Easy Pieces’ indeed – but, IMHO, it makes more sense to read all of them as part of the whole series.
Let’s first look at one of the most used mathematical shapes: the sinusoidal wave. The illustration below shows the basic concepts: we have a wave here – some kind of cyclic thing – with a wavelength λ, a (maximum) amplitude (or height) A0, and a so-called phase shift equal to φ. The Wikipedia definition of a wave is the following: “a wave is a disturbance or oscillation that travels through space and matter, accompanied by a transfer of energy.” Indeed, a wave transports energy as it travels (oh – I forgot to mention the speed or velocity of a wave (v) as an important characteristic of a wave), and the energy it carries is directly proportional to the square of the amplitude of the wave: E ∝ A² (this is true not only for waves like water waves, but also for electromagnetic waves, like light).
Let’s now look at how these variables get into the argument – literally: into the argument of the wave function. Let’s start with that phase shift. The phase shift is usually defined referring to some other wave or reference point (in this case the origin of the x and y axis). Indeed, the amplitude – or ‘height’ if you want (think of a water wave, or the strength of the electric field) – of the wave above depends on (1) the time t (not shown above) and (2) the location (x), but so we will need to have this phase shift φ in the argument of the wave function because at x = 0 we do not have a zero height for the wave. So, as we can see, we can shift the x-axis left or right with this φ. OK. That’s simple enough. Let’s look at the other independent variables now: time and position.
The height (or amplitude) of the wave will obviously vary both in time as well as in space. On this graph, we fixed time (t = 0) – and so it does not appear as a variable on the graph – and show how the amplitude y = A varies in space (i.e. along the x-axis). We could also have looked at one location only (x = 0 or x1 or whatever other location) and shown how the amplitude varies over time at that location only. The graph would be very similar, except that we would have a ‘time distance’ between two crests (or between two troughs or between any other two points separated by a full cycle of the wave) instead of the wavelength λ (i.e. a distance in space). This ‘time distance’ is the time needed to complete one cycle and is referred to as the period of the wave (usually denoted by the symbol T or T0 – in line with the notation for the maximum amplitude A0). In other words, we will also see time (t) as well as location (x) in the argument of this cosine or sine wave function. By the way, it is worth noting that it does not matter if we use a sine or cosine function because we can go from one to the other using the basic trigonometric identities cos θ = sin(π/2 – θ) and sin θ = cos(π/2 – θ). So all waves of the shape above are referred to as sinusoidal waves even if, in most cases, the convention is to actually use the cosine function to represent them.
So we will have x, t and φ in the argument of the wave function. Hence, we can write A = A(x, t, φ) = cos(x + t + φ) and there we are, right? Well… No. We’re adding very different units here: time is measured in seconds, distance in meter, and the phase shift is measured in radians (i.e. the unit of choice for angles). So we can’t just add them up. The argument of a trigonometric function (like this cosine function) is an angle and, hence, we need to get everything in radians – because that’s the unit we use to measure angles. So how do we do that? Let’s do it step by step.
First, it is worth noting that waves are usually caused by something. For example, electromagnetic waves are caused by an oscillating point charge somewhere, and radiate out from there. Physical waves – like water waves, or an oscillating string – usually also have some origin. In fact, we can look at a wave as a way of transmitting energy originating elsewhere. In the case at hand here – i.e. the nice regular sinusoidal wave illustrated above – it is obvious that the amplitude at some time t = t1 at some point x = x1 will be the same as the amplitude of that wave at point x = 0 some time ago. How much time ago? Well… The time (t) that was needed for that wave to travel from point x = 0 to point x = x1 is easy to calculate: indeed, if the wave originated at t = 0 and x = 0, then x1 (i.e. the distance traveled by the wave) will be equal to its velocity (v) multiplied by t1, so we have x1 = v.t1 (note that we assume the wave velocity is constant – which is a very reasonable assumption). In other words, inserting x1 and t1 in the argument of our cosine function should yield the same value as inserting zero for x and t. Distance and time can be substituted so to say, and that’s why we will have something like x – vt or vt – x in the argument in that cosine function: we measure both time and distance in units of distance so to say. [Note that x – vt and –(x – vt) = vt – x are equivalent because cos θ = cos(–θ).]
Does this sound fishy? It shouldn’t. Think about it. In the (electric) field equation for electromagnetic radiation (that’s one of the examples of a wave which I mentioned above), you’ll find the so-called retarded acceleration a(t – x/c) in the argument: that’s the acceleration (a) of the charge causing the electric field at point x to change not at time t but at time t – x/c. So that’s the retarded acceleration indeed: x/c is the time it took for the wave to travel from its origin (the oscillating point charge) to x and so we subtract that from t. [When talking electromagnetic radiation (e.g. light), the wave velocity v is obviously equal to c, i.e. the speed of light, or of electromagnetic radiation in general.] Of course, you will now object that t – x/c is not the same as vt – x, and you are right: we need time units in the argument of that acceleration function, not distance. We could get to distance units if we multiplied the time with the wave velocity v, but that’s complicated business because the velocity of that moving point charge is not a constant.
[…] I am not sure if I made myself clear here. If not, so be it. The thing to remember is that we need an input expressed in radians for our cosine function, not time, nor distance. Indeed, the argument in a sine or cosine function is an angle, not some distance. We will call that angle the phase of the wave, and it is usually denoted by the symbol θ – which we also used above. But so far we have been talking about amplitude as a function of distance, and we expressed time in distance units too – by multiplying it with v. How can we go from some distance to some angle? It is simple: we’ll multiply x – vt with 2π/λ.
Huh? Yes. Think about it. The wavelength will be expressed in units of distance – typically the meter (1 m) in the SI system of units, but it could also be the angstrom (10⁻¹⁰ m = 0.1 nm) or the nanometer (10⁻⁹ m = 10 Å). A wavelength of two meter (2 m) means that the wave only completes half a cycle per meter of travel. So we need to translate that into radians, which – once again – is the measure used to… well… measure angles, or the phase of the wave as we call it here. So what’s the ‘unit’ here? Well… Remember that we can add or subtract 2π (and any multiple of 2π, i.e. ± 2nπ with n = ±1, ±2, ±3,…) to the argument of all trigonometric functions and we’ll get the same value as for the original argument. In other words, a cycle characterized by a wavelength λ corresponds to the angle θ going around the origin and describing one full circle, i.e. 2π radians. Hence, it is easy: we can go from distance to radians by multiplying our ‘distance argument’ x – vt with 2π/λ. If you’re not convinced, just work it out for the example I gave: if the wavelength is 2 m, then 2π/λ equals 2π/2 = π. So traveling 6 meters along the wave – i.e. we’re letting x go from 0 to 6 m while fixing our time variable – corresponds to our phase θ going from 0 to 6π: both the ‘distance argument’ as well as the change in phase cover three cycles (three times two meter for the distance, and three times 2π for the change in phase) and so we’re fine. [Another way to think about it is to remember that the circumference of the unit circle is also equal to 2π (2π·r = 2π·1 in this case), so the ratio of 2π to λ measures how many times the circumference contains the wavelength.]
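That worked example is, once more, easy to check in a couple of lines of code – the little phase helper below is mine, but there’s nothing in it that isn’t in the formula above:

```python
import math

wavelength = 2.0                           # meter
def phase(x, t, v=1.0):
    """The 'distance argument' x - v*t converted into radians."""
    return (2 * math.pi / wavelength) * (x - v * t)

# Traveling 6 m along a 2 m wave (time fixed at t = 0) covers three full cycles:
print(phase(6.0, 0.0) / (2 * math.pi))     # 3.0
```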
In short, if we put time and distance in the (2π/λ)(x-vt) formula, we’ll get everything in radians and that’s what we need for the argument for our cosine function. So our sinusoidal wave above can be represented by the following cosine function:
A = A(x, t) = A0cos[(2π/λ)(x-vt)]
We could also write A = A0cosθ with θ = (2π/λ)(x-vt). […] Both representations look rather ugly, don’t they? They do. And it’s not only ugly: it’s not the standard representation of a sinusoidal wave either. In order to make it look ‘nice’, we have to introduce some more concepts here, notably the angular frequency and the wave number. So let’s do that.
The angular frequency is just like the… well… the frequency you’re used to, i.e. the ‘non-angular’ frequency f, as measured in cycles per second (i.e. in Hertz). However, instead of measuring change in cycles per second, the angular frequency (usually denoted by the symbol ω) will measure the rate of change of the phase with time, so we can write or define ω as ω = ∂θ/∂t. In this case, we can easily see that ω = –2πv/λ. [Note that we’ll take the absolute value of that derivative because we want to work with positive numbers for such properties of functions.] Does that look complicated? In doubt, just remember that ω is measured in radians per second and then you can probably better imagine what it is really. Another way to understand ω somewhat better is to remember that the product of ω and the period T is equal to 2π, so that’s a full cycle. Indeed, the time needed to complete one cycle multiplied with the phase change per second (i.e. per unit time) is equivalent to going round the full circle: 2π = ω.T. Because f = 1/T, we can also relate ω to f and write ω = 2π.f = 2π/T.
Likewise, we can measure the rate of change of the phase with distance, and that gives us the wave number k = ∂θ/∂x, which is like the spatial frequency of the wave. So it is just like the wavelength but then measured in radians per unit distance. From the function above, it is easy to see that k = 2π/λ. The interpretation of this equality is similar to the ω.T = 2π equality. Indeed, we have a similar equation for k: 2π = k.λ, so the wavelength (λ) is for k what the period (T) is for ω. If you’re still uncomfortable with it, just play a bit with some numerical examples and you’ll be fine.
To make a long story short, this, then, allows us to re-write the sinusoidal wave equation above in its final form (and let me include the phase shift φ again in order to be as complete as possible at this stage):
A(x, t) = A0cos(kx – ωt + φ)
You will agree that this looks much ‘nicer’ – and also more in line with what you’ll find in textbooks or on Wikipedia. 🙂 I should note, however, that we’re not adding any new parameters here. The wave number k and the angular frequency ω are not independent: this is still the same wave (A = A0cos[(2π/λ)(x-vt)]), and so we are not introducing anything more than the frequency and – equally important – the speed with which the wave travels, which is usually referred to as the phase velocity. In fact, it is quite obvious from the ω.T = 2π and the k = 2π/λ identities that kλ = ω.T and, hence, taking into account that λ is obviously equal to λ = v.T (the wavelength is – by definition – the distance traveled by the wave in one period), we find that the phase (or wave) velocity v is equal to the ratio of ω and k, so we have that v = ω/k. So x, t, ω and k could be re-scaled or so but their ratio cannot change: the velocity of the wave is what it is. In short, I am introducing two new concepts and symbols (ω and k) but there are no new degrees of freedom in the system so to speak.
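To see that k and ω really don’t add any degree of freedom, here’s a quick numerical check: pick a wavelength and a velocity (the values below are arbitrary), derive k and ω from them, and the ratio ω/k hands you the velocity right back.

```python
import math

A0, wavelength, v, phi = 1.0, 2.0, 3.0, 0.0   # amplitude, m, m/s, radians
k = 2 * math.pi / wavelength                  # wave number (radians per meter)
omega = k * v                                 # angular frequency (radians per second)

def A(x, t):
    return A0 * math.cos(k * x - omega * t + phi)

print(omega / k)       # 3.0 - the phase velocity v = ω/k, recovered
print(A(0.5, 0.1))     # the amplitude at x = 0.5 m and t = 0.1 s
```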
[At this point, I should probably say something about the difference between the phase velocity and the so-called group velocity of a wave. Let me do that in as brief a way as I can manage. Most real-life waves travel as a wave packet, aka a wave train. So that’s like a burst, or an “envelope” (I am shamelessly quoting Wikipedia here…), of “localized wave action that travels as a unit.” Such a wave packet has no single wave number or wavelength: it actually consists of a (large) set of waves with phases and amplitudes such that they interfere constructively only over a small region of space, and destructively elsewhere. The famous Fourier analysis (or infamous if you have problems understanding what it is really) decomposes this wave train in simpler pieces. While these ‘simpler’ pieces – which, together, add up to form the wave train – are all ‘nice’ sinusoidal waves (that’s why I call them ‘simple’), the wave packet as such is not. In any case (I can’t be too long on this), the speed with which this wave train itself is traveling through space is referred to as the group velocity. The phase velocity and the group velocity are usually very different: for example, a wave packet may be traveling forward (i.e. its group velocity is positive) but the phase velocity may be negative, i.e. traveling backward. However, I will stop here and refer to the Wikipedia article on group and phase velocity: it has wonderful illustrations which are much, much better than anything I could write here. Just one last point that I’ll use later: regardless of the shape of the wave (sinusoidal, sawtooth or whatever), we have a very obvious relationship relating wavelength and frequency to the (phase) velocity: v = λ.f, or f = v/λ. For example, a wave traveling 3 meter per second with a wavelength of 1 meter will obviously have a frequency of three cycles per second (i.e. 3 Hz). Let’s go back to the main story line now.]
With the rather lengthy ‘introduction’ to waves above, we are now ready for the thing I really wanted to present here. I will go much faster now that we have covered the basics. Let’s go.
From my previous posts on complex numbers (or from what you know on complex numbers already), you will understand that working with cosine functions is much easier when writing them as the real part of a complex number A0e^iθ = A0e^i(kx – ωt + φ). Indeed, A0e^iθ = A0(cosθ + i·sinθ) and so the cosine function above is nothing else but the real part of the complex number A0e^iθ. Working with complex numbers makes adding waves and calculating interference effects and whatever we want to do with these wave functions much easier: we just replace the cosine functions by complex numbers in all of the formulae, solve them (algebra with complex numbers is very straightforward), and then we look at the real part of the solution to see what is happening really. We don’t care about the imaginary part, because that has no relationship to the actual physical quantities – for physical and electromagnetic waves that is, or for any other problem in classical wave mechanics. Done. So, in classical mechanics, the use of complex numbers is just a mathematical tool.
Now, that is not the case for the wave functions in quantum mechanics: the imaginary part of a wave equation – yes, let me write one down here – such as Ψ = Ψ(x, t) = (1/x)e^i(kx – ωt) is very much part and parcel of the so-called probability amplitude that describes the state of the system here. In fact, this Ψ function is an example taken from one of Feynman’s first Lectures on Quantum Mechanics (i.e. Volume III of his Lectures) and, in this case, Ψ(x, t) = (1/x)e^i(kx – ωt) represents the probability amplitude of a tiny particle (e.g. an electron) moving freely through space – i.e. without any external forces acting upon it – to go from 0 to x and actually be at point x at time t. [Note how it varies inversely with the distance because of the 1/x factor, so that makes sense.] In fact, when I started writing this post, my objective was to present this example – because it illustrates the concept of the wave function in quantum mechanics in a fairly easy and relatively understandable way. So let’s have a go at it.
First, it is necessary to understand the difference between probabilities and probability amplitudes. We all know what a probability is: it is a real number between 0 and 1 expressing the chance of something happening. It is usually denoted by the symbol P. An example is the probability that monochromatic light (i.e. one or more photons with the same frequency) is reflected from a sheet of glass. [To be precise, this probability is anything between 0 and 16% (i.e. P = 0 to 0.16). In fact, this example comes from another fine publication of Richard Feynman – QED (1985) – in which he explains how we can calculate the exact probability, which depends on the thickness of the sheet.]
A probability amplitude is something different. A probability amplitude is a complex number (3 + 2i, or 2.6e^i1.34, for example) and – unlike its equivalent in classical mechanics – both the real and imaginary part matter. That being said, probabilities and probability amplitudes are obviously related: to be precise, one calculates the probability of an event actually happening by taking the square of the modulus (or the absolute value) of the probability amplitude associated with that event. Huh? Yes. Just let it sink in. So, if we denote the probability amplitude by Φ, then we have the following relationship:
P = |Φ|²
P = probability
Φ = probability amplitude
In addition, where we would add and multiply probabilities in the classical world (for example, to calculate the probability of an event which can happen in two different ways – alternative 1 and alternative 2 let’s say – we would just add the individual probabilities to arrive at the probability of the event happening in one or the other way, so P = P1 + P2), in the quantum-mechanical world we should add and multiply probability amplitudes, and then take the square of the modulus of that combined amplitude to calculate the combined probability. So, formally, the probability of a particle to reach a given state by two possible routes (route 1 or route 2 let’s say) is to be calculated as follows:
Φ = Φ1 + Φ2
and P = |Φ|² = |Φ1 + Φ2|²
Also, when we have only one route, but that one route consists of two successive stages (for example: to go from A to C, the particle would first have to go from A to B, and then from B to C, with different probabilities for stage AB and stage BC actually happening), we will not multiply the probabilities (as we would do in the classical world) but the probability amplitudes. So we have:
Φ = ΦAB·ΦBC
and P = |Φ|² = |ΦAB·ΦBC|²
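Here is what those two rules look like in code – with made-up amplitudes, obviously, just to show the arithmetic and the interference it produces:

```python
# Two alternative routes: ADD the amplitudes, then square the modulus...
phi1, phi2 = 0.3 + 0.4j, 0.1 - 0.2j
print(abs(phi1 + phi2) ** 2)             # 0.2  - the quantum-mechanical result
# ...which is NOT the sum of the individual probabilities (interference!):
print(abs(phi1) ** 2 + abs(phi2) ** 2)   # 0.3  - what classical rules would give

# Two successive stages: MULTIPLY the amplitudes, then square the modulus:
phi_AB, phi_BC = 0.6 + 0.0j, 0.0 + 0.5j
print(abs(phi_AB * phi_BC) ** 2)         # 0.09 = |ΦAB|²·|ΦBC|²
```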
In short, it’s the probability amplitudes (and, as mentioned, these are complex numbers, not real numbers) that are to be added and multiplied etcetera and, hence, the probability amplitudes act as the equivalent, so to say, in quantum mechanics, of the conventional probabilities in classical mechanics. The difference is not subtle. Not at all. I won’t dwell too much on this. Just re-read any account of the double-slit experiment with electrons which you may have read and you’ll remember how fundamental this is. [By the way, I was surprised to learn that the double-slit experiment with electrons has apparently only been done in 2012 in exactly the way Feynman described it. So when Feynman described it in his 1965 Lectures, it was still very much a ‘thought experiment’ only – even though a 1961 experiment (not mentioned by Feynman) had already clearly established the reality of electron interference.]
OK. Let’s move on. So we have this complex wave function in quantum mechanics and, as Feynman writes, “It is not like a real wave in space; one cannot picture any kind of reality to this wave as one does for a sound wave.” That being said, one can, however, get pretty close to ‘imagining’ what it actually is IMHO. Let’s go by the example which Feynman gives himself – on the very same page where he writes the above actually. The amplitude for a free particle (i.e. with no forces acting on it) with momentum p = mv to go from location r1 to location r2 is equal to
Φ12 = (1/r12)e^(i·p·r12/ħ) with r12 = r2 – r1
I agree this looks somewhat ugly again, but so what does it say? First, be aware of the difference between bold and normal type: I am writing p and v in bold type above because they are vectors: they have a magnitude (which I will denote by p and v respectively) as well as a direction in space. Likewise, r12 is a vector going from r1 to r2 (and r1 and r2 are space vectors themselves, obviously) and so r12 (non-bold) is the magnitude of that vector. Keeping that in mind, we know that the dot product p·r12 is equal to the product of the magnitudes of those vectors multiplied by cosα, with α the angle between those two vectors. Hence, p·r12 = p·r12·cosα. Now, if p and r12 have the same direction, the angle α will be zero and so cosα will be equal to one and so we just have p·r12 = p·r12 or, if we’re considering a particle going from 0 to some position x, p·r12 = px.
Now we also have Planck’s constant there, in its reduced form ħ = h/2π. As you can imagine, this 2π has something to do with the fact that we need radians in the argument. It’s the same as what we did with x in the argument of that cosine function above: if we have to express stuff in radians, then we have to absorb a factor of 2π in that constant. However, here I need to make an additional digression. Planck’s constant is obviously not just any constant: it is the so-called quantum of action. Indeed, it appears in what may well be the most fundamental relations in physics.
The first of these fundamental relations is the so-called Planck relation: E = hf. The Planck relation expresses the wave-particle duality of light (or electromagnetic waves in general): light comes in discrete quanta of energy (photons), and the energy of these ‘wave particles’ is directly proportional to the frequency of the wave, and the factor of proportionality is Planck’s constant.
The second fundamental relation, or relations – in plural – I should say, are the de Broglie relations. Indeed, Louis-Victor-Pierre-Raymond, 7th duc de Broglie, turned the above on its head: if the fundamental nature of light is (also) particle-like, then the fundamental nature of particles must (also) be wave-like. So he boldly associated a frequency f and a wavelength λ with all particles, such as electrons for example – but larger-scale objects, such as billiard balls, or planets, also have a de Broglie wavelength and frequency! The de Broglie relation determining the de Broglie frequency is – quite simply – the re-arranged Planck relation: f = E/h. So this relation relates the de Broglie frequency with energy. However, in the above wave function, we’ve got momentum, not energy. Well… Energy and momentum are obviously related, and so we have a second de Broglie relation relating momentum with wavelength: λ = h/p.
We’re almost there: just hang in there. 🙂 When we presented the sinusoidal wave equation, we introduced the angular frequency (ω) and the wave number (k), instead of working with f and λ. That’s because we want an argument expressed in radians. Here it’s the same. The two de Broglie equations have an equivalent using angular frequency and wave number: ω = E/ħ and k = p/ħ. So we’ll just use the second one (i.e. the relation with the momentum in it) to associate a wave number with the particle (k = p/ħ).
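As a sanity check, we can plug in the numbers for that slow-moving electron from the earlier posts (2200 km per second) and watch the 0.33 nanometer wavelength come out – a couple of lines of Python will do:

```python
h = 6.626e-34      # Planck's constant (J·s)
hbar = h / (2 * 3.141592653589793)
m_e = 9.109e-31    # electron mass (kg)
v = 2.2e6          # the 'slow' electron: 2200 km per second (m/s)

p = m_e * v        # momentum
print(h / p)       # λ = h/p ≈ 3.3e-10 m, i.e. 0.33 nanometer indeed
print(p / hbar)    # the associated wave number k = p/ħ (radians per meter)
```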
Phew! So, finally, we get that formula which we introduced a while ago already: Ψ(x) = (1/x)e^ikx, or, including time as a variable as well (we made abstraction of time so far):
Ψ(x, t) = (1/x)e^i(kx – ωt)
The formula above obviously makes sense. For example, the 1/x factor makes the probability amplitude decrease as we get farther away from where the particle started: in fact, this 1/x or 1/r variation is what we see with electromagnetic waves as well: the amplitude of the electric field vector E varies as 1/r and, because we’re talking some real wave here and, hence, its energy is proportional to the square of the field, the energy that the source can deliver varies inversely as the square of the distance. [Another way of saying the same is that the energy we can take out of a wave within a given conical angle is the same, no matter how far away we are: the energy flux is never lost – it just spreads over a greater and greater effective area. But let’s go back to the main story.]
We’ve got the math – I hope. But what does this equation mean really? What’s that de Broglie wavelength or frequency in reality? What wave are we talking about? Well… What’s reality? As mentioned above, the famous de Broglie relations associate a wavelength λ and a frequency f to a particle with momentum p and energy E, but it’s important to mention that the associated de Broglie wave function yields probability amplitudes. So it is, indeed, not a ‘real wave in space’ as Feynman would put it. It is a quantum-mechanical wave equation.
Huh? […] It’s obviously about time I add some illustrations here, and so that’s what I’ll do. Look at the two cases below. The case on top is pretty close to the situation I described above: it’s a de Broglie wave – so that’s a complex wave – traveling through space (in one dimension only here). The real part of the complex amplitude is in blue, and the green is the imaginary part. So the probability of finding that particle at some position x is the modulus squared of this complex amplitude. Now, this particular wave function ignores the 1/x variation and, hence, the squared modulus of A·e^i(kx – ωt) is equal to a constant. To be precise, it’s equal to A² (check it: the squared modulus of a complex number z equals the product of z and its complex conjugate, and so we get A² as a result indeed). So what does this mean? It means that the probability of finding that particle (an electron, for example) is the same at all points! In other words, we don’t know where it is! In the illustration below (top part), that’s shown as the (yellow) color opacity: the probability is spread out, just like the wave itself, so there is no definite position of the particle indeed.
[Note that the formula in the illustration above (which I took from Wikipedia once again) uses p instead of k as the factor in front of x. While it does not make a big difference from a mathematical point of view (ħ is just a factor of proportionality: k = p/ħ), it does make a big difference from a conceptual point of view and, hence, I am puzzled as to why the author of this article did this. Also, there is some variation in the opacity of the yellow (i.e. the color of our tennis (or ping pong) ball representing our ‘wavicle’) which shouldn’t be there because the probability associated with this particular wave function is a constant indeed: so there is no variation in the probability (when squaring the absolute value of a complex number, the phase factor does not come into play). Also note that, because all probabilities have to add up to 100% (or to 1), a wave function like this is quite problematic. However, don’t worry about it just now: just try to go with the flow.]
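You can check that constant-probability claim numerically, by the way: pick any (x, t) you like (the values below are arbitrary) and the squared modulus of A·e^i(kx – ωt) stubbornly comes out as A².

```python
import cmath

A, k, omega = 0.7, 2.0, 5.0        # arbitrary amplitude, wave number, frequency
for x, t in [(0.0, 0.0), (1.3, 0.2), (42.0, 7.0)]:
    psi = A * cmath.exp(1j * (k * x - omega * t))
    print(abs(psi) ** 2)           # always A² = 0.49: same probability everywhere
```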
By now, I must assume you shook your head in disbelief a couple of time already. Surely, this particle (let’s stick to the example of an electron) must be somewhere, yes? Of course.
The problem is that we gave an exact value to its momentum and its energy and, as a result, through the de Broglie relations, we also associated an exact frequency and wavelength to the de Broglie wave associated with this electron. Hence, Heisenberg’s Uncertainty Principle comes into play: if we have exact knowledge on momentum, then we cannot know anything about its location, and so that’s why we get this wave function covering the whole space, instead of just some region only. Sort of. Here we are, of course, talking about that deep mystery about which I cannot say much – if only because so many eminent physicists have already exhausted the topic. I’ll just state Feynman once more: “Things on a very small scale behave like nothing that you have any direct experience with. […] It is very difficult to get used to, and it appears peculiar and mysterious to everyone – both to the novice and to the experienced scientist. Even the experts do not understand it the way they would like to, and it is perfectly reasonable that they should not because all of direct, human experience and of human intuition applies to large objects. We know how large objects will act, but things on a small scale just do not act that way. So we have to learn about them in a sort of abstract or imaginative fashion and not by connection with our direct experience.” And, after describing the double-slit experiment, he highlights the key conclusion: “In quantum mechanics, it is impossible to predict exactly what will happen. We can only predict the odds [i.e. probabilities]. Physics has given up on the problem of trying to predict exactly what will happen. Yes! Physics has given up. We do not know how to predict what will happen in a given circumstance. It is impossible: the only thing that can be predicted is the probability of different events. It must be recognized that this is a retrenchment in our ideal of understanding nature. It may be a backward step, but no one has seen a way to avoid it.”
[…] That’s enough on this I guess, but let me – as a way to conclude this little digression – just quickly state the Uncertainty Principle in a more or less accurate version here, rather than all of the ‘descriptions’ which you may have seen of it: the Uncertainty Principle refers to any of a variety of mathematical inequalities asserting a fundamental limit (fundamental means it’s got nothing to do with observer or measurement effects, or with the limitations of our experimental technologies) to the precision with which certain pairs of physical properties of a particle (these pairs are known as complementary variables) such as, for example, position (x) and momentum (p), can be known simultaneously. More in particular, for position and momentum, we have that σxσp ≥ ħ/2 (and, in this formulation, σ is, obviously, the standard symbol for the standard deviation of our point estimates for x and p respectively).
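[To get a feel for the numbers: here’s a quick back-of-the-envelope sketch – the position uncertainty of 1 ångström is an assumed value, roughly the size of an atom – of the minimum momentum and velocity spread the inequality forces on an electron:]

```python
# Minimal sketch: the minimum momentum spread implied by sigma_x * sigma_p >= hbar/2.
hbar = 1.054571817e-34  # J·s
m_e = 9.109e-31         # kg, electron rest mass

sigma_x = 1e-10                 # m, assumed position uncertainty (~atom size)
sigma_p = hbar / (2 * sigma_x)  # kg·m/s, smallest allowed momentum spread
sigma_v = sigma_p / m_e         # m/s, corresponding velocity spread

print(f"sigma_p >= {sigma_p:.3e} kg·m/s")
print(f"sigma_v >= {sigma_v:.3e} m/s")  # ~5.8e5 m/s: huge at the atomic scale
```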
OK. Back to the illustration above. A particle that is to be found in some specific region – rather than just ‘somewhere’ in space – will have a probability amplitude resembling the wave equation in the bottom half: it’s a wave train, or a wave packet, and we can decompose it, using Fourier analysis, into a number of sinusoidal waves. So we do not have a unique wavelength for the wave train as a whole, and that means – as per the de Broglie equations – that there’s some uncertainty about its momentum (or its energy).
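[Again, a small numerical sketch may help – the Gaussian packet parameters below are assumed, illustrative values. We build a localized wave packet, decompose it with a Fourier transform, and check that a small spread in x comes with a broad spread in the wave number k:]

```python
import numpy as np

# A Gaussian wave packet centered at x = 0 with carrier wave number k0.
# sigma and k0 are illustrative values only.
x = np.linspace(-50, 50, 4096)
sigma, k0 = 2.0, 5.0
psi = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * k0 * x)

# Fourier transform: the packet decomposed into plane waves exp(ikx).
k = np.fft.fftshift(np.fft.fftfreq(x.size, d=x[1] - x[0])) * 2 * np.pi
phi = np.fft.fftshift(np.fft.fft(psi))

def spread(u, density):
    """Standard deviation of a (not yet normalized) probability density."""
    density = density / np.trapz(density, u)
    mean = np.trapz(u * density, u)
    return np.sqrt(np.trapz((u - mean)**2 * density, u))

dx = spread(x, np.abs(psi)**2)
dk = spread(k, np.abs(phi)**2)
print(dx, dk, dx * dk)  # dx*dk ≈ 0.5: a Gaussian packet saturates the bound
```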
I will let this sink in for now. In my next post, I will write some more about these wave equations. They are usually a solution to some differential equation – and that’s where my next post will connect with my previous ones (on differential equations). Just to say goodbye – as for now that is – I will just copy another beautiful illustration from Wikipedia. See below: it represents the (likely) space in which a single electron on the 5d atomic orbital of a hydrogen atom would be found. The solid body shows the places where the electron’s probability density (so that’s the squared modulus of the probability amplitude) is above a certain value – so it’s basically the area where the likelihood of finding the electron is higher than elsewhere. The hue on the colored surface shows the complex phase of the wave function.
It is a wonderful image, isn’t it? At the very least, it increased my understanding of the mystery surrounding quantum mechanics somewhat. I hope it helps you too. 🙂
Post scriptum 1: On the need to normalize a wave function
In this post, I wrote something about the need for probabilities to add up to 1. In mathematical terms, this condition will resemble something like

∫Rn |ψ0|2 dV = a2

In this integral, we’ve got – once again – the squared modulus of the wave function, and so that’s the probability of finding the particle somewhere. The integral just states that all of the probabilities, added up all over space (Rn), should add up to some finite number (a2). Hey! But that’s not equal to 1, you’ll say. Well… That’s a minor problem only: we can create a normalized wave function ψ out of ψ0 by simply dividing ψ0 by a, so we have ψ = ψ0/a, and then all is ‘normal’ indeed. 🙂
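[A quick numerical sketch of this normalization – with an arbitrary, made-up ψ0, of course:]

```python
import numpy as np

x = np.linspace(-20, 20, 2001)
psi0 = 3.0 * np.exp(-x**2 / 4) * np.exp(1j * 2 * x)  # arbitrary un-normalized psi0

a2 = np.trapz(np.abs(psi0)**2, x)   # total probability: a finite number a^2, not 1
psi = psi0 / np.sqrt(a2)            # divide by a to normalize

print(a2)                            # some finite number
print(np.trapz(np.abs(psi)**2, x))  # 1.0: probabilities now add up to 100%
```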
Post scriptum 2: On using colors to represent complex numbers
When inserting that beautiful 3D graph of that 5d atomic orbital (again acknowledging its source: Wikipedia), I wrote that “the hue on the colored surface shows the complex phase of the wave function.” Because this kind of visual representation of complex numbers will pop up in other posts as well (and you’ve surely encountered it a couple of times already), it’s probably useful to be explicit on what it represents exactly. Well… I’ll just copy the Wikipedia explanation, which is clear enough: “Given a complex number z = reiθ, the phase (also known as argument) θ can be represented by a hue, and the modulus r =|z| is represented by either intensity or variations in intensity. The arrangement of hues is arbitrary, but often it follows the color wheel. Sometimes the phase is represented by a specific gradient rather than hue.” So here you go…
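[For the curious: this hue-equals-phase convention is easy to reproduce. The sketch below – with f(z) = z2 as an arbitrary example function – colors the complex plane accordingly:]

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

# Domain coloring: hue encodes the phase (argument) of a complex number,
# brightness encodes the modulus.
x = np.linspace(-2, 2, 500)
X, Y = np.meshgrid(x, x)
Z = (X + 1j * Y) ** 2                      # example function: f(z) = z^2

hue = (np.angle(Z) + np.pi) / (2 * np.pi)  # phase mapped to [0, 1]
val = np.abs(Z) / (1 + np.abs(Z))          # modulus compressed to [0, 1)
img = hsv_to_rgb(np.dstack([hue, np.ones_like(hue), val]))

plt.imshow(img, extent=[-2, 2, -2, 2], origin='lower')
plt.title('Domain coloring of f(z) = z²: hue = phase, brightness = modulus')
plt.show()
```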
Post scriptum 3: On the de Broglie relations
The de Broglie relations are a wonderful pair. They’re obviously equivalent: energy and momentum are related, and wavelength and frequency are obviously related too through the general formula relating frequency, wavelength and wave velocity: fλ = v (the product of the frequency and the wavelength must yield the wave velocity indeed). However, when it comes to the relation between energy and momentum, there is a little catch. What kind of energy are we talking about? We were describing a free particle (e.g. an electron) traveling through space with no (other) charges acting on it – in other words, no potential acting upon it – and so we might be tempted to conclude that we’re talking about the kinetic energy (K.E.) here. So, at relatively low speeds (v), we could be tempted to use the equations p = mv and K.E. = p2/2m = mv2/2 (the one electron in a hydrogen atom travels at less than 1% of the speed of light, and so that’s a non-relativistic speed indeed) and try to go from one equation to the other with these simple formulas. Well… Let’s try it.
f = E/h according to de Broglie and, hence, substituting E with p2/2m and f with v/λ, we get v/λ = m2v2/2mh. Some simplification and re-arrangement should then yield the second de Broglie relation: λ = 2h/mv = 2h/p. So there we are. Well… No. The second de Broglie relation is just λ = h/p: there is no factor 2 in it. So what’s wrong? The problem is the energy equation: de Broglie does not use the K.E. formula. [By the way, you should note that the K.E. = mv2/2 equation is only an approximation for low speeds – low compared to c that is.] He takes Einstein’s famous E = mc2 equation (which I am tempted to explain now but I won’t) and just substitutes c, the speed of light, with v, the velocity of the slow-moving particle. This is a very fine but also very deep point which, frankly, I do not yet fully understand. Indeed, Einstein’s E = mc2 is obviously something much ‘deeper’ than the formula for kinetic energy. The latter has to do with forces acting on masses and, hence, obeys Newton’s laws – so it’s rather familiar stuff. As for Einstein’s formula, well… That’s a result from relativity theory and, as such, something that is much more difficult to explain. While the difference between the two energy formulas is just a factor of 1/2 (which is usually not a big problem when you’re just fiddling with formulas like this), it makes a big conceptual difference.
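[A two-line numerical check of that factor 2 – using the electron velocity of 2.2×106 m/s that we will also use below:]

```python
h = 6.626e-34   # J·s
m = 9.109e-31   # kg (electron)
v = 2.2e6       # m/s (assumed, ~1% of c)
p = m * v

lambda_de_broglie = h / p   # the correct relation: lambda = h/p
# If we (wrongly) start from E = K.E. = m*v**2/2 and f = v/lambda,
# we end up with an extra factor 2:
lambda_from_ke = 2 * h / p

print(lambda_de_broglie)  # ~3.3e-10 m (0.33 nm)
print(lambda_from_ke)     # ~6.6e-10 m (off by a factor of 2)
```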
Hmm… Perhaps we should do some examples. So these de Broglie equations associate a wave with frequency f and wavelength λ with particles with energy E, momentum p and mass m traveling through space with velocity v: E = hf and p = h/λ. [And, if we would want to use some sine or cosine function as an example of such wave function – which is likely – then we need an argument expressed in radians rather than in units of time or distance. In other words, we will need to convert frequency and wavelength to angular frequency and wave number respectively by using the 2π = ωT = ω/f and 2π = kλ relations, with the wavelength (λ), the period (T) and the velocity (v) of the wave being related through the simple equations f = 1/T and λ = vT. So then we can write the de Broglie relations as: E = ħω and p = ħk, with ħ = h/2π.]
In these equations, the Planck constant (be it h or ħ) appears as a simple factor of proportionality (we will worry about what h actually is in physics in later posts) – but a very tiny one: approximately 6.626×10–34 J·s (Joule is the standard SI unit to measure energy, or work: 1 J = 1 kg·m2/s2), or 4.136×10–15 eV·s when using a more appropriate (i.e. larger) measure of energy for atomic physics: still, 10–15 is only 0.000 000 000 000 001. So how does it work? First note, once again, that we are supposed to use the equivalent for slow-moving particles of Einstein’s famous E = mc2 equation as a measure of the energy of a particle: E = mv2. We know velocity adds mass to a particle – with mass being a measure for inertia. In fact, the mass of so-called massless particles, like photons, is nothing but their energy (divided by c2). In other words, they do not have a rest mass, but they do have a relativistic mass m = E/c2, with E = hf (and with f the frequency of the light wave here). Particles, such as electrons, or protons, do have a rest mass, but then they don’t travel at the speed of light. So how does that work out in that E = mv2 formula which – let me emphasize this point once again – is not the standard formula (for kinetic energy) that we’re used to (i.e. E = mv2/2)? Let’s do the exercise.
For photons, we can re-write E = hf as E = hc/λ. The numerator hc in this expression is 4.136×10–15 eV·s (i.e. the value of the Planck constant h expressed in eV·s) multiplied with 2.998×108 m/s (i.e. the speed of light c) so that’s (more or less) hc ≈ 1.24×10–6 eV·m. For visible light, the denominator will range from 0.38 to 0.75 micrometer (1 μm = 10–6 m), i.e. 380 to 750 nanometer (1 nm = 10–9 m), and, hence, the energy of the photon will be in the range of 3.263 eV to 1.653 eV. So that’s only a few electronvolt (an electronvolt (eV) is, by definition, the amount of energy gained (or lost) by a single electron as it moves across an electric potential difference of one volt). So that’s 2.6×10–19 to 5.2×10–19 Joule (1 eV = 1.6×10–19 Joule) and, hence, the equivalent relativistic mass of these photons is E/c2, or 2.9 to 5.8×10–36 kg. That’s tiny – but not insignificant. Indeed, let’s look at an electron now.
The rest mass of an electron is about 9.1×10−31 kg (so that’s roughly a hundred thousand times the values we found for the relativistic mass of photons). Also, in a hydrogen atom, it is expected to speed around the nucleus with a velocity of about 2.2×106 m/s. That’s less than 1% of the speed of light but still quite fast obviously: at this speed (2,200 km per second), it could travel around the earth in less than 20 seconds (a photon does better: it travels not less than 7.5 times around the earth in one second). In any case, the electron’s energy – according to the formula to be used as input for calculating the de Broglie frequency – is 9.1×10−31 kg multiplied with the square of 2.2×106 m/s, and so that’s about 44×10–19 Joule or about 27.5 eV (1 eV = 1.6×10–19 Joule). So that’s – roughly – ten times the energy associated with a photon of visible light.
The wavelength we should associate with this energy can be calculated from E = hf = hv/λ (we should, once again, use v instead of c), but we can also simplify and calculate directly from the mass: λ = hv/E = hv/mv2 = h/mv (however, make sure you express h in J·s in this case): we get a value for λ equal to 0.33 nanometer, so that’s more than one thousand times shorter than the above-mentioned wavelengths for visible light. So, once again, we have a scale factor of about a thousand here. That’s reasonable, no? [There is a similar scale factor when moving to the next level: the mass of protons and neutrons is about 2000 times the mass of an electron.] Indeed, note that we would get a value of 0.510 MeV if we would apply the E = mc2 equation to the above-mentioned (rest) mass of the electron (in kg): MeV stands for mega-electronvolt, so 0.510 MeV is 510,000 eV. So that’s a few hundred thousand times the energy of a photon and, hence, it is obvious that we are not using the energy equivalent of an electron’s rest mass when using de Broglie’s equations. No. It’s just that simple but rather mysterious E = mv2 formula. So it’s not mc2 nor mv2/2 (kinetic energy). Food for thought, isn’t it? Let’s look at the formulas once again.
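[All of the numbers above are easy to reproduce. Here’s a short sketch doing just that, with the constants rounded as in the text:]

```python
h_eV = 4.136e-15   # eV·s
h_J = 6.626e-34    # J·s
c = 2.998e8        # m/s
eV = 1.602e-19     # J per eV
m_e = 9.109e-31    # kg

# Photon energies at the edges of the visible range:
for lam in (380e-9, 750e-9):
    E = h_eV * c / lam
    print(f"{lam*1e9:.0f} nm -> {E:.3f} eV, m = E/c^2 = {E*eV/c**2:.2e} kg")

# Electron in a hydrogen atom, with E = m*v^2 (de Broglie's energy measure):
v = 2.2e6
E_J = m_e * v**2
print(f"electron: {E_J:.2e} J = {E_J/eV:.1f} eV")
print(f"de Broglie wavelength: {h_J/(m_e*v)*1e9:.2f} nm")  # ~0.33 nm
```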
They can easily be linked: we can re-write the frequency formula as λ = hv/E = hv/mv2 = h/mv and then, using the general definition of momentum (p = mv), we get the second de Broglie equation: p = h/λ. In fact, de Broglie‘s rather particular definition of the energy of a particle (E = mv2) makes v a simple factor of proportionality between the energy and the momentum of a particle: v = E/p or E = pv. [We can also get this result in another way: we have h = E/f = pλ and, hence, E/p = fλ = v.]
Again, this is serious food for thought: I have not seen any ‘easy’ explanation of this relation so far. To appreciate its peculiarity, just compare it to the usual relations relating energy and momentum: E = p2/2m or, in its relativistic form, p2c2 = E2 – m02c4. So these two equations are both not to be used when going from one de Broglie relation to another. [Of course, it works for massless photons: using the relativistic form, we get p2c2 = E2 – 0 or E = pc, and the de Broglie relation becomes the Planck relation: E = hf (with f the frequency of the photon, i.e. the light beam it is part of). We also have p = h/λ = hf/c, and, hence, the E/p = c relation comes naturally. But that’s not the case for (slower-moving) particles with some rest mass: why should we use mv2 as an energy measure for them, rather than the kinetic energy formula?]
But let’s just accept this weirdness and move on. After all, perhaps there is some mistake here and so, perhaps, we should just accept that factor 2 and replace λ = h/p by λ = 2h/p. Why not? 🙂 In any case, both the λ = h/mv and λ = 2h/p = 2h/mv expressions give the impression that the mass of a particle and its velocity are on a par, so to say, when it comes to determining the numerical value of the de Broglie wavelength: if we double the speed, or the mass, the wavelength gets shortened by half. So one would think that larger masses can only be associated with extremely short de Broglie wavelengths if they move at a fairly considerable speed. But that’s where the extremely small value of h changes the arithmetic we would expect to see. Indeed, things work differently at the quantum scale, and it’s the tiny value of h that is at the core of this. Indeed, it’s often referred to as the ‘smallest constant’ in physics, and so here’s the place where we should probably say a bit more about what h really stands for.
Planck’s constant h describes the tiny discrete packets in which Nature packs energy: one cannot find any smaller ‘boxes’. As such, it’s referred to as the ‘quantum of action’. But, surely, you’ll immediately say that its cousin, ħ = h/2π, is actually smaller. Well… Yes, you’re right: ħ = h/2π is smaller. It’s the so-called quantum of angular momentum, also (and probably better) known as spin. Angular momentum is a measure of… Well… Let’s call it the ‘amount of rotation’ an object has, taking into account its mass, shape and speed. Just like p, it’s a vector. To be precise, it’s the product of a body’s so-called rotational inertia (so that’s similar to the mass m in p = mv) and its rotational velocity (so that’s like v, but it’s ‘angular’ velocity), so we can write L = Iω but we’ll not go into any more detail here. The point to note is that angular momentum, or spin as it’s known in quantum mechanics, also comes in discrete packets, and these packets are multiples of ħ. [OK. I am simplifying here but the idea or principle that I am explaining here is entirely correct.]
But let’s get back to the de Broglie wavelength now. As mentioned above, one would think that larger masses can only be associated with extremely short de Broglie wavelengths if they move at a fairly considerable speed. Well… It turns out that the extremely small value of h upsets our everyday arithmetic. Indeed, because of the extremely small value of h as compared to the objects we are used to (in one grain of salt alone, we will find about 1.2×1018 atoms – just write a 1 with 18 zeroes behind it and you’ll appreciate this immense number somewhat more), it turns out that speed does not matter all that much – at least not in the range we are used to. For example, the de Broglie wavelength associated with a baseball weighing 145 grams and traveling at 90 mph (i.e. approximately 40 m/s) would be 1.1×10–34 m. That’s immeasurably small indeed – literally immeasurably small: not only technically but also theoretically because, at this scale (i.e. the so-called Planck scale), the concepts of size and distance break down as a result of the Uncertainty Principle. But, surely, you’ll think we can improve on this if we’d just be looking at a baseball traveling much slower. Well… It does not get much better for a baseball traveling at a snail’s pace – let’s say 1 cm per hour, i.e. 2.7×10–6 m/s. Indeed, we get a wavelength of 17×10–28 m, which is still nowhere near the nanometer range we found for electrons. Just to give an idea: the resolving power of the best electron microscope is about 50 picometer (1 pm = 10–12 m) and so that’s the size of a small atom (the size of an atom ranges between 30 and 300 pm). In short, for all practical purposes, the de Broglie wavelength of the objects we are used to does not matter – and then I mean it does not matter at all. And so that’s why quantum-mechanical phenomena are only relevant at the atomic scale.
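[And here’s the same arithmetic for the baseball, with the masses and speeds assumed above, just to drive the point home:]

```python
h = 6.626e-34  # J·s

def de_broglie(m_kg, v_ms):
    """de Broglie wavelength lambda = h / (m*v)."""
    return h / (m_kg * v_ms)

print(de_broglie(9.109e-31, 2.2e6))  # electron: ~3.3e-10 m (nanometer range)
print(de_broglie(0.145, 40))         # baseball at 90 mph: ~1.1e-34 m
print(de_broglie(0.145, 2.7e-6))     # baseball at 1 cm/hour: ~1.7e-27 m
```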
http://www.gamedev.net/topic/659050-c-files-organisation/
# C++ files organisation
### #1ZwodahS Members - Reputation: 483
Posted 23 July 2014 - 03:58 AM
I have been coding C++ for more than a year now and I just realized that the way I package my projects is very different from how most C++ open source projects are packaged.
Most C++ projects are packaged so that they look like this:
project
|_ include
|_ src
while I usually store them in deep folder structures.
project
|_ folder A
   |_ sub folder A
   |_ sub folder B
|_ folder B
When I learn a language, I want to embrace it fully and not just use it like another language. But before I migrate some of my projects, I thought I'd ask: what are the advantages/disadvantages of doing it either way? I know my way probably comes from back when I just started programming, since Java has a really deep folder structure. Does anyone do the same as I do?
Edited by ZwodahS, 23 July 2014 - 04:00 AM.
### #2Tribad Members - Reputation: 981
Posted 23 July 2014 - 04:15 AM
Putting header files into an include folder has the advantage that you have all the interfaces to your modules in a single place, like /usr/include and /usr/local/include on UNIX-style systems. If you make the software available to others, maybe as a library, it also gives them a simpler way to find the header files for the libraries.
But in the end I suspect that it is up to your personal feeling about how to handle header files.
### #3Bacterius Crossbones+ - Reputation: 13028
Posted 23 July 2014 - 04:50 AM
They are not mutually exclusive. You can have a main src or include folder which then branches off into a deep (but not too deep) file tree, with your modules neatly organized in separate files and folders. This is what I tend to do myself. In any case, if you are not designing a library or other code that could be reused by people other than you, I would just use whatever works best for you; such organizational concerns are not usually a major problem except to grumpy packagers used to doing things "their way". Probably many of the large open source projects that you have seen have bureaucratic or architectural requirements (by virtue of being very large, or having lots of users and contributors) that would be very inappropriate in a smaller project, so many of the things you see in them would seem very strange from your perspective (though src/include is not really among them, but just saying). I don't think there is a widely accepted standard in C++ anyway; as long as your build system does not grow uncontrollably in complexity underneath you, you should not worry about it too much. C++ doesn't have a universal style guide that almost everybody follows like Java or C# do, far from it.
Some styles I've seen are "headers in include, source code in src", "only public headers in include, private headers and source code in src", "everything in src", and "code dump with no folders at all (perhaps with e.g. a Visual Studio solution which already encodes the folder structure)", and so on. To be fair, I do mostly C and not much C++, and I am personally not too comfortable with the idea of putting actual implementation code inside an "include" folder, as a lot of C++ projects seem to be doing with the advent of header-only libraries and templates (yes, I know it's not strictly required if you forward declare the different templated types you'll be using, but few bother to do that). But it's really no big deal – we are not machines, and can adapt when things don't go 100% as we expect. Really, it just goes to show that there is no consensus on the right way to do it.
In any case, I can give a few insights on what I expect from a freshly checked out code repository:
* as a user (for libraries and other)
- is there an obvious build/install script (e.g. a solution file for visual studio for windows, a makefile or cmake/scons/autohell script for linux, a codeblocks project, etc..)?
- if not, is there a readme or install.txt I can look at?
- no? well, I don't know how to use it, if it's small enough and license permitting I might copy the source and headers inside my own code.. provided I can find them, e.g. an include or src folder
- if not, I give up and check out another library
* as a developer (contributing/etc)
- if the build system is a bit complex or there are things I should know or configuration options, are there notes about that somewhere? (not needed for small programs or obvious instructions e.g. a plain makefile)
- is it easy to build the software after changing code? does it make sure to always rebuild what needs to be (and, preferably, only what needs to be)?
- does it build out of source, or at least doesn't spew .o/.obj files everywhere in the source folder?
- are there tests I can run after making nontrivial changes?
As long as your project package provides these things, I don't see any problem. I've certainly seen far worse and I'm sure others have too.
### #4Karsten_ Members - Reputation: 2242
Posted 23 July 2014 - 04:51 AM
Nested source layout seems to have two problems:
1) Includes may have to still be relative, i.e. #include "../../player/LeftHand.h", which may get a little bit messy and be a pain if you decide to move the project structure.
2) At work when we used Unity we had an issue with locating scripts (quickly). A location that makes sense to one person does not to another. We decided to have all game script files in the same directory, all library scripts in their own directory. So much easier.
Note that Visual Studio provides structures in the IDE called "filters". Even though these appear to be nested, they only point to files which are all in the same directory. This may be the best of both worlds.
I recommend nesting sources only if they are unique to a lib or .exe, such as:
mygame
- bin
  - game.exe
- lib
  - libplatform.a
  - libnetcode.a
- src
  - platform
    - *.cpp *.h
  - game
    - *.cpp *.h
  - netcode
    - *.cpp *.h
  - *.cpp *.h
And then -Isrc/platform -Isrc/netcode -Isrc/imageloader so that library headers can be included using < >.
Edited by Karsten_, 23 July 2014 - 07:31 AM.
### #5BitMaster Crossbones+ - Reputation: 8647
Posted 23 July 2014 - 04:51 AM
Personally, I never put my headers into a different directory. The only reason to do that would be to publish just the headers for a library, but I would rather use CMake or some other build tool to copy the relevant headers from the source directory to a published include directory when needed.
### #6Bacterius Crossbones+ - Reputation: 13028
Posted 23 July 2014 - 04:55 AM
1) Includes may have to still be relative, i.e. #include "../../player/LeftHand.h", which may get a little bit messy and be a pain if you decide to move the project structure.
I would suggest to not include files relative to the location of the source file or whatever, that is just asking for trouble in my opinion. Always include files from a base relative directory (e.g. project root, or more likely the "include" folder) and the problem disappears.
### #7Tribad Members - Reputation: 981
Posted 23 July 2014 - 05:04 AM
Using relative includes is a good choice to prevent many -I<include dir> compiler parameters. This way you can go inside a subdir with sources and start the compiler/make in there without thinking about where you are and what your compiler include parameters must look like.
But... in fact it is a real pain to handle if you move a module around.
Because I use UML with a code generator that produces the include statements, I do not think about the positioning. It is always right and works, even if I move the modules around in the model.
### #8ZwodahS Members - Reputation: 483
Posted 23 July 2014 - 05:23 AM
Wow, that is a lot of feedback. Thanks a lot.
1) Includes may have to still be relative, i.e. #include "../../player/LeftHand.h", which may get a little bit messy and be a pain if you decide to move the project structure.
I would suggest to not include files relative to the location of the source file or whatever, that is just asking for trouble in my opinion. Always include files from a base relative directory (e.g. project root, or more likely the "include" folder) and the problem disappears.
So instead of relative includes, what would be a good way if I don't want a centralized include folder?
I had this problem recently when I was reorganizing my files and I needed to update quite a few of the includes.
### #9Bregma Crossbones+ - Reputation: 8043
Posted 23 July 2014 - 05:32 AM
I work with oodles of free software projects. I've seen plenty with a separate includes/ directory and many that have them combined. I've seen many with a deep or broad hierarchy and the same with everything dumped into a single subdirectory. Technically it makes no difference and there is no de facto or de jure standard, it's entirely up to the taste of the most vocal or dominant developer.
Since packaging project source inevitably means installing into a staging directory, that isn't relevant. Since installing means copying out of the sources into an installation directory, that's not relevant.
What is relevant is when someone comes along to try and read and understand the code: it's a lot easier when there isn't a separate include/ directory in the project, and the header and other source files are all combined in one hierarchy. I've noticed the most vocal proponents of the separate include/ hierarchy tend to be those who spend no time maintaining other people's code. There is also no argument that in larger projects readability is improved by namespacing components into separate subdirectories, with all include references relative to the top of the hierarchy. If each component produces a static convenience library (or, if required, a shared library) that also makes your unit testing easier.
### #10BitMaster Crossbones+ - Reputation: 8647
Posted 23 July 2014 - 06:17 AM
Using relative includes is a good choice to prevent many -I<include dir> compiler parameters. This way you can go inside a subdir with sources and start the compiler/make in there without thinking about where you are and what your compiler include parameters must look like.
I would avoid relative includes if at all possible and rather use something like CMake to generate my makefiles then.
### #11Tribad Members - Reputation: 981
Posted 23 July 2014 - 07:05 AM
Avoiding lots of search paths on the command line has the benefit that it is clear which version of a header file is used, and you need not read hundreds of -I parameters with long paths to find out at which point something goes wrong.
But as I already said, it is always a lot of work if you move a module around.
### #12Karsten_ Members - Reputation: 2242
Posted 23 July 2014 - 07:41 AM
So instead of relative includes, what would be a good way if I don't want a centralized include folder?
I had this problem recently when I was reorganizing my files and I needed to update quite a few of the includes.
Unless you are making reusable libraries as part of your project, I would recommend a single folder for both your src and header files, i.e. a single .exe means I would create a single src directory. If you find this folder simply has too many source files in it, then this might even suggest that you need to break your project up into multiple separate libraries (in which case they would each get their own src directory containing .cpp and .h files).
So since these libraries and binaries are separate projects, you could say that I don't do any nesting in my projects. One place where you may be tempted, however, is if some code is completely standalone from the rest of the project (i.e. no #include "../" needed, since it has no dependence on other headers). If a part of a project is also in a separate namespace then you may also want to nest; however, parts in a nested namespace often still require ../ headers, and I typically try to avoid this pattern.
One system that I have found to be very effective is the following (I use cmake but this should work with many build systems). Imagine a folder structure as follows:
proj/
  src/
    game/
    foolib/
    barlib/
If I specify -Isrc on the command line, this means that anywhere in the game source code, I can do
#include <foolib/foolib.h>
If foolib has a dependency on barlib, I can do in the foolib code:
#include <barlib/barlib.h>
This means that you can separate your project into logical libraries (and separate .cpp / .h directories) and yet still be able to reference the correct headers you need.
What's quite useful about this system is that if barlib was really made to be a standalone library, I could have an installer script like:
# mkdir /usr/local/include/barlib
# cp -r src/barlib/*.h /usr/local/include/barlib
# cp lib/barlib.a /usr/local/lib/
And now any project on my computer can access the barlib.h in exactly the same way as when it was part of my project.
Edited by Karsten_, 23 July 2014 - 07:51 AM.
### #13BitMaster Crossbones+ - Reputation: 8647
Posted 23 July 2014 - 08:00 AM
Avoiding lots of search paths on the command line has the benefit that it is clear which version of a header file is used, and you need not read hundreds of -I parameters with long paths to find out at which point something goes wrong.
Well, in general I set exactly one include directory for my project (excluding 3rd party libraries). Every file can then simply include what it needs using <mytool/file.h> or <mylibrary/file.h>. CMake just simplifies doing that because every sub-makefile is aware of that include directory without any work on my part. An added benefit is that you immediately see which library/subproject an include is from.
http://playzona.net/Hawaii/how-to-find-standard-error-of-slope-on-calculator.html
# How to find the standard error of the slope on a calculator
Standard error of the regression slope is a term you're likely to come across in AP Statistics. Even if you think you know how to use the formula, it's so time-consuming to work by hand that you'll waste 20-30 minutes on a single question. Like the standard error itself, the slope of the regression line will be provided by most statistics software packages.

If you need to calculate the standard error of the slope (SE) by hand, use the following formula:

SE of regression slope = sb1 = sqrt [ Σ(yi - ŷi)2 / (n - 2) ] / sqrt [ Σ(xi - x̄)2 ]

The TI-83 calculator is allowed in the test and it can help you find the standard error of the regression slope:

Step 1: Enter your data into lists L1 and L2. (If you don't know how to enter data into a list, see: TI-83 Scatter Plot.)
Step 2: Press STAT, scroll right to TESTS and then select E:LinRegTTest.
Step 3: Type in the …
Step 4: Select the sign from your alternate hypothesis.
…
Step 7: Divide b by t. (Since the test statistic is t = b/SE, the slope divided by its t-value gives the standard error of the slope.)

Note: The TI-83 doesn't find the SE of the regression slope directly; the "s" reported on the output is the SE of the residuals, not the SE of the regression slope. The smaller the "s" value, the closer your values are to the regression line.

In Excel, the LINEST function returns the slope and its standard error. Don't hit Enter after typing the formula; instead, hold down Shift and Control and then press Enter. The first TRUE tells LINEST not to force the y-intercept to be zero, and the second TRUE tells LINEST to return additional regression stats besides just the slope and y-intercept. You can select up to 5 rows (10 cells) and get even more statistics, but we usually only need the first six.

Two assumptions behind these calculations: because linear regression aims to minimize the total squared error in the vertical direction, it assumes that all of the error is in the y-variable; and the Y values are roughly normally distributed (i.e., symmetric and unimodal).

On the underlying theory: in vector notation, least squares minimises $||Y - X\beta||^2$ with respect to the vector $\beta$, which gives $\widehat{\beta} = (X^{\top}X)^{-1}X^{\top}Y$. Whether $(X^{\top}X)^{-1}X^{\top}$ can be treated as a constant matrix depends on whether the x values are fixed or random, and that depends on how the data was collected: if you do an experiment where you assign different doses or treatment levels as the x-variable, then it is clearly not a random observation but a fixed matrix.
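If you'd rather let software do the work, here is a short Python sketch (with made-up sample data) that computes the slope and its standard error directly from the formula above:

```python
import numpy as np

# Made-up example data -- replace with your own.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.8])

n = x.size
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0 = y.mean() - b1 * x.mean()

resid = y - (b0 + b1 * x)
s = np.sqrt(np.sum(resid**2) / (n - 2))          # SE of the residuals (the "s" on the TI-83)
se_b1 = s / np.sqrt(np.sum((x - x.mean())**2))   # SE of the regression slope

print(b1, se_b1)
```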
http://mathhelpforum.com/calculus/15666-exponential-integral.html
# Math Help - exponential integral
1. ## exponential integral
not quite sure how to do this one:
$\int^2_1 e^{\ln u}\,\frac{1}{u}\,du$
2. Originally Posted by viet
not quite sure how to do this one:
$\int^2_1 e^{\ln u}\,\frac{1}{u}\,du$
substitution. let t = ln(u); then dt = (1/u)du, so the integral becomes $\int_0^{\ln 2} e^t\,dt$
3. Note,
$e^{\ln u}=u$
So,
$e^{\ln u}\cdot \frac{1}{u} = \frac{u}{u}=1$
4. Originally Posted by ThePerfectHacker
Note,
$e^{\ln u}=u$
So,
$e^{\ln u}\cdot \frac{1}{u} = \frac{u}{u}=1$
yeah, i missed that. thanks
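A quick symbolic check of the result – just a sketch using sympy, for anyone who wants to verify that the integrand really collapses to 1:

```python
import sympy as sp

u = sp.symbols('u', positive=True)
integrand = sp.exp(sp.ln(u)) / u           # e^(ln u) * 1/u simplifies to 1

print(sp.simplify(integrand))              # 1
print(sp.integrate(integrand, (u, 1, 2)))  # 1
```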
https://im.kendallhunt.com/HS/teachers/1/5/5/preparation.html
# Lesson 5
Representing Exponential Decay
### Lesson Narrative
In this lesson, students examine more situations with quantities that decrease exponentially. They work from an equation to a graph and from a graph to an equation. In both cases, they interpret the different parts of their equation in terms of the situation and use the graph to answer questions.
Like many activities in this unit, the equations and graphs represent actual quantities (the area covered by algae and the amount of insulin in a person’s body) and are to be interpreted in context (MP2). They also use a discrete graph to answer questions about quantities that vary continuously with time. In following lessons we will represent situations where the domain is all real numbers with a continuous graph.
Technology isn't required for this lesson, but there are opportunities for students to choose to use appropriate technology to solve problems. We recommend making technology available.
### Learning Goals
Teacher Facing
• Calculate growth factor using points on a graph that represents exponential decay.
• Graph equations that represent quantities that change by a growth factor between 0 and 1.
• Interpret equations and graphs that represent exponential decay situations.
### Required Preparation
If possible, acquire devices that can run Desmos (recommended) or other graphing technology as an option for students to select during the lesson.
### Student Facing
• I can explain the meanings of $a$ and $b$ in an equation that represents exponential decay and is written as $y=a \boldcdot b^x$.
• I can find a growth factor from a graph and write an equation to represent exponential decay.
• I can graph equations that represent quantities that change by a growth factor between 0 and 1.
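As a sketch of the computation behind the first two goals above – with made-up graph points, since the lesson's actual data isn't reproduced here – finding $a$ and $b$ from two points on an exponential decay graph can look like this:

```python
# Two assumed points read off a decay graph: (x1, y1) and (x2, y2).
x1, y1 = 0, 24.0   # e.g. initial area covered by algae
x2, y2 = 2, 6.0    # two time steps later

# y = a * b**x  =>  b = (y2/y1) ** (1/(x2 - x1)), and a is the value at x = 0.
b = (y2 / y1) ** (1 / (x2 - x1))
a = y1 / b**x1

print(a, b)  # a = 24.0, b = 0.5: the quantity halves at each step
```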
https://brian2.readthedocs.io/en/stable/reference/brian2.groups.group.Indexing.html
# Indexing class¶
(Shortest import: from brian2.groups.group import Indexing)
class brian2.groups.group.Indexing(group, default_index='_idx')[source]
Bases: object
Object responsible for calculating flat index arrays from arbitrary group-specific indices. Stores strong references to the necessary variables so that basic indexing (i.e. slicing, integer arrays/values, …) works even when the respective VariableOwner no longer exists. Note that this object does not handle string indexing.
Methods
__call__([item, index_var]) – Return flat indices to index into state variables from arbitrary group-specific indices.
Details
__call__(item=slice(None, None, None), index_var=None)[source]
Return flat indices to index into state variables from arbitrary group specific indices. In the default implementation, raises an error for multidimensional indices and transforms slices into arrays.
Parameters:
    item : slice, array, int
        The indices to translate.
Returns:
    indices : numpy.ndarray
        The flat indices corresponding to the indices given in item.
See also
SynapticIndexing
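The remark that the default implementation "transforms slices into arrays" can be pictured with a small stand-in in plain numpy – this is an illustrative sketch, not the actual brian2 internals:

```python
import numpy as np

def flat_indices(item, n):
    """Stand-in for Indexing.__call__: translate a slice, int or array
    over a group of size n into a flat integer index array."""
    if isinstance(item, slice):
        return np.arange(n)[item]   # slice -> explicit array of indices
    indices = np.asarray(item)
    if indices.ndim > 1:
        raise IndexError('multidimensional indices are not supported')
    return indices

print(flat_indices(slice(None), 5))  # [0 1 2 3 4]
print(flat_indices(slice(1, 4), 5))  # [1 2 3]
print(flat_indices([0, 3], 5))       # [0 3]
```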
https://scielosp.org/article/bwho/2002.v80n5/378-383/
The treatment gap and primary health care for people with epilepsy in rural Gambia
Rosalind Coleman,1 Louie Loppy,2 & Gijs Walraven3
OBJECTIVE: To study primary-level management for people with epilepsy in rural Gambia by means of community surveys.
METHODS: After population screening was carried out, visits were made by a physician who described the epidemiology of epilepsy and its management. Gaps between required management and applied management were investigated by conducting interviews and discussions with people with epilepsy and their communities.
FINDINGS: The lifetime prevalence of epilepsy was 4.9/1000 and the continuous treatment rate was less than 10%. The choice of treatment was shaped by beliefs in an external spiritual cause of epilepsy and was commonly expected to be curative but not preventive. Treatment rarely led to the control of seizures, although when control was achieved, the level of community acceptance of people with epilepsy increased. Every person with epilepsy had sought traditional treatment. Of the 69 people with active epilepsy, 42 (61%) said they would like to receive preventive biomedical treatment if it were available in their local community. Key programme factors included the local provision of effective treatment and community information with, in parallel, clarification of the use of preventive treatment and genuine integration with current traditional sources of treatment and advice.
CONCLUSION: Primary-level management of epilepsy could be integrated into a chronic disease programme covering hypertension, diabetes, asthma and mental health. Initial diagnosis and prescribing could take place away from the periphery but recurrent dispensing would be conducted locally. Probable epilepsy etiologies suggest that there is scope for primary prevention through the strengthening of maternal and child health services.
Keywords Epilepsy/epidemiology/therapy; Seizures/therapy; Patient acceptance of health care/ethnology; Attitude/ethnology; Choice behavior; Primary health care; Medicine, Traditional; Gambia (source: MeSH, NLM).
Mots clés Epilepsie/épidémiologie/thérapeutique; Crise/thérapeutique; Acceptation des soins/éthnologie; Attitude/éthnologie; Comportement choix; Programme soins courants; Médecine traditionnelle; Gambie (source: MeSH, INSERM).
Palabras clave Epilepsia/epidemiología/terapia; Ataques/terapia; Aceptación de la atención de salud/etnología; Actitud/etnología; Conducta de elección; Atención primaria de salud; Medicina tradicional; Gambia (fuente: DeCS, BIREME).
Introduction
There are more than 50 million people with epilepsy worldwide but comprehensive or effective treatment is rare (1). It is estimated that more than 80% of people with epilepsy in developing countries do not receive effective treatment (2). Epilepsy is associated with psychosocial problems, reduced life expectancy, social isolation and an increased risk of unexpected death (3–6). The difference between the need for effective treatment and the receipt of such treatment is termed the treatment gap.
Seizures can be controlled in as many as 75% of people with epilepsy by means of inexpensive medication dispensed by primary health care workers (7, 8). Improved access to effective low-cost medication is essential if the uptake of treatment is to be increased. However, epilepsy has relatively complex social and spiritual implications, and the wide treatment gap is associated with varied combinations of factors other than cost and the provision of health care of good quality. Ideas about the causes of epilepsy and opinions about treatment are often based on beliefs about disease, contagion and sources of unexplained phenomena. Since epilepsy is often seen as a spiritual affliction, some people with active epilepsy or their carers assume that traditional treatment in the local community is appropriate. Others, however, travel far and pay large sums in a quest for curative treatment but do not consider preventive biomedical medication to be appropriate (9, 10). Persons who experience infrequent attacks may only seek treatment when seizures occur. Those who seek biomedical care may be disappointed if clinic staff or medication are not available. How can these factors be confronted?
In the developing world there are a few sites where the prevalence of epilepsy is unusually high, but in most places an intervention programme isolated from primary care in general would be unjustifiable (11, 12). The purpose of the present study was to contribute to a comprehensive epilepsy programme within the primary health care system of the Gambia by providing information on the epidemiology of epilepsy, prevalent notions about the disease, and choice and experience of different treatments. By combining such information with existing knowledge it becomes possible to suggest ways of improving prevention, community acceptance and the uptake of effective treatment by persons requesting it as well as those who are unaware that they may benefit from it.
Methods
Study population
The 16 200 people who live in 40 of the villages around the town of Farafenni have been under continuous demographic surveillance by the Medical Research Council of the Gambia since 1981 (13). Under the Household Registration System, field workers gather information on residents, births, deaths and migration every three months. The majority of the people are involved in subsistence farming. For nearly half of the population the annual income is less than US$150. The literacy rate among women is 3%. Islam is the religion of 95% of the population and there is a dense intermingling of religious and traditional medical systems. Many traditional healers are also religious elders.

In the surveillance area there are two primary health care dispensaries, one hospital and several private pharmacies, each with an intermittent supply of phenobarbitone. Transportation to these facilities is mainly by donkey or horse cart, and there are taxis on the few laterite roads. The six mobile maternal and child health clinics do not cater for chronic conditions. There are two cadres of community health workers: village health workers and community health nurses. The village health workers, based in villages with populations of more than 400, treat common conditions, including malaria, diarrhoea and acute chest infections, and provide some health information. They are supervised by the community health nurses. None of these health workers carry out treatment for chronic conditions.

Surveys

The epilepsy surveys were linked to projects already in progress under the Household Registration System, in order to make the best possible use of resources. The first survey was conducted from January to June 1997 as part of a community noncommunicable diseases survey of persons aged over 14 years in a random selection of half the villages covered by the Household Registration System. The second survey, which took place between January and March 1999, was performed by field workers in all the demographic surveillance villages, and information was obtained on all household members from the heads of households. The participants in the noncommunicable diseases survey (n = 3223) were therefore a subset of the population participating in the Household Registration System (n = 16 200).

In both surveys a two-stage approach was used to identify people with epilepsy. In the first stage a screening questionnaire was employed, which was a modified version of one validated in Ecuador (14). For the survey under the Household Registration System a question was added in which local terms, determined by forward and back translation in the three local languages of Wolof, Mandinka and Fula, were used for types of seizure (15). Local validation of the questionnaires involved testing with known epileptics. In the second stage of both surveys, those people with a positive screening questionnaire were evaluated by a physician with neurological experience. On the basis of histories taken from the study subjects or their close companions, epilepsy status was defined as active, inactive or a false positive screening test (i.e. not actually epileptic) and attempts were made to identify the forms of seizure. Information about treatment that had been tried was obtained for all people found to have active epilepsy and their attitudes were canvassed on the regular use of an effective medication available either at clinics or from community health workers.
People with active epilepsy were offered treatment with phenytoin in accordance with the government recommendations of the time. However, most were later given phenobarbitone, the supply of which was more secure in the country's primary care system.

A semistructured interview was conducted with 25% of the people identified as having active or inactive epilepsy, covering knowledge and beliefs about causation, treatment, health-seeking behaviour and experience, socioeconomic circumstances, relatives with epilepsy and possible etiological factors. Each interview was conducted by a trained Gambian field worker who was fluent in the local language and was supervised by the principal investigator. Predetermined open questions were used. Another trained Gambian field worker provided concurrent translation for the principal investigator, who could thus ask additional questions if clarification was needed. The same field worker conducted group discussions with the communities to which people with epilepsy belonged and with other interest groups, including teachers, religious leaders, traditional healers and biomedical health care workers. These discussions supplemented the interviews and established connections for the later dissemination of results. The field worker had been trained in group discussion techniques and was guided by an outline of topics to be covered after the following questions had been asked: "What can anyone say about epilepsy?" and "What can anyone say about how it is to be a person with epilepsy?" Again there was concurrent translation for the principal investigator, who answered questions about epilepsy after the discussion.

Definitions

The standard criteria were used for the diagnosis and classification of epilepsy (16), on the basis of history and eyewitness accounts. No electroencephalogram facilities were available. Active epilepsy included any case of epilepsy in which there had been at least one unprovoked seizure in the previous five years, whether or not treatment was being given. Epilepsy was defined as inactive if there had been no seizure in the previous five years. Lifetime epilepsy was the sum of active and inactive epilepsy. Single or febrile seizures and acute symptomatic seizures were not included. The etiology of the epilepsy was attributed to antenatal or perinatal insult if there was evidence from history or examination of static retardation of motor and/or mental development, with no obvious postnatal precipitating factor, that was said to have been evident in the first year of life (17). Treatments labelled "traditional" and "biomedical" respectively refer to those originating from within a set of cultural beliefs and those based on a biomedical model.

Analysis

Because of the erratic availability of drugs and the variability of attendance at clinics, many people with epilepsy used biomedical medication only when seizures occurred and could not be classed as fully on or off such treatment. For data entry and analysis a score was given in order to code awareness of biomedical treatment, any persistent effort made to use it, and whether it was sought only at the time of a seizure or not at all. Replies concerning attitudes to the taking of preventive treatment locally were graded according to the proposed uptake from either of the dispensing sources (community health workers and clinics). The survey data were analysed by means of Epi Info version 6. The interviews and group discussions were analysed by theme.
Themes were considered across interviews and discussions and for relevance to understanding and responding to the wide treatment gap. Only statements or ideas expressed more than once are reported.

Results

Epidemiology

The compliance rates with the noncommunicable diseases survey and the Household Registration System survey were 81.7% and 99.8% respectively. Taken together, these surveys gave a prevalence of active epilepsy of 4.3/1000 (69/16 200) with a 95% confidence interval of 3.8–4.8, and a lifetime prevalence of 4.9/1000 (80/16 200) with a 95% confidence interval of 4.5–5.3. Common seizure types were primary generalized tonic-clonic and partial with secondary generalization, affecting 48% and 36% of lifetime epileptics respectively. Other partial seizures included complex partial and simple partial (6% and 2% of all seizures); primary generalized nonconvulsive seizures accounted for the remaining 8%. In persons with active epilepsy, age-specific prevalence peaked between 25 and 44 years (Table 1). Their frequency of fits ranged from daily to less than yearly, and was said to be declining for 36% of them; 26% said they were having seizures at least once a week. In those people with a lifetime history of epilepsy for whom it was possible to suggest an etiology (55/80, i.e. 67% of all lifetime epileptics), 31% of seizures began after a febrile illness in childhood and for 67% there was evidence of antenatal or perinatal brain insult. Three people with epilepsy claimed to have first-degree relatives who were epileptics. In the two years between the noncommunicable diseases survey and the Household Registration System survey, 3 people died among the 21 with lifetime epilepsy in the former survey. If, on average, they died halfway between the two surveys, there would have been 39 person-years of observation, giving a death rate of 77/1000 person-years. In the same period, 52 non-epileptic adults in the noncommunicable diseases population died in 6267 person-years of observation, resulting in a significantly lower death rate of 8/1000 person-years (P

Treatment

Every person with a lifetime history of epilepsy had used traditional treatment, and 74% had attempted to find treatment from more than one source. The median number of people from whom treatment had been requested was six (range 2–16). For nearly half of the people with active epilepsy this included a trial of biomedical treatment, either dispensed during the noncommunicable diseases survey or obtained by visiting a clinic. Both traditional and biomedical methods of treatment were used preventively only on rare occasions. Only 16% (11/69) of people with active epilepsy knew that preventive treatment was possible. Attempts to obtain preventive treatment from a clinic were intermittently thwarted either by a lack of personal finances or by inadequate drug supplies. Consequently, the only people on regular treatment were those who had resorted to buying it from private pharmacies. Others who were currently seeking treatment attempted to find it only at the time of a seizure. Of the 48% (33/69) of people with active epilepsy who had never sought biomedical treatment, 70% did not know that clinics offered treatment for seizures. None of them said they would take regular preventive medication from a clinic, but 45% (15/33) claimed they would take such medication if it were available from a community health worker.
Given that 11 people were attempting to maintain preventive treatment at a clinic and that others had previously sought biomedical treatment only at the time of a seizure but now claimed that they would take continuous preventive treatment, the possibility existed that 61% (42/69) of people with active epilepsy could receive preventive treatment if it were available from a local community health worker. The remaining 39% (27/69) of people with active epilepsy said they would continue with no treatment or with traditional or biomedical treatment only at the time of a seizure, even if preventive medication were available locally (Table 2). Most traditional treatment was obtained in the home villages from the healers, including relatives, who were used for other illnesses, or from healers visiting the villages. A person with epilepsy could attend the healer in person or send a representative to describe the symptoms and return with a treatment. Treatment included readings from the Koran, sometimes written down and sewn into cloth or leather amulets (jujus) that had to be worn. Water with herbs was blessed and given for washing and drinking, sometimes combined with exorcism rituals. It was commonly considered that whether or not effective treatment was found was God's will, and that many people with different skills should therefore be visited in order to find a cure. Except for treatment in the home villages, the mean time taken to obtain treatment was six hours and the mean cost was 20 dalasi (US$ 1.60) per treatment. No persons with epilepsy and none of their families put money aside to pay for treatment. When the time for repeat treatment came, or when a person with epilepsy had a seizure, the action taken depended on what resources were available. It was the duty of the relatives to pay for treatment.
Context of epilepsy
The cause, persistence and treatment of epilepsy were accepted as ultimately under God's will and power. Most people attributed the immediate cause of epilepsy to a malign spirit but a few did not propose a cause. Some personal behaviour was thought to increase vulnerability to epilepsy, such as bathing late at night or the collection of water after dark by pregnant women. The avoidance of such behaviour was considered to offer the only possibility of prevention. Seizures were not seen as punishment, and no overt blame was attached to persons with epilepsy or their families. There was a generally high level of acceptance and integration of people with the disorder, but the degree of seizure control determined specific views and limitations on appropriate treatment, education and social roles. All the people with epilepsy in this rural area were living in a family setting, sometimes placing a heavy burden of care on their female relatives.
Discussion
People with epilepsy in the Gambia may not attain their full potential because of a combination of low awareness and poor availability of effective treatment, frequent associated mental disability, and limiting societal perceptions. The greatly increased risk of early death in people with active epilepsy (4, 6) was demonstrated in this small sample. The prevalence figures for active epilepsy and the spontaneous decrease in the frequency of fits were in keeping with previous estimates (18), although for lifetime epilepsy the figures were lower than would have been expected in the light of data from other studies. This might have been attributable to bias in recall, i.e. forgetfulness and denial, together with the high mortality rate (19). The types of seizure were defined as accurately as possible on the basis of eyewitness accounts but without the benefit of electroencephalograms (16). An attempt was made to fit a seizure type to each description that was felt to be a genuine case of epilepsy. It appeared that complex partial seizures were underrepresented but that generalized non-convulsive seizures were overreported, possibly because of misclassification of complex partial non-convulsive episodes (2).
The low continuous treatment rate, i.e. below 10%, is striking but not unexpected, and is similar to that reported elsewhere, e.g. in Sierra Leone (20). Increasing the sustained use of effective treatment is closely linked to improved awareness of epilepsy, as beliefs and explanations about epilepsy influence health-seeking and treatment-seeking behaviour. In many parts of sub-Saharan Africa, notions about epilepsy are rooted not in a medical model but in a spiritual model (21). This involves an external factor, and the aim of the person with epilepsy is therefore to find a contextually relevant cure that removes the alien factor from the body. Consequently, preventive or biomedical treatment may not be seen as an option. Yet people with epilepsy sought treatment from various sources, often local, returned to the same healer if they were helped, and could be motivated by the community integration that accompanied seizure control. This presents an opportunity to increase understanding of preventive treatment and improve access to it locally. Already 61% were saying that they would take treatment from community health workers, and more could be expected to follow suit if this were seen to be effective (2, 21).
The level of acceptance given to people with epilepsy depended mainly on the control of seizures and the severity of any mental handicap. Attitudes varied with previous experience of people with epilepsy. Many people recognized that their understanding of epilepsy was imperfect and that they were just trying to interpret their own experience as well as possible. A similar finding was reported from East Africa (21). Thus there was scope for influencing attitudes and improving the acceptance and treatment of people with epilepsy.
On the basis of our findings the following points seem relevant to community epilepsy programmes.
The integration of people with epilepsy into communities can only be improved in parallel with improved seizure control.
It is necessary to explain about preventive treatment in a way that is sensitive to prevalent perceptions and beliefs.
Treatment should be provided locally by members of the community.
A combination of broadening current primary health care work and collaborating with new partners is required. The strengthening of primary care for other chronic conditions, including hypertension, diabetes, asthma and mental illness, is already a priority because of the increasing burden of chronic diseases (11). Epilepsy could be integrated into this endeavour. The initial assessment, diagnosis and prescription could take place during focused visits by trained staff, including community nurses. Protocols would be required for prescribing a limited number of drugs with secure availability. Prescribed drugs, held by community health workers, would be dispensed monthly to named patients. If the community health workers were able to provide effective treatment and make appropriate referrals they would be respected, and their health education and information messages would be more likely to be heeded. The long-term presence of a primary care service for people with epilepsy in the United Republic of Tanzania changed notions about the illness and attitudes towards these people (4).
Collaboration with traditional healers would amount to an acknowledgement that biomedical services did not answer all the needs of people with epilepsy. Some healers would consider linking with biomedical services for the purposes of referral but would not countenance the sharing of ideas on treatment. The careful development of such a link would bring opportunities to present ideas about causation and preventive treatment without disturbing fundamental beliefs and values.
The prevention of onset of epilepsy depends on the risk factors. A putative etiology was defined for 67% of people with epilepsy, mostly on the basis of clinically obvious retardation of mental or physical development. If the 31% attributed to febrile illness represented overreporting, this would reflect people's need to construct a meaningful explanation for epilepsy. However, this can only be tested prospectively. Overall there is evidence of a significant role of insults that occur in utero, birth trauma and infectious diseases of childhood. In the study villages, home births without any trained supervision accounted for 48% of deliveries (22), and malaria and meningitis are common. Neurocysticercosis was an unlikely cause of epilepsy in this predominantly Muslim culture, and no pigs were kept in the villages visited. The further refining of etiological data is not needed for an effective prevention programme, since the strengthening of primary maternal and child health services would automatically address much of the preventable causation of epilepsy (18).
It would not be easy to fund such a chronic disease programme, especially in the face of many competing health needs. With a revolving drug fund, payment for medication by people with epilepsy would have to be made to a secure local committee or else centrally on an annual basis. This would relieve community health workers from the need to handle money, which might compromise their safety or be squandered. People with epilepsy are not usually in a position to organize themselves and put pressure on health services to provide appropriate treatment or improve primary prevention. The same is true for people with other chronic conditions. It is necessary for health care planners to be proactive in discussing the development of such programmes.
Conclusion
The treatment gap for epilepsy in developing countries can be expected to diminish when effective and appropriately presented treatment is a real option. Similar issues exist for other chronic diseases. Tackling them all in an integrated primary care programme would form a systematic approach with an increased chance of sustainability. This would involve strengthening and mobilizing all primary care workers and recognizing traditional health and belief systems. If the treatment of epilepsy is not systematic and comprehensive it cannot be regarded as adequate (23).
Acknowledgements
We thank field workers Tumani Trawally, Ousman Bah and Malik Njie, and computer staff Ensa Touray, Mufta Hydara, Pierre Gomez and Kunle Okunoye, whose help was indispensable. Keith McAdam, Director of the Medical Research Council Laboratories in the Gambia, has given continued support for which we are grateful. Marianne van der Sande made useful comments on a draft of this paper. The study would not have been possible without the generous and willing cooperation of local health workers, people with epilepsy and their relatives, and the communities in the study villages.
Conflicts of interest: none declared.
Résumé
Treatment gap and primary health care for people with epilepsy in rural areas of the Gambia
OBJECTIVE: To study, by means of community surveys, the management of epilepsy at the primary health care level in rural areas of the Gambia.
METHODS: After population screening, visits were made by a physician, who described the epidemiology of epilepsy and its management. The gaps between the care required and the care actually received were investigated through interviews with the people concerned and group discussions within the community.
RESULTS: The lifetime prevalence of epilepsy was 4.9/1000 and the continuous treatment rate was below 10%. The choice of treatment was guided by belief in an external, supernatural cause of epilepsy, and the aim was cure rather than prevention. Treatment rarely controlled seizures, but when it did, people with epilepsy were better accepted by the community. All affected persons had sought traditional treatment. Of the 69 people with active epilepsy, 42 (61%) said they would wish to receive preventive biomedical treatment if it were available in their community. Key programme factors included the local provision of effective treatment and community information, together with explanation of the use of preventive treatment and genuine integration with traditional sources of treatment and advice.
CONCLUSION: The management of epilepsy at the primary health care level could be integrated into a chronic disease programme covering hypertension, diabetes, asthma and mental health. Diagnosis and initial prescription could take place away from the periphery, but repeat dispensing could be done locally. Given the probable etiologies of epilepsy, primary prevention could be pursued by strengthening maternal and child health services.
Resumen
Gaps in treatment coverage and primary care for people with epilepsy in rural areas of the Gambia
OBJECTIVE: To study, by means of community surveys, the treatment given to people with epilepsy at the primary care level in rural areas of the Gambia.
METHODS: After screening the population, visits were carried out in which a physician described the epidemiology of epilepsy and its treatment. The divergence between the treatment required and the treatment given was investigated by organizing interviews and discussions with the people affected and their communities.
CONCLUSION: The treatment of epilepsy at the primary care level could be integrated into a programme for combating chronic diseases that covers hypertension, diabetes, asthma and mental illness. Initial diagnosis and prescription could take place away from the periphery, but the periodic dispensing of treatment would be carried out locally. The most likely etiologies of epilepsy suggest that there is scope for primary prevention through the strengthening of maternal and child health services.
References
1. International League Against Epilepsy. The treatment gap in epilepsy: the current situation and ways forward. Epilepsia 2001;42(1):136-49.
2. Shorvon SD, Farmer PJ. Epilepsy in developing countries: a review of epidemiological, sociocultural and treatment aspects. Epilepsia 1988;29 (Suppl 1):S36-S54.
3. Matuja WB. Psychological disturbance in African Tanzanian epileptics. Tropical and Geographical Medicine 1990;42:359-64.
4. Jilek-Aall L, Rwiza HT. Prognosis of epilepsy in a rural African community: a 30 year follow-up of 164 patients in an outpatient clinic in rural Tanzania. Epilepsia 1992;33:645-50.
5. Jilek-Aall L, Jilek M, Kaay J, Mkombachepa L, Hilary K. Psychosocial study of epilepsy in Africa. Social Science and Medicine 1997;45:783-95.
6. Nilsson L. Risk factors for sudden unexpected death in epilepsy: a case control study. Lancet 1999;353:888-93.
7. Feksi AT, Kaamugisha J, Sander JW, Gatiti S, Shorvon SD. Comprehensive primary health care anti-epileptic drug treatment programmes in rural and semi-urban Kenya. Lancet 1991;337:406-9.
8. Kale R. Bringing epilepsy out of the shadows. British Medical Journal 1997;315:2-3.
9. Shaba B. Palliative versus curative beliefs regarding tropical epilepsy. Central African Journal of Medicine 1993;39:165-7.
10. Reis R. Anthropological aspects of epilepsy. Tropical and Geographical Medicine 1994;46:S37-S39.
11. Coleman R, Wilkinson D, Gill G. Noncommunicable disease management in resource-poor settings: a primary care model from rural South Africa. Bulletin of the World Health Organization 1998;76:633-40.
12. Pal DK, Nandy S, Sander JWAS. Towards a coherent public health analysis for epilepsy. Lancet 1999;353:1817-8.
13. Hill AG, MacLeod WB, Joof D, Gomez P, Walraven G. Decline of mortality in children in rural Gambia: the influence of village-level primary health care. Tropical Medicine and International Health 2000;5:107-18.
14. Placencia M, Sander JWAS, Shorvon SD. Validation of a screening questionnaire for the detection of epileptic seizures in epidemiological studies. Brain 1992;115:783-94.
15. Pal DK, Das T, Sengupta S. Comparison of key informant and survey methods for ascertainment of childhood epilepsy in West Bengal, India. International Journal of Epidemiology 1998;27:672-6.
16. International League Against Epilepsy, Commission on Epidemiology and Prognosis. Guidelines for epidemiologic studies on epilepsy. Epilepsia 1993;34:592-6.
17. Mendizabal JE, Salguero LF. Prevalence of epilepsy in a rural community of Guatemala. Epilepsia 1996;37:373-6.
18. Primary prevention of mental, neurological and psychosocial disorders. Geneva: World Health Organization; 1998.
19. Scott R, Lhatoo S, Sander J. The treatment of epilepsy in developing countries: where do we go from here? Bulletin of the World Health Organization 2001;79:344-51.
20. Lisk DR. Epilepsy pattern and clinical compliance in Sierra Leonian epileptics. Journal of the Sierra Leone Medical and Dental Association 1992;6:9-23.
21. Whyte SR. Constructing epilepsy: images and contexts in East Africa. In: Ingstad B, Whyte SR, editors. Disability and culture. Berkeley and Los Angeles (CA): University of California Press; 1995. p.226-45.
22. Walraven G, Telfer M, Rowley J, Ronsmans C. Maternal mortality in rural Gambia: levels, causes and contributing factors. Bulletin of the World Health Organization 2000;78:603-11.
23. Mani K, Sidharta P, Pickering C. Educational aspects in the education of health workers, patients and the public. Tropical and Geographical Medicine 1994;46:S34-S36.
1 Public Health Physician and Associate Researcher, Medical Research Council Laboratories, Farafenni Field Station, PO Box 273, Banjul, The Gambia (email: rosalind_coleman@hotmail.com). Correspondence should be addressed to Dr Coleman.
2 Senior Field Supervisor, Medical Research Council Laboratories, Farafenni Field Station, Banjul, The Gambia.
3 Senior Researcher and Station Head, Medical Research Council Laboratories, Farafenni Field Station, Banjul, The Gambia.
Ref. No. 00-1045
World Health Organization, Geneva, Switzerland
E-mail: bulletin@who.int
https://www.electro-tech-online.com/threads/boost-sound-from-raspberry-pi.132597/page-2
# Boost sound from Raspberry Pi
#### lilimike
##### Member
When I connect the Raspberry Pi (running the Linux application Mplayer2, streaming music or playing an MP3 file) to the phone system and I put a phone on hold, I hear next to nothing; I can just barely distinguish that music is playing.
When I try to connect an 8Ω speaker directly to the analog output of the Pi I am getting just a little more volume than with the phone system. Since I do not have access to the phone system I am testing with the speaker. I figured it was close enough.
The phone system in question is a Norstar ICS 4.0, and its manual indicates the following:
Code:
External music source (customer supplied)
The music source can be any approved low-power device such
as a radio with a high-impedance earphone jack. The
recommended ICS input level is 0.25 V rms across an input
impedance of 3300 Ω.
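For scale, that specified level corresponds to a tiny signal power into the line input; a quick check with the standard P = V²/R formula (worked here for reference, not from the manual):

$$P=\frac{V_{\mathrm{rms}}^2}{R}=\frac{(0.25\ \mathrm{V})^2}{3300\ \Omega}\approx 19\ \mu\mathrm{W}$$

which is why any "approved low-power device" is considered sufficient as a source.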
My objective is to reach the maximum volume possible with no distortion; from there I can control (reduce) the volume from the Mplayer2 command-line application.
#### lilimike
##### Member
Hi
Knowing that the LM386 amp solution works perfectly to solve your sound problem, I would say you should let sleeping dogs lie, and move on to deal with the overheating problem, as that's a whole lot easier to fix.
Three-pin linear regulators produce heat, as watts, based on how much current is going through them times the voltage drop across them. You are dropping from 9 volts down to 5 volts, or 4 volts of drop, times the max current of the RasPi, which as stated is 1 Amp. So with 4 volts times 1 amp, you are dissipating 4 Watts in the LM7805. Without heat sinking, it is obviously going to overheat.
Your first and simplest option, is to try and heat sink it better. But this is not a great solution, as you need it in an enclosed space. Second is to try and drop some of the voltage in an external part before it gets to the regulator. Something like a resistor, transistor, or zener can be made to do this. But this is also not the best solution as the added part will just be taking the heat burden in the LM7805's place.
With the above in mind, why don't you use one of TI's "SIMPLE SWITCHER®" products, or something similar? They are very simple, easy to use buck converters in a TO-220-5 package with a minimal need for external parts. They are almost exactly like their three terminal cousins. The difference is these guys operate in switching mode, so they have almost no power loss, and thus, virtually no heat production. Doing this side steps the whole overheating issue, and has the added benefit of being more power efficient. Win-Win honestly.
Something like the LM2575T-5.0 would be a perfect fit, and is only $2.50 +S&H. It only needs 4 external parts, two of which you are going to have to use for a three terminal regulator anyway. *HERE* is the data sheet. Note that Cout, like most SMPS output caps, needs to be a low ESR type or it will fail prematurely. You CAN get away with paralleling a bunch of normal electrolytic caps together, but you probably shouldn't.
This is good information. I am still going to try a little harder to have my solution using the original 5V but as a next option I will try this out.
Thank you!
Mike
#### ()blivion
##### Active Member
I am still going to try a little harder to have my solution using the original 5V
Something like the MAX9722A can do what you want then as it has an internal charge pump. But you will need a TSSOP-16 or THIN QFN breakout to use it with protoboard. And using a breakout might cause layout related issues.
#### ronv
##### Well-Known Member
The LM386 should have worked with 5 volts. Since you have a scope, set the output to 0.25 volts. This won't be very loud into a speaker, but might be loud in an earphone or phone system. While you're at it, measure the input from the Raspberry.
#### ()blivion
##### Active Member
Yeah actually, it really should have worked. It should work with the Pi honestly, as 0.25 V RMS across 3300 Ω is not really very high demand. Shouldn't the Pi easily be able to output this? I wonder if the input to the phone system is not somehow damaged and drawing more current than it should be? Then again, I don't really know for sure.
I just came in because the regulator problem was easy to solve.
#### ronv
##### Well-Known Member
You're probably right, (). I don't know what the Raspberry puts out. Do you?
#### lilimike
##### Member
I am trying again with the LM386 at 5V, and with the volume at 1/4 the sound is quite bad: it sounds like a train, plus distortion.
Do I need a tone generator to measure the output from the Raspberry Pi? Using music, the reading keeps changing.
Edit: Maybe my values or something is wrong?
This is what I have:
#### lilimike
##### Member
I think I have found why the sound is so bad...
I am using the LM386N-4, and according to the datasheet Min = 5V.
The RasPi is delivering 4.8V.
Perhaps I should use the LM386N-1, which requires Min = 4V?
#### ()blivion
##### Active Member
As far as I know, a LM386N is a LM386N no matter how you slice it. I don't think there is a real difference between -1 or -4. And even so, 4.8 volts is really close.
Edit: That doesn't mean that running so close to the min isn't the problem. Just that the different part number shouldn't make a difference.
What datasheet are you reading from? Can you provide a link?
#### ronv
##### Well-Known Member
With the capacitor from 1 to 8 your gain is 200, so the output is probably clipping even when the pot is close to 0. You might also put a little resistance in series with the cap on the output. If you don't have 10Ω, use 4.7 or 15. When you scope the output, the voltage swing should be below 4 and above 1 or it is still clipping. The input should be less than 0.2 volts peak to peak. With those two numbers you should be able to set the gain correctly.
Yes, the -1 part is the right one but the others will probably work.
#### audioguru
##### Well-Known Member
Remove your C1. It makes the gain way too high.
Add a 10 ohm resistor in series with C5 as shown on EVERY schematic in the datasheet of the LM386. It might oscillate without it.
The 220uF capacitor C2 is calculated for an 8 ohm load. To feed the 3300 ohm input of the phone system it should be 0.47uF or 1uF.
If you built the amplifier on a solderless breadboard then it is probably oscillating and distorting due to the high capacitance between rows of contacts.
With a 5V supply, the maximum output without distorting of an LM386 into a 3300 ohm load is 3.8Vp-p which is 1.34V RMS which is more than enough.
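For reference, the arithmetic behind those two recommendations, worked out with the standard first-order high-pass and RMS-conversion formulas (added here; not from the thread). The output capacitor and the load set the bass cutoff, and the peak-to-peak swing converts to RMS as shown:

$$f_c=\frac{1}{2\pi RC}:\qquad \frac{1}{2\pi\,(3300\ \Omega)(1\ \mu\mathrm{F})}\approx 48\ \mathrm{Hz},\qquad \frac{1}{2\pi\,(8\ \Omega)(220\ \mu\mathrm{F})}\approx 90\ \mathrm{Hz}$$

$$V_{\mathrm{RMS}}=\frac{V_{\mathrm{p\text{-}p}}}{2\sqrt{2}}=\frac{3.8}{2\sqrt{2}}\approx 1.34\ \mathrm{V}$$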
#### lilimike
##### Member
Ok, I've removed C1, and that took a chunk out of the speaker volume as I went from a gain of 200 to 20. I added a 10Ω in series with C5 (that didn't make a difference, but it is there now), and when I replaced C2 with 1uF the volume in the speaker is the same as if it was plugged in directly to the Pi (to my ears), I guess due to the speaker being 8Ω and the output now configured for 3.3k. I will solder everything like this and test it on the phone system, hopefully this weekend.
Thank you all for your help.
Mike
#### lilimike
##### Member
I tested on the phone system and using 5V (from the Pi) I had to leave C1 and keep C2 at 220uF. Somehow I figured the 3.3k spec I read in the manual was either a min or a max or I simply had the wrong manual.
What I hear on the phone system is exactly the same as when I plug in an 8Ω speaker.
Something I was not comfortable with was that the volume pot generated distortion at less than about 1/4 of the way up. I added a 100k resistor in series with the audio input and it now performs better; distortion starts at 3/4 up.
I have everything soldered and enclosed in a plastic box.
I still have a 4Hz low sound that becomes noticeable only when the music stops. I can live with this, but if I had a fix it would make it perfect!
#### audioguru
##### Well-Known Member
Since you needed to use a 220uF output capacitor on the LM386 amplifier then the input impedance of the phone system music-on-hold must be 8 ohms, not 3.3k ohms.
#### lilimike
##### Member
yes that's what I figured.
#### ronv
##### Well-Known Member
You might need some big (maybe 47 µF) filter caps on the supply voltage and pin 7 to eliminate the "putt-putts". If you need more output without distortion, it sounds like a 12 volt wall wart is in your future.
#### lilimike
##### Member
I tried adding a 47uF cap between pin 7 and ground, also tried between pin 7 and +5V, and tried adding a 220uF between +5V and ground, and it didn't make a difference. It sounds like a choo-choo train when the volume is all the way down or when the music stops. Turning the volume up doesn't seem to make that sound louder, but when I stop the Mplayer service the sound stops, so it must be something else. It is not noticeable when the music is playing, so I can live with this.
As this project is for a specific use and I have to deliver, I will let it go as is. But working on this gave me a bunch of ideas so I've placed an order for a bunch of parts among which the LM2575T-5.0 suggested by ()blivion so I can get more power.
Mike
https://www.gradesaver.com/textbooks/math/trigonometry/CLONE-68cac39a-c5ec-4c26-8565-a44738e90952/chapter-5-trigonometric-identities-section-5-4-sum-and-difference-identities-for-sine-and-tangent-5-4-exercises-page-227/49
## Trigonometry (11th Edition) Clone
Remember from Reciprocal Identities: $$\cot\theta=\frac{1}{\tan\theta}\hspace{1.5cm}\csc\theta=\frac{1}{\sin\theta}\hspace{1.5cm}\sec\theta=\frac{1}{\cos\theta}$$ So as we see here, we can always derive the cotangent, cosecant and secant if we already know the tangent, sine and cosine of the sum or difference of 2 numbers or angles. Therefore, in standard trigonometry texts, only sine, cosine and tangent of the sum or difference are necessary, as the rest can be deduced using Reciprocal Identities.
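For instance, combining the reciprocal identity for cotangent with the tangent sum identity gives the cotangent of a sum directly; this worked line is added for illustration:
$$\cot(A+B)=\frac{1}{\tan(A+B)}=\frac{1-\tan A\tan B}{\tan A+\tan B}$$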
https://www.linstitute.net/archives/632632
USACO 2022 US Open Contest, Platinum Problem 1. 262144 Revisited
Bessie likes downloading games to play on her cell phone, even though she does find the small touch screen rather cumbersome to use with her large hooves.
She is particularly intrigued by the current game she is playing. The game starts with a sequence of N positive integers a_1, a_2, …, a_N (2 ≤ N ≤ 262,144), each in the range 1…10^6. In one move, Bessie can take two adjacent numbers and replace them with a single number equal to one greater than the maximum of the two (e.g., she might replace an adjacent pair (5, 7) with an 8). The game ends after N−1 moves, at which point only a single number remains. The goal is to minimize this final number.
Bessie knows that this game is too easy for you. So your job is not just to play the game optimally on a, but for every contiguous subsequence of a.
Output the sum of the minimum possible final numbers over all N(N+1)/2 contiguous subsequences of a.
INPUT FORMAT (input arrives from the terminal / stdin):
The first line contains N. The next line contains N space-separated integers denoting the input sequence.
OUTPUT FORMAT (print output to the terminal / stdout):
A single line containing the sum.
SAMPLE INPUT:
6
1 3 1 2 1 10
SAMPLE OUTPUT:
115
There are 6·7/2 = 21 contiguous subsequences in total. For example, the minimum possible final number for the contiguous subsequence [1,3,1,2,1] is 5, which can be obtained via the following sequence of operations:
original -> [1,3,1,2,1]
combine 1&3 -> [4,1,2,1]
combine 2&1 -> [4,1,3]
combine 1&3 -> [4,4]
combine 4&4 -> [5]
Here are the minimum possible final numbers for each contiguous subsequence:
final(1:1) = 1
final(1:2) = 4
final(1:3) = 5
final(1:4) = 5
final(1:5) = 5
final(1:6) = 11
final(2:2) = 3
final(2:3) = 4
final(2:4) = 4
final(2:5) = 5
final(2:6) = 11
final(3:3) = 1
final(3:4) = 3
final(3:5) = 4
final(3:6) = 11
final(4:4) = 2
final(4:5) = 3
final(4:6) = 11
final(5:5) = 1
final(5:6) = 11
final(6:6) = 10
SCORING:
• Test cases 2-3 satisfy N ≤ 300.
• Test cases 4-5 satisfy N ≤ 3000.
• In test cases 6-8, all values are at most 40.
• In test cases 9-11, the input sequence is non-decreasing.
• Test cases 12-23 satisfy no additional constraints.
Problem credits: Benjamin Qi
USACO 2022 US Open Contest, Platinum Problem 1. 262144 Revisited: solution (provided by Hanlin International Education, for reference only)
(Analysis by Benjamin Qi)
I will refer to contiguous sequences as intervals. Define the value of an interval to be the minimum possible final number it can be converted into.
Subtask 1: Similar to 248, we can apply dynamic programming on ranges. Specifically, if dp[i][j] denotes the value of the interval i…j, then

$$dp[i][j]=\begin{cases}A_i & i=j\\ \min_{i\le k<j}\max(dp[i][k],\,dp[k+1][j])+1 & i<j.\end{cases}$$

The time complexity is O(N^3).
#include <bits/stdc++.h>
using namespace std;
template <class T> using V = vector<T>;
#define all(x) begin(x), end(x)
template <class T> void ckmin(T &a, const T &b) { a = min(a, b); }
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int N;
cin >> N;
V<int> A(N);
for (int &x : A) cin >> x;
V<V<int>> dp(N, V<int>(N));
for (int i = N - 1; i >= 0; --i) {
dp.at(i).at(i) = A.at(i);
for (int j = i + 1; j < N; ++j) {
dp[i][j] = INT_MAX;
for (int k = i; k < j; ++k) {
ckmin(dp.at(i).at(j),
max(dp.at(i).at(k), dp.at(k + 1).at(j)) + 1);
}
}
}
int64_t ans = 0;
for (int i = 0; i < N; ++i) {
for (int j = i; j < N; ++j) {
ans += dp.at(i).at(j);
}
}
cout << ans << "\n";
}
Subtask 2: We can optimize the solution above by more quickly finding the maximum k′ such that dp[i][k′] ≤ dp[k′+1][j]. Then we only need to consider k ∈ {k′, k′+1} when computing dp[i][j]. Using the observation that k′ does not decrease as j increases while i is held fixed leads to a solution in O(N^2):
#include <bits/stdc++.h>
using namespace std;
template <class T> using V = vector<T>;
#define all(x) begin(x), end(x)
template <class T> void ckmin(T &a, const T &b) { a = min(a, b); }
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int N;
cin >> N;
V<int> A(N);
for (int &x : A) cin >> x;
V<V<int>> dp(N, V<int>(N));
for (int i = N - 1; i >= 0; --i) {
dp.at(i).at(i) = A.at(i);
int k = i - 1;
for (int j = i + 1; j < N; ++j) {
while (k + 1 < j && dp.at(i).at(k + 1) <= dp.at(k + 2).at(j)) ++k;
dp[i][j] = INT_MAX;
ckmin(dp[i][j], dp.at(k + 1).at(j));
ckmin(dp[i][j], dp.at(i).at(k + 1));
++dp[i][j];
}
}
int64_t ans = 0;
for (int i = 0; i < N; ++i) {
for (int j = i; j < N; ++j) {
ans += dp.at(i).at(j);
}
}
cout << ans << "\n";
}
Alternatively, finding k′ with binary search leads to a solution in O(N^2 log N).
Subtask 3: Similar to 262144, we can use binary lifting.
For each of v = 1, 2, 3, …, I count the number of intervals with value at least v. The answer is the sum of this quantity over all v.
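This counting works because of the standard identity for sums of positive integers, spelled out here for clarity (not stated explicitly in the original):

$$\sum_{I}\mathrm{val}(I)\;=\;\sum_{v\ge 1}\#\{I:\mathrm{val}(I)\ge v\},$$

where I ranges over all contiguous subsequences; every value is a positive integer, so the inner counts eventually reach zero and the sum is finite.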
#include <bits/stdc++.h>
using namespace std;
template <class T> using V = vector<T>;
#define all(x) begin(x), end(x)
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int N;
cin >> N;
V<int> A(N);
for (int &x : A) cin >> x;
V<V<int>> with_val;
for (int i = 0; i < N; ++i) {
while (size(with_val) <= A[i]) with_val.emplace_back();
with_val.at(A[i]).push_back(i);
}
V<int> nex(N + 1);
iota(all(nex), 0);
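// Invariant (inferred from the analysis above): at the start of the iteration
// for value v, nex[i] is the furthest j such that the interval [i, j) can be
// merged into a single number smaller than v (nex[i] == i is the empty
// interval). Hence intervals starting at i with value >= v are exactly those
// ending beyond nex[i], which is what `ans += N - nex[i]` counts below, and
// the doubling step nex[i] = nex[nex[i]] advances the invariant from v to v+1.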
int64_t ans = 0;
for (int v = 1;; ++v) {
if (nex[0] == N) {
cout << ans << "\n";
exit(0);
}
// add all intervals with value >= v
for (int i = 0; i <= N; ++i) ans += N - nex[i];
for (int i = 0; i <= N; ++i) nex[i] = nex[nex[i]];
if (v < size(with_val)) {
for (int i : with_val.at(v)) {
nex[i] = i + 1;
}
}
}
}
Subtask 4: For each i from 1 to N in increasing order, consider the values of all intervals with right endpoint i. Note that the value v of each such interval must satisfy v ∈ [A_i, A_i + ⌈log2 i⌉] due to A being sorted. Thus, it suffices to be able to compute, for each v, the minimum l such that dp[l][i] ≤ v. To do this, we maintain a partition of 1…i into contiguous subsequences such that every contiguous subsequence has value at most A_i and is leftwards-maximal (extending any subsequence one element to the left would cause its value to exceed A_i). When transitioning from i−1 to i, we merge every two consecutive contiguous subsequences A_i − A_{i−1} times and then add the contiguous subsequence [i, i] to the end of the partition.
#include <bits/stdc++.h>
using namespace std;
template <class T> using V = vector<T>;
#define all(x) begin(x), end(x)
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int N;
cin >> N;
vector<int> A(N);
for (int &x : A) cin >> x;
assert(is_sorted(all(A)));
// left endpoints of each partition interval in decreasing order
deque<int> left_ends;
int64_t ans = 0;
for (int i = 0; i < N; ++i) {
if (i) {
for (int v = A[i - 1]; v < A[i]; ++v) {
if (size(left_ends) == 1) break;
// merge every two consecutive intervals in partition
deque<int> n_left_ends;
for (int j = 0; j < (int)size(left_ends); ++j) {
if ((j & 1) || j + 1 == (int)size(left_ends)) {
n_left_ends.push_back(left_ends[j]);
}
}
swap(left_ends, n_left_ends);
}
}
left_ends.push_front(i); // add [i,i] to partition
int L = i + 1;
for (int v = A[i]; L; ++v) {
int next_L = left_ends.at(
min((int)size(left_ends) - 1, (1 << (v - A[i])) - 1));
ans += (int64_t)(L - next_L) * v;
L = next_L;
}
}
cout << ans << "\n";
}
Full Credit: Call an interval relevant if it is not possible to extend it to the left or to the right without increasing its value.
Claim: The number of relevant intervals is O(N log N).
Proof: See the end of the analysis.
We'll compute the same quantities as in subtask 3, but this time we'll transition from v−1 to v in time proportional to the number of relevant intervals with value v−1 plus the number of relevant intervals with value v; this gives us a solution in O(N log N).
For a fixed value of v, say that an interval [l, r) is a segment if it contains no value greater than v and min(A_{l−1}, A_r) > v. Say that an interval is maximal with respect to v if it has value at most v and extending it to the left or right would cause its value to exceed v. Note that a maximal interval [l, r) must be relevant, and it must either have value equal to v or be a segment.
My code follows. ivals[i] stores all maximal intervals contained within the segment with left endpoint i. The following steps are used to transition from v−1 to v:
1. Apply halve on all segments containing more than one maximal interval (the left endpoints of every such segment are stored in active). Before, all intervals within the segment were maximal with respect to v−1; after, all intervals within the segment are maximal with respect to v.
2. Add a segment and a maximal interval of the form [i, i+1) for each i satisfying A_i = v, and then merge adjacent segments.
#include <bits/stdc++.h>
using namespace std;
template <class T> using V = vector<T>;
#define all(x) begin(x), end(x)
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int N;
cin >> N;
V<int> A(N);
V<V<int>> with_A;
for (int i = 0; i < N; ++i) {
cin >> A[i];
while ((int)size(with_A) <= A[i]) with_A.emplace_back();
with_A[A[i]].push_back(i);
}
// sum(l ... r)
auto sum_arith = [&](int64_t l, int64_t r) {
return (r + l) * (r - l + 1) / 2;
};
// total number of intervals covered by list of maximal intervals
auto contrib = [&](const list<pair<int, int>> &L) {
int64_t ret = 0;
for (auto it = begin(L);; ++it) {
auto [x, y] = *it;
if (next(it) == end(L)) {
ret += sum_arith(0, y - x);
break;
} else {
int next_x = next(it)->first;
ret += int64_t(next_x - x) * y - sum_arith(x, next_x - 1);
}
}
return ret;
};
int64_t num_at_least = (int64_t)N * (N + 1) / 2;
auto halve = [&](list<pair<int, int>> &L) {
if (size(L) <= 1) return;
num_at_least += contrib(L);
int max_so_far = -1;
list<pair<int, int>> n_L;
auto it = begin(L);
for (auto [x, y] : L) {
while (next(it) != end(L) && next(it)->first <= y) ++it;
if (it->second > max_so_far) {
n_L.push_back({x, max_so_far = it->second});
}
}
swap(L, n_L);
num_at_least -= contrib(L);
};
// doubly linked list to maintain segments
V<int> pre(N + 1);
iota(all(pre), 0);
V<int> nex = pre;
int64_t ans = 0;
V<int> active; // active segments
// maximal intervals within each segment
V<list<pair<int, int>>> ivals(N + 1);
for (int v = 1; num_at_least; ++v) {
ans += num_at_least; // # intervals with value >= v
V<int> n_active;
for (int l : active) {
halve(ivals[l]);
if (size(ivals[l]) > 1) n_active.push_back(l);
}
if (v < (int)size(with_A)) {
for (int i : with_A[v]) {
int l = pre[i], r = nex[i + 1];
bool should_add = size(ivals[l]) <= 1;
pre[i] = nex[i + 1] = -1;
nex[l] = r, pre[r] = l;
ivals[l].push_back({i, i + 1});
--num_at_least;
ivals[l].splice(end(ivals[l]), ivals[i + 1]);
}
}
swap(active, n_active);
}
cout << ans << "\n";
}
Proof of Claim: Let f(N) denote the maximum possible number of relevant subarrays for a sequence of size N. We can show that f(N) ≤ O(log N!) ≤ O(N log N). This upper bound can be attained when all elements of the input sequence are equal.
The idea is to consider a Cartesian tree of a. Specifically, suppose that one of the maximum elements of a is located at position p (1 ≤ p ≤ N). Then

$$f(N)\le f(p-1)+f(N-p)+\#(\text{relevant intervals containing }p).$$

WLOG we may assume 2p ≤ N.
Lemma:
$$\#(\text{relevant intervals containing }p)\le O\!\left(p\log\tfrac{N}{p}\right)$$
Proof of Lemma: We can in fact show that
$$\#(\text{relevant intervals containing }p\text{ with value }a_p+k)\le \min(p,\,2^k)\quad\text{for }0\le k\le\lceil\log_2 N\rceil.$$
The p comes from all relevant intervals with a fixed value having distinct left endpoints, and the 2^k comes from the fact that to generate a relevant interval of value a_p + k + 1 containing p, you must start with a relevant interval of value a_p + k and choose to extend it either to the right or to the left.
To finish, observe that the summation $\sum_{k=0}^{\lceil\log_2 N\rceil}\#(\text{relevant intervals containing }p\text{ with value }a_p+k)$ is dominated by the terms satisfying $\log_2 p \le k \le \lceil\log_2 N\rceil$.
Since $O\!\left(p\log\tfrac{N}{p}\right)\le O\!\left(\log\binom{N}{p}\right)\le O\!\left(\log\tfrac{N!}{(p-1)!\,(N-p)!}\right)$, the claim follows from the lemma by induction:
$$f(N)\le f(p-1)+f(N-p)+\#(\text{relevant intervals containing }p)\le O(\log{(p-1)!})+O(\log{(N-p)!})+O\!\left(\log\tfrac{N!}{(p-1)!\,(N-p)!}\right)\le O(\log N!).$$
Here is an alternative approach by Danny Mittal that uses both the idea from the subtask 4 solution and the Cartesian tree. He repeatedly finds the index mid of the rightmost maximum of the input sequence, solves the problem recursively on a_1…a_{mid−1} and a_{mid+1}…a_N, and then adds the contribution of all intervals containing a_mid. This also runs in O(N log N).
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader; // needed for the reconstructed input reading below
import java.util.*;
public class Revisited262144Array {
static int n;
static int[] xs;
static int[] left;
static int[] right;
static int[] forward;
static int[] reverse;
static long answer; // running sum of interval values (declaration missing from the extracted code)
public static int reduceForward(int start, int length, int lgFactor) {
if (lgFactor == 0) {
return length;
}
int factor = 1 << Math.min(lgFactor, 20);
int j = start;
for (int k = start + factor - 1; k < start + length - 1; k += factor) {
forward[j++] = forward[k];
}
forward[j++] = forward[start + length - 1];
return j - start;
}
public static void reduceReverse(int start, int length, int lgFactor) {
if (lgFactor == 0) {
return;
}
int factor = 1 << Math.min(lgFactor, 20);
if (length > factor) {
int j = start + 1;
for (int k = start + 1 + ((length - factor - 1) % factor); k < start + length; k += factor) {
reverse[j++] = reverse[k];
}
}
}
public static int funStuff(int from, int mid, int to, int riseTo) {
if (from > to) {
return 0;
}
int leftLength = funStuff(from, left[mid], mid - 1, xs[mid]);
int rightLength = funStuff(mid + 1, right[mid], to, xs[mid]);
int leftStart = from;
int rightStart = mid + 1;
int last = mid - 1;
for (int j = 1; j <= rightLength + 1; j++) {
int frontier = j > 1 ? forward[rightStart + (j - 2)] : mid;
long weight = frontier - last;
last = frontier;
int lastInside = mid + 1;
int leftLast = leftLength == 0 ? mid : reverse[leftStart];
for (int d = 0; d <= 18 && lastInside > leftLast; d++) {
if (1 << d >= j) {
int frontierInside;
if (1 << d == j) {
frontierInside = mid;
} else if (1 << d <= j + leftLength) {
frontierInside = reverse[leftStart + leftLength + j - (1 << d)];
} else {
frontierInside = leftLast;
}
long weightInside = lastInside - frontierInside;
lastInside = frontierInside;
answer += weight * weightInside * ((long) (xs[mid] + d));
}
}
}
forward[leftStart + leftLength] = mid;
System.arraycopy(forward, rightStart, forward, leftStart + leftLength + 1, rightLength);
reverse[leftStart + leftLength] = mid;
System.arraycopy(reverse, rightStart, reverse, leftStart + leftLength + 1, rightLength);
int length = reduceForward(leftStart, leftLength + 1 + rightLength, riseTo - xs[mid]);
reduceReverse(leftStart, leftLength + 1 + rightLength, riseTo - xs[mid]);
return length;
}
public static void main(String[] args) throws IOException {
// Input reading reconstructed: the extracted code referenced an undeclared
// tokenizer and never set n, so a standard BufferedReader setup is assumed.
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
n = Integer.parseInt(in.readLine().trim());
StringTokenizer tokenizer = new StringTokenizer(in.readLine());
xs = new int[n];
for (int j = 0; j < n; j++) {
xs[j] = Integer.parseInt(tokenizer.nextToken());
}
left = new int[n];
right = new int[n];
ArrayDeque<Integer> stack = new ArrayDeque<>();
for (int j = 0; j < n; j++) {
while (!stack.isEmpty() && xs[stack.peek()] <= xs[j]) {
left[j] = stack.pop();
}
if (!stack.isEmpty()) {
right[stack.peek()] = j;
}
stack.push(j);
}
while (stack.size() > 1) {
stack.pop();
}
forward = new int[n];
reverse = new int[n];
funStuff(0, stack.pop(), n - 1, Integer.MAX_VALUE);
System.out.println(answer); // output step, assumed missing from the extraction
}
}
https://gateoverflow.in/operating-system
# Recent questions and answers in Operating System
1
In the context of operating systems, which of the following statements is/are correct with respect to paging? Paging helps solve the issue of external fragmentation; page size has no impact on internal fragmentation; paging incurs memory overheads; multi-level paging is necessary to support pages of different sizes.
2
Which of the following statement(s) is/are correct in the context of $\text{CPU}$ scheduling? Turnaround time includes waiting time; the goal is to only maximize $\text{CPU}$ utilization and minimize throughput; round-robin policy can be used even when the $\text{CPU}$ time required by each of the processes is not known apriori; implementing preemptive scheduling needs hardware support.
3
Consider a computer system with multiple shared resource types, with one instance per resource type. Each instance can be owned by only one process at a time. Owning and freeing of resources are done by holding a global lock $(L)$. The following scheme ... deadlocks will not occur; the scheme may lead to live-lock; the scheme may lead to starvation; the scheme violates the mutual exclusion property.
4
Consider a system using 2-level paging, where the page table is divided into 2K pages, each of size 4 KB. The physical address space (PAS) is 64 MB, divided into 16K frames, and memory is byte-addressable. The page table entry size is 2 bytes at both levels. Calculate the length of the logical and physical addresses, and the total number of entries at the second level.
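One way to work this out, added for illustration under the usual assumptions (byte-addressable memory, power-of-two sizes):

$$\begin{aligned}
\text{frame size} &= 64\ \text{MB}/16\text{K} = 4\ \text{KB} \;\Rightarrow\; \text{physical address} = 26\ \text{bits (12-bit offset)}\\
\text{entries per page-table page} &= 4\ \text{KB}/2\ \text{B} = 2^{11}\\
\text{second-level entries} &= 2^{11}\ \text{pages}\times 2^{11} = 2^{22}\\
\text{logical address} &= 11+11+12 = 34\ \text{bits}
\end{aligned}$$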
5
Ans. 52 msec
6
A user level process in Unix traps the signal sent on a Ctrl-C input, and has a signal handling routine that saves appropriate files before terminating the process. When a Ctrl-C input is given to this process, what is the mode in which the signal handling routine executes? User mode; Kernel mode; Superuser mode; Privileged mode.
7
The state of a process during context switching is 1. May be busy 2. May be idle 3. Always idle 4. always busy
8
Suppose that the running time for each process in milliseconds is an exponential random variable with parameter λ=1/20. If process P1 arrives immediately ahead of process P2 in the running state, then the probability that process P2 will have to wait more than 20 milliseconds is _____________. A 0.274 B 0.324 C 0.428 D 0.368 How should I approach this? I am not even able to understand the question.
9
The read system call to fetch data from a file always blocks the invoking process.[True / False] [blocking means context switching to another process]
10
Consider three processes (process id $0,1,2$ respectively) with compute time bursts $2,4$ and $8$ time units. All processes arrive at time zero. Consider the Longest Remaining Time First (LRTF) scheduling algorithm. In LRTF ties are broken by giving priority to the process with the lowest process id. The average turn around time is: $13$ units; $14$ units; $15$ units; $16$ units.
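A quick trace, added for illustration and assuming one-unit scheduling granularity:

$$\begin{aligned}
&t=0..4:\ P_2\ \text{runs};\ \text{remaining}=(2,4,4)\\
&t=4..8:\ P_1,P_2,P_1,P_2;\ \text{remaining}=(2,2,2)\\
&t=8..14:\ P_0,P_1,P_2,P_0,P_1,P_2\\
&\text{completions }12,13,14 \;\Rightarrow\; \text{average TAT}=(12+13+14)/3=13\ \text{units}
\end{aligned}$$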
11
Consider the following pseudocode, where $\textsf{S}$ is a semaphore initialized to $5$ in line $\#2$ and $\textsf{counter}$ is a shared variable initialized to $0$ in line $\#1$. Assume that the increment operation in line $\#7$ is $\textit{not}$ ... $0$ after all the threads successfully complete the execution of $\textsf{parop}$ There is a deadlock involving all the threads
12
Consider the following two-process synchronization solution. ... two- process synchronization solution. This solution violates mutual exclusion requirement. This solution violates progress requirement. This solution violates bounded wait requirement.
13
The $P$ and $V$ operations on counting semaphores, where s is a counting semaphore, are defined as follows: $P(s):$ $s=s-1;$ If $s < 0$ then wait; $V(s):$ $s=s+1;$ If $s \leq0$ then wake up process waiting on s; Assume that $P_b$ and $V_b$ the wait and signal operations on ... $x_b$ and $y_b$ are respectively $0$ and $0$ $0$ and $1$ $1$ and $0$ $1$ and $1$
14
Let the time taken to switch from user mode to kernel mode of execution be $T1$ while time taken to switch between two user processes be $T2$. Which of the following is correct? $T1 > T2$ $T1 = T2$ $T1 < T2$ Nothing can be said about the relation between $T1$ and $T2$
15
At a particular time of computation, the value of a counting semaphore is $7$. Then $20 \ P$ (wait) operations and $15 \ V$ (signal) operations are completed on this semaphore. What is the resulting value of the semaphore? $28$; $12$; $2$; $42$.
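Worked for illustration: each P (wait) decrements the semaphore and each V (signal) increments it, so the resulting value is

$$7-20+15=2.$$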
16
In a particular system it is observed that, the cache performance gets improved as a result of increasing the block size of the cache. The primary reason behind this is : Programs exhibits temporal locality Programs have small working set Read operation is frequently required rather than write operation Programs exhibits spatial locality
17
Consider the following functions $f()$ and $g().$ f(){ w = 3; w = 4; } g(){ z = w; z = z + 2*w; print(z); } We start with $w$ set to $0$ and execute $f()$ and $g()$ in parallel-that is, at each step we either execute one statement from $f()$ or one statement from $g()$. What is the set of possible values printed by $g()?$ $0,9,12$ $0,8,9,12$ $0,6,8,9,11,12$ $0,4,6,9,10,12$
18
State any undesirable characteristic of the following criteria for measuring performance of an operating system: Turn around time
19
State any undesirable characteristic of the following criteria for measuring performance of an operating system: Waiting time
20
Consider the following multi-threaded code segment (in a mix of C and pseudo-code), invoked by two processes $P_1$ and $P_2$, and each of the processes spawns two threads $T_1$ and $T_2$: int x = 0; // global Lock L1; // global main () { create a thread to execute foo() ... will print the value of $y$ as $2.$ Both $T_1$ and $T_2$, in both the processes, will print the value of $y$ as $1.$
21
Q1: Safe or not? I have a doubt about the given answer because I am getting safe for question 1 and unsafe for question 2, but the opposite answers are given in the test series.
22
Explain the difference between internal fragmentation and external fragmentation. Which one occurs in paging systems? Which one occurs in systems using pure segmentation?
23
Explain how hard links and soft links differ with respect to i-node allocation.
24
Consider the following page reference string: $1\ 2\ 3\ 4\ 2\ 1\ 5\ 6\ 2\ 1\ 2\ 3\ 7\ 6\ 3\ 2\ 1\ 2\ 3\ 6$ What is the minimum number of frames required to get a single page fault for the above sequence, assuming the LRU replacement strategy? $7$ $4$ $6$ $5$
25
Suppose that a machine has $38$-bit virtual addresses and $32$-bit physical addresses. What is the main advantage of a multilevel page table over a single-level one? With a two-level page table, $16$-KB pages, and $4$-byte entries, how many bits should be allocated for the top-level page table field and how many for the next-level page table field? Explain.
26
Consider a system having $10$ I/O-bound jobs and $1$ CPU-bound job. Each I/O-bound job issues an I/O request once for every 1 ms of CPU computation, and each I/O request takes $10$ ms. If the context switch overhead is $0.1$ ms, then using round-robin scheduling with a time quantum of $10$ ms, the CPU efficiency is __________________
27
Consider the following four processes with their corresponding arrival time and burst time: $\begin{array}{ccc} \text{Process} & \text{Arrival time} & \text{Burst time (in ms)}\\ \text{P1} & 0.0 & 8\\ \text{P2} & 0.6 & 6\\ \text{P3} & 3.8 & 4\\ \text{P4} & 4.4 & 2\end{array}$ What is the average turn around time (in ms) for these processes using FCFS scheduling algorithm? $15$ $12.8$ $13$ none of the options
28
Starvation can be avoided by which of the following resource allocation policies? (i) Shortest job first. (ii) First come first serve. (i) only (i) and (ii) only (ii) only None of the options
29
Process is in a ready state _______ . when process is scheduled to run after some execution when process is unable to run until some task has been completed when process is using the $CPU$ none of the options
30
Three CPU-bound tasks, with execution times of $15,12$ and $5$ time units respectively arrive at times $0,t$ and $8$, respectively. If the operating system implements a shortest remaining time first scheduling algorithm, what should be the value of $t$ to have $4$ context switches? Ignore the context switches at time $0$ and at the end. $0<t<3$ $t=0$ $t\leq 3$ $3<t<8$
31
Jobs keep arriving at a processor. A job can have an associated time length as well as a priority tag. New jobs may arrive while some earlier jobs are running. Some jobs may keep running indefinitely. A ... the following job-scheduling policies is starvation free? Round-robin Shortest job first Priority queuing Latest job first None of the others
32
Consider the following set of processes, assumed to have arrived at time $0$. Consider the CPU scheduling algorithms Shortest Job First (SJF) and Round Robin (RR). For RR, assume that the processes are scheduled in the order $P_1, P_2, P_3, P_4$ ... absolute value of the difference between the average turnaround times (in ms) of SJF and RR (round off to $2$ decimal places) is _______
33
Consider the following statements about process state transitions for a system using preemptive scheduling. A running process can move to ready state. A ready process can move to running state. A blocked process can move to running state. A blocked process can move to ready state. Which of the above statements are TRUE? I, II, and III only II and III only I, II, and IV only I, II, III and IV only
34
Names of some of the Operating Systems are given below: MS-DOS XENIX OS/$2$ In the above list, which of the following operating systems did not provide a multiuser facility? (a) only (a) and (b) only (b) and (c) only (a), (b) and (c)
35
Which statement is not correct about the “init” process in Unix? It is generally the parent of the login shell. It has PID $1$. It is the first process in the system. Init forks and execs a ‘getty’ process at every port connected to a terminal.
36
Consider three CPU-intensive processes, which require $10,20$ and $30$ time units and arrive at times $0,2$ and $6$, respectively. How many context switches are needed if the operating system implements a shortest remaining time first scheduling algorithm? Do not count the context switches at time zero and at the end. $1$ $2$ $3$ $4$
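A quick way to check this style of question is to simulate SRTF directly. A minimal Python sketch (process tuples are (arrival, burst); the function name is illustrative):

def srtf_context_switches(jobs):
    # Count switches between different processes, ignoring the
    # dispatch at t = 0 and the final completion.
    remaining = {i: burst for i, (_, burst) in enumerate(jobs)}
    t, last, switches = 0, None, 0
    while any(r > 0 for r in remaining.values()):
        ready = [i for i, (arr, _) in enumerate(jobs)
                 if arr <= t and remaining[i] > 0]
        if not ready:
            t += 1
            continue
        cur = min(ready, key=lambda i: remaining[i])  # shortest remaining time
        if last is not None and cur != last:
            switches += 1
        remaining[cur] -= 1
        last = cur
        t += 1
    return switches

print(srtf_context_switches([(0, 10), (2, 20), (6, 30)]))  # -> 2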
37
Consider three processes, all arriving at time zero, with total execution time of $10,20$ and $30$ units, respectively. Each process spends the first $20\%$ of execution time doing I/O, the next $70\%$ of time doing computation, and the last $10\%$ of time doing I/O again. The ... as much as possible. For what percentage of time does the CPU remain idle? $0\%$ $10.6\%$ $30.0\%$ $89.4\%$
Determine the number of page faults when references to pages occur in the following order: $1,2,4,5,2,1,2,4$. Assume that the main memory can accommodate $3$ pages and the main memory already has the pages $1$ and $2$, with page $1$ having been brought earlier than page $2$.(LRU algorithm is used). $3$ $5$ $4$ None of these.
If a processor has a $32$-bit virtual address, a $28$-bit physical address, and $2$ KB pages, how many bits are required for the virtual and physical page numbers? $17,21$ $21,17$ $6,10$ None
https://infoscience.epfl.ch/record/139364
## On the naturality of the exterior differential
We give sufficient conditions for the naturality of the exterior differential under Sobolev mappings. In other words, we study the validity of the equation $d\, f^* \alpha = f^*\; d\alpha$ for a smooth form $\alpha$ and a Sobolev map $f$.
Published in:
C. R. Math. Rep. Acad. Sci. Canada, 30, 1, 1-10
Year:
2008
https://exampur.com/short-quiz/1025/
# UP LEKHPAL QUANT QUIZ 1
## Question 1:
The difference between two numbers is 3 and the difference between their cubes is 999. Find the difference between their squares.
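One way to work this, assuming the two numbers are $a$ and $b$ with $a>b$: from $a-b=3$ and $a^3-b^3=(a-b)(a^2+ab+b^2)=999$ we get $a^2+ab+b^2=333$. Since $(a-b)^2=a^2-2ab+b^2=9$, subtracting gives $3ab=324$, i.e. $ab=108$. Then

$(a+b)^2=(a-b)^2+4ab=9+432=441 \Rightarrow a+b=21,\quad a^2-b^2=(a-b)(a+b)=3\times 21=63$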
## Question 2:
A train runs from Delhi to Rohtak at an average speed of $30 \mathrm{~km/h}$ and returns at an average speed of $40 \mathrm{~km/h}$. The average speed of the train over the whole journey is -
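Since the two legs cover equal distances, the average speed is the harmonic mean of the two speeds:

$\bar{v}=\frac{2\times 30\times 40}{30+40}=\frac{2400}{70}=\frac{240}{7}\approx 34.3 \mathrm{~km/h}$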
## Question 3:
Direction: The table given below shows the number of tickets sold in six different theatres; some tickets are sold to children and the remaining tickets to adults (male and female). Study the data carefully and answer the following questions.
A total of 80 tickets is sold in each theatre.
Find the ratio of the number of tickets sold to males by theatres A3 and A6 together to the number of tickets sold to females by theatres A3 and A5 together.
## Question 4:
If $\frac{1}{2^{1}}+\frac{1}{2^{2}}+\cdots+\frac{1}{2^{10}}=\frac{1}{k}$, then what is the value of $k$?
## Question 5:
3 men and 4 women can do a piece of work in 7 days, whereas 2 men and 1 woman can do it in 41 days. In how many days will 7 women complete the same work?
## Question 6:
A completed a work in 24 days and B completed the same work in 30 days. They started together and worked for 12 days; after 12 days A left the work and B completed the remaining work alone. Find the number of days B took to complete the remaining work.
## Question 7:
The in-radius of an equilateral triangle is of length $3 \mathrm{~cm}$. Then the length of each of its medians is:
## Question 8:
At what rate of compound interest per annum will a sum of ₹10000 become ₹12544 in 2 years?
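A worked sketch: with annual rate $r$,

$10000(1+r)^2=12544 \Rightarrow (1+r)^2=1.2544 \Rightarrow 1+r=1.12 \Rightarrow r=12\%$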
## Question 9:
Simplified form of $\left(25^{\frac{3}{2}}+25^{-\frac{3}{2}}\right)$ is
## Question 10:
$7 \div 21-4 \div 3+4-4 \div 10$ of $4 \times 10$
https://www.physicsforums.com/threads/1-oo-0.179633/
# 1/oo = 0 ?
1. Aug 7, 2007
### Nick666
Well? Is it equal to zero? If there are threads with this subject, please redirect me to them.
2. Aug 7, 2007
### Kummer
Please look in the philosophy forum.
(Because this is not a topic mathematicians discuss, just philosophers.)
Last edited: Aug 7, 2007
3. Aug 7, 2007
### mgb_phys
To an engineer or physicist yes.
We aren't as squeamish as mathematicians when it comes to needing an answer.
4. Aug 7, 2007
5. Aug 7, 2007
### homology
Or, heck, check out nonstandard analysis... But I assume you're (Nick666) talking about calculus and perhaps a limit that comes up? The idea is that 1/big is small, and 1/bigger is smaller, and so I can always choose an x to make 1/x as small as you'd like it (or as close to zero).
Cheers,
Kevin
6. Aug 7, 2007
### Moridin
Here is the correct mathematical notation:
$$\lim_{x\to\infty} \frac{1}{x} = 0$$
1 over infinity is not a valid computation. Actually, it should be read as "the limit of 1 over x as x goes to infinity is equal to zero".
7. Aug 7, 2007
### matt grime
Oh dear, so many misconceptions here.
1/oo is a perfectly good symbol. In the extended complex plane it is 0. As it would be in the extended reals - you do not need limits at all to answer that. However, the symbol 1/oo does not have a canonical meaning - I can think of no symbol in mathematics that has a canonical meaning. It's not even true that there is a unique meaning for the symbol 1, or 0 for that matter, is there, so why should there be such a meaning here?
8. Aug 8, 2007
### jostpuur
Nick666, do you know yourself what you mean with the infinity? Is there a definition you are using?
9. Aug 8, 2007
### Nick666
Let oo be 999... :) (Oh, can 999... be infinity?)
Last edited: Aug 8, 2007
10. Aug 8, 2007
### HallsofIvy
Staff Emeritus
Now you're just pulling our chain!
11. Aug 8, 2007
### jostpuur
If you write 999..., I'm afraid I'll have to ask again: do you know yourself what you mean by that?
For example, a number 123 is $1\cdot 10^2 + 2\cdot 10^1 + 3\cdot 10^0$. In general natural numbers can be written as $\sum_{k=0}^N a_k 10^k$, where for all k $a_k\in\{0,1,2,\ldots,9\}$. Your number starts like $9\cdot 10^{?} + \cdots$, and what do you have up there in the exponent?
Writing ...999 would make more sense, because it would be $\sum_{k=0}^{\infty} 9\cdot 10^k$, but I don't know what this means either, because the sum doesn't converge towards any natural number.
It seems your problem is, that you don't know what you mean with the infinity. If you are interested in the basics of analysis, I think Moridin's answer has the point. $\infty$ is a symbol, that usually means that there is some kind of limiting process. The symbol doesn't have an independent meaning there, but it gets meaning in expressions like $\lim_{n\to\infty}$ and $\sum_{k=0}^{\infty}$.
12. Aug 8, 2007
### Nick666
999... As in infinitely many 9's.
And 1/"that sum you wrote" = ?
13. Aug 8, 2007
### Nick666
And another question about the sum you wrote. Isn't every element of that sum a natural number? (9, 90, 900, 9000 etc.) Or let me put it another way: 10^k, when k -> oo, isn't that a natural number? I mean, if we multiply 10 by 10 by 10........ and so on, shouldn't we get a natural number?
Last edited: Aug 8, 2007
14. Aug 8, 2007
### CompuChip
Yes it is.
So is every partial sum (cutting off the summation after a finite number of terms).
But the sum itself isn't.
15. Aug 8, 2007
### Nick666
See my above edited post.
But if we add a bunch of natural numbers, no matter how many, isn't it logical that we should also get a natural number? (or maybe this is why I got low grades at math haha)
16. Aug 8, 2007
### jostpuur
Not at all! $\lim_{k\to\infty} 10^k$ is not a natural number.
17. Aug 8, 2007
### Moridin
As you cannot compute $\infty$ (division by zero is undefined), how would it be possible to compute something that involves it without using limits? I'm not sure I understand.
18. Aug 8, 2007
### matt grime
It's just a symbol. One that is used in the context of limits in analysis, and one that is not "computed' (whatever that means) in terms of limits in other contexts.
19. Aug 8, 2007
### Moridin
20. Aug 8, 2007
### Nick666
I still don't understand how, if you add a natural number to a natural number and another natural number and so on, you don't get a natural number. If you add 1 apple and 1 apple and 1 apple and so on, don't you get an infinite number.... of apples ???
https://www.zbmath.org/?q=ai%3Akoo.bonyong
# zbMATH — the first resource for mathematics
Isogeometric shape design sensitivity analysis using transformed basis functions for Kronecker delta property. (English) Zbl 1297.74096
Summary: The isogeometric shape design sensitivity analysis (DSA) includes desirable features: easy design parameterization and accurate shape sensitivity embedding the higher-order geometric information of curvature and normal vector. Due to the non-interpolatory property of the NURBS basis, however, the imposition of essential boundary conditions is not so straightforward in the isogeometric method. Taking advantage of the geometrically exact property, an isogeometric DSA method is developed applying a mixed transformation to handle the boundary condition. A set of control points and NURBS basis functions is added using the $$h$$-refinement and Newton iterations to precisely locate the control point to impose the boundary condition. In spite of the additional transformation, its computation cost is comparable to the original one with the penalty approach, since the obtained Kronecker delta property makes it possible to reduce the size of the system matrix. Through demonstrative numerical examples, the effectiveness, accuracy, and computing cost of the developed DSA method are discussed.
##### MSC:
74P15 Topological methods for optimization problems in solid mechanics
65D17 Computer-aided design (modeling of curves and surfaces)
49Q12 Sensitivity analysis for optimization problems on manifolds
https://svn.geocomp.uq.edu.au/escript/trunk/doc/cookbook/example07.tex?revision=3003&view=markup&sortby=log&pathrev=3373
# Contents of /trunk/doc/cookbook/example07.tex
Revision 3003
Wed Apr 7 02:29:57 2010 UTC (10 years, 10 months ago) by ahallam
File MIME type: application/x-tex
File size: 3320 byte(s)
Pressure wave working, accelleration example, new entries to cookbook for example 7.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Copyright (c) 2003-2010 by University of Queensland
% Earth Systems Science Computational Center (ESSCC)
%
% Primary Business: Queensland, Australia
% Licensed under the Open Software License version 3.0
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

The acoustic wave equation governs the propagation of pressure waves. Wave
types that obey this law tend to travel in liquids or gases where shear waves
or longitudinal style wave motion is not possible. The obvious example is sound
waves.

The acoustic wave equation is
\begin{equation}
\nabla ^2 p - \frac{1}{c^2} \frac{\partial ^2 p}{\partial t^2} = 0
\label{eqn:acswave}
\end{equation}
where $p$ is the pressure, $t$ is the time and $c$ is the wave velocity.

\section{Numerical Solution Stability}
Unfortunately, the wave equation belongs to a class of equations called
\textbf{stiff} PDEs. These types of equations can be difficult to solve
numerically as they tend to oscillate about the exact solution and can
eventually blow up. To counter this problem, unconditionally stable schemes
like the backward Euler method are required. In terms of the wave equation,
the analytical wave must not propagate faster than the numerical wave is able
to, and in general, needs to be much slower than the numerical wave.

For example, a line 100m long is discretised into 1m intervals or 101 nodes.
If a wave enters with a propagation velocity of 100m/s then the travel time
for the wave between each node will be 0.01 seconds. The time step must
therefore be significantly less than this; of the order of $10^{-4}$ would be
appropriate.

This requirement for very small step sizes makes stiff equations difficult to
solve numerically due to the large number of time iterations required in each
solution. Models with very high velocities and fine meshes will be the worst
affected by this problem.

\section{Displacement Solution}
\sslist{example07a.py}

We begin the solution to this PDE with the centred difference formula for the
second derivative;
\begin{equation}
f''(x) \approx \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}
\label{eqn:centdiff}
\end{equation}
substituting \refEq{eqn:centdiff} for $\frac{\partial ^2 p }{\partial t ^2}$
in \refEq{eqn:acswave};
\begin{equation}
\nabla ^2 p - \frac{1}{c^2h^2} \left[p_{(t+1)} - 2p_{(t)} + p_{(t-1)} \right]
= 0
\label{eqn:waveu}
\end{equation}
Rearranging for $p_{(t+1)}$;
\begin{equation}
p_{(t+1)} = c^2 h^2 \nabla ^2 p_{(t)} + 2p_{(t)} - p_{(t-1)}
\end{equation}
this can be compared with the general form of the \modLPDE module and it
becomes clear that $D=1$, $X=-c^2 h^2 \nabla ^2 p_{(t)}$ and $Y=2p_{(t)} -
p_{(t-1)}$.

\section{Acceleration Solution}
\sslist{example07b.py}

An alternative method is to solve for the acceleration $\frac{\partial ^2
p}{\partial t^2}$ directly, and derive the displacement solution from the
PDE solution. \refEq{eqn:waveu} is thus modified;
\begin{equation}
\nabla ^2 p - \frac{1}{c^2} a = 0
\label{eqn:wavea}
\end{equation}
and can be solved directly with $Y=0$ and $X=-c^2 \nabla ^2 p_{(t)}$.
After each iteration the displacement is re-evaluated via;
\begin{equation}
p_{(t+1)}=2p_{(t)} - p_{(t-1)} + h^2a
\end{equation}
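To make the displacement update concrete, here is a minimal 1-D sketch of the scheme above in plain Python; the grid size, wave speed, and initial pulse are illustrative choices, not values taken from the example scripts:

import numpy as np

nx, dx = 101, 1.0          # 100 m line discretised at 1 m spacing
c, h = 100.0, 1e-4         # wave speed and time step (h << dx / c)
p_prev = np.zeros(nx)
p = np.zeros(nx)
p[nx // 2] = 1.0           # initial pressure pulse in the middle

for _ in range(1000):
    # Centred-difference Laplacian; the ends stay at zero (fixed boundary).
    lap = np.zeros(nx)
    lap[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2
    # p_{t+1} = c^2 h^2 lap(p_t) + 2 p_t - p_{t-1}
    p_next = c**2 * h**2 * lap + 2 * p - p_prev
    p_prev, p = p, p_next

print(p.max())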
https://runestone.academy/runestone/books/published/webfundamentals/Flask/redirects.html
# 10.7. Redirects and Errors
To redirect a user to another endpoint, use the redirect() function; to abort a request early with an error code, use the abort() function:
from flask import abort, redirect, url_for

@app.route('/')
def index():
    return redirect(url_for('login'))

@app.route('/login')
def login():
    abort(401)
    this_is_never_executed()
This is a rather pointless example because a user will be redirected from the index to a page they cannot access (401 means access denied) but it shows how that works.
By default a black and white error page is shown for each error code. If you want to customize the error page, you can use the errorhandler() decorator:
from flask import render_template

@app.errorhandler(404)
def page_not_found(error):
    return render_template('page_not_found.html'), 404
Note the 404 after the render_template() call. This tells Flask that the status code of that page should be 404 which means not found. By default 200 is assumed which translates to: all went well.
See error-handlers for more details.
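Putting the two pieces together, here is a minimal runnable sketch; the route names and message text are illustrative, and the tuple return is Flask's shortcut for attaching a status code to a response body:

from flask import Flask, abort, redirect, url_for

app = Flask(__name__)

@app.route('/')
def index():
    # Visitors are sent to /login instead of seeing a page here.
    return redirect(url_for('login'))

@app.route('/login')
def login():
    abort(401)  # access denied; nothing after this line runs

@app.errorhandler(401)
def unauthorized(error):
    # Plain-text body plus an explicit 401 status code.
    return 'Please log in first.', 401

if __name__ == '__main__':
    app.run()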
https://www.semanticscholar.org/paper/'t-Hooft-surface-operators-in-five-dimensions-and-Yoshida/27558cbf407ec7a71f2be7b36ddde93694f09265
Corpus ID: 233481866
# 't Hooft surface operators in five dimensions and elliptic Ruijsenaars operators
@inproceedings{Yoshida2021tHS,
title={'t Hooft surface operators in five dimensions and elliptic Ruijsenaars operators},
author={Yutaka Yoshida},
year={2021}
}
We introduce codimension-three magnetically charged surface operators in five-dimensional (5d) N = 1 supersymmetric gauge theory on $T^2 \times \mathbb{R}^3$. We evaluate the vacuum expectation values (vevs) of surface operators by supersymmetric localization techniques. Contributions of monopole bubbling effects to the path integral are given by elliptic genera of world-volume theories on D-branes. Our result gives an elliptic deformation of the SUSY localization formula [1] (resp. [2, 3]) of BPS ’t Hooft loops (resp…
#### References
Line operators on S^1xR^3 and quantization of the Hitchin moduli space
• Physics
• 2011
We perform an exact localization calculation for the expectation values of Wilson-'t Hooft line operators in N=2 gauge theories on S^1xR^3. The expectation values are naturally expressed in terms of…
SUSY localization for Coulomb branch operators in omega-deformed 3d N=4 gauge theories
• Physics, Mathematics
• 2019
We perform SUSY localization for Coulomb branch operators of 3d $\mathcal{N}=4$ gauge theories in $\mathbb{R}^3$ with $\Omega$-deformation. For the dressed monopole operators whose expectation values…
ABCD of ’t Hooft operators
• Physics
• 2020
We compute by supersymmetric localization the expectation values of half-BPS ’t Hooft line operators in N = 2 U(N), SO(N) and USp(N) gauge theories on S1 × R3 with an Ω-deformation. We evaluate the…
Wilson-’t Hooft lines as transfer matrices
• Physics
• 2020
We establish a correspondence between a class of Wilson-’t Hooft lines in four-dimensional $\mathcal{N} = 2$ supersymmetric gauge theories described by circular quivers and transfer matrices…
Coulomb branches of $3d$ $\mathcal{N}=4$ quiver gauge theories and slices in the affine Grassmannian
• Mathematics, Physics
• Advances in Theoretical and Mathematical Physics
• 2019
This is a companion paper of arXiv:1601.03586. We study Coulomb branches of unframed and framed quiver gauge theories of type $ADE$. In the unframed case they are isomorphic to the moduli space of…
On monopole bubbling contributions to ’t Hooft loops
• Physics, Mathematics
• Journal of High Energy Physics
• 2019
Abstract Monopole bubbling contributions to supersymmetric ’t Hooft loops in 4d $\mathcal{N} = 2$ theories are computed by SQM indices. As recently argued, those indices are hard to compute due…
Quantized Coulomb Branches, Monopole Bubbling and Wall-Crossing Phenomena in 3d N = 4 Theories
To study the quantized Coulomb branch of 3d N = 4 unitary SQCD theories, we propose a new method to compute correlators of monopole and Casimir operators that are inserted in the $\mathbb{R} \times \mathbb{R}^2$ Omega…
Wall-crossing and operator ordering for ’t Hooft operators in $$\mathcal{N}$$ = 2 gauge theories
• Physics, Mathematics
• Journal of High Energy Physics
• 2019
Abstract We study half-BPS ’t Hooft line operators in 4d $\mathcal{N} = 2$ U(N) gauge theories on $S^1 \times \mathbb{R}^3$ with an Ω-deformation. The recently proposed brane construction of…
On ’t Hooft defects, monopole bubbling and supersymmetric quantum mechanics
• Physics
• Journal of High Energy Physics
• 2018
Abstract We revisit the localization computation of the expectation values of ’t Hooft operators in $\mathcal{N} = 2^*$ SU(N) theory on $\mathbb{R}^3 \times S^1$. We show that the part of the answer arising from…
’t Hooft Defects and Wall Crossing in SQM
• Physics, Mathematics
• 2018
Abstract In this paper we study the contribution of monopole bubbling to the expectation value of supersymmetric ’t Hooft defects in Lagrangian theories of class S on $\mathbb{R}^3 \times S^1$. This can be…
https://stats.stackexchange.com/questions/274087/should-we-use-hotelling-t2-test-or-something-else
# Should we use Hotelling $T^2$ test or something else?
As part of our project, we have used a combination of three machine learning classifiers combined with a voting algorithm over it to obtain reasonably good results. The input data for the classifiers is a set of questions and the output for each is a probability distribution over 6 labelled classes.
Now, as part of hypothesis testing, our professor had suggested us to try multivariate tests -- in particular, the Hotelling $T^2$ test -- to find out whether the results of the three classifiers are statistically significant and to find the parameters that have provided us the maximum information.
We do not have a strong background in statistics but from what we have read, we felt that this test is usually used when you test on two different sets of samples as opposed to our case, where we have the same data used to generate the outputs for the three classifiers.
Based on this source of info, this is what we have understood:
T-tests are used to determine if two different sets of samples represent the same kind of population
Eg. 10 patients in ward 1
13 patients in ward 2
If we are to conduct an experiment with these two sets, can we do so without the results being different due to difference in the people taken
Eg. One room had HIV patients while the other had blood cancer patients
The results might be really different if we know this. But if we don't, then using other factors like blood test results (e.g. sodium level, WBC count, etc.), can we say that these two groups are similar enough
This leads us to believe that maybe we are:
1. Not understanding the test correctly, or
2. Using the wrong test statistic
Could you help us in this regard? Thank you.
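For concreteness, here is a minimal sketch of the two-sample Hotelling $T^2$ statistic being discussed, assuming equal covariances in the two groups; the data shapes and values below are illustrative, not from the project:

import numpy as np

def hotelling_t2(x, y):
    # x, y: (n_samples, n_features) arrays for the two groups.
    n1, n2 = len(x), len(y)
    diff = x.mean(axis=0) - y.mean(axis=0)
    # Pooled covariance estimate of the two samples.
    s_pooled = ((n1 - 1) * np.cov(x, rowvar=False) +
                (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    # T^2 = (n1 n2 / (n1 + n2)) * diff' S^{-1} diff
    return (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(s_pooled, diff)

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 3))   # e.g. ward 1, 3 blood measurements each
y = rng.normal(size=(13, 3))   # e.g. ward 2
print(hotelling_t2(x, y))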
• Answers that combine our question with the underlying question in this link will be really helpful – doodhwala Apr 17 '17 at 4:56
https://economics.stackexchange.com/questions/47306/economic-growth-in-a-dsge-model-despite-mean-zero-shocks
# Economic growth in a DSGE model, despite mean-zero shocks
The DSGEs I've seen have steady-states, and mean-zero shocks.
Can these predict growth in GDP / capital etc?
Is this possible despite them being equilibrium models, or do you have to completely change your approach and switch to a Solow-Swan-type model to predict GDP growth?
Yes, there are DSGE models that can be used for forecasting.
These models typically have a particular kind of steady state, which is, more precisely, called a balanced growth path (BGP). On the BGP (in the absence of shocks), key indicators grow at the same constant rate. For example, GDP, household consumption, and investment all grow at 2% a year. This is consistent with constant steady-state ratios of the indicators over GDP; for example, $$\frac{K}{Y}$$ and $$\frac{C}{Y}$$ would be constant, while the indicators grow at the same rate.
The rate itself is often built in as an assumption, based on economic priors and analysis done outside of the DSGE model (for example, estimates of labor-productivity growth). Because of this fact, forecasts of equilibrium growth are not just boring, but not really forecasts. However, an economy is hardly ever on its balanced growth path, so the strength of these models is in describing the dynamics back towards equilibrium, for example after a shock to global oil prices. Immediately after the shock hits, investment will typically fall at a faster rate than GDP, and then grow faster after the impact has bottomed out.
Note that these models need to be related to actual data via estimation, often using the Kalman Filter to identify unobservable variables, such as the output gap, and shocks (such as a technology shock).
To see how data can be related to model variables, consider the measurement equation (assuming the use of a Kalman filter). $$Y$$ is actual GDP in quarterly frequency, $$\hat{y_t}$$ is the deviation of output from its trend level at time $$t$$, and 2% is the assumed annual growth rate. Then this can be fed into the model as $$\log Y_t - \log Y_{t-1} - 0.005 = \hat{y}_t - \hat{y}_{t-1}$$ This means in particular that the model solution can be translated back into numbers that are consistent with actual data and hence can be used for forecasting.
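A tiny numerical sketch of that measurement equation; the GDP levels are made up, and 0.005 is the quarterly equivalent of the assumed 2% annual trend:

import numpy as np

Y = np.array([100.0, 100.2, 101.5, 102.0])   # quarterly GDP levels
dlogY = np.diff(np.log(Y))                   # log Y_t - log Y_{t-1}
gap_change = dlogY - 0.005                   # = yhat_t - yhat_{t-1}
print(gap_change)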
References: For an overall good and detailed example, see the ECB's New Area Wide Model. It includes a section on model dynamics (4), and one on how assumptions are built into the model (3.2 & 3.3). The references in that paper are also worth checking out. For an even more modern version, see their new model, which includes many more financial market dynamics.
• Awesome, thanks for the detail and links! Aug 21, 2021 at 16:23
• @Mich55 in addition to the references in the +1 answer above I would also recommend you to have a look at Wickens Macroeconomic Theory a dynamic stochastic equilibrium approach - it’s quite comprehensive text and if I remember correctly it has an online companion with some example code
– 1muflon1
Aug 21, 2021 at 17:18
https://api-project-1022638073839.appspot.com/questions/how-do-you-differentiate-f-x-sqrt-1-x-2-using-the-chain-rule
How do you differentiate f(x)=sqrt(1/x^2) using the chain rule?
1 Answer
Mar 5, 2017
$\frac{d}{dx} \sqrt{\frac{1}{x^2}} = -\frac{\left|x\right|}{x^3}$
Explanation:
You can name:
$y(x) = \frac{1}{x^2} = x^{-2}$
so that:
$\frac{df}{dx} = \frac{df}{dy}\,\frac{dy}{dx} = \frac{d}{dy}\left(\sqrt{y}\right) \frac{d}{dx}\left(x^{-2}\right) = \frac{1}{2\sqrt{y}}\left(-2x^{-3}\right) = \frac{1}{2\sqrt{\frac{1}{x^2}}}\left(-2x^{-3}\right) = -\frac{\sqrt{x^2}}{x^3} = -\frac{\left|x\right|}{x^3}$
You can also note that:
$f(x) = \sqrt{\frac{1}{x^2}} = \frac{1}{\left|x\right|}$
so that:
$\frac{df}{dx} = \begin{cases} \frac{d}{dx}\left(\frac{1}{x}\right) = -\frac{1}{x^2} & \text{for } x > 0 \\ \frac{d}{dx}\left(-\frac{1}{x}\right) = \frac{1}{x^2} & \text{for } x < 0 \end{cases}$
which is clearly the same.
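A quick numeric cross-check of the result, assuming SymPy is available (the sample points are arbitrary):

import sympy as sp

x = sp.symbols('x', real=True)
d = sp.diff(sp.sqrt(1 / x**2), x)
for v in (2, -2):
    # The symbolic derivative should match -|x|/x^3 at each point.
    print(d.subs(x, v), -abs(v) / v**3)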
https://www.physicsforums.com/threads/pv-n-constant-refresher.256366/
# PV^n=constant (refresher)
1. Sep 14, 2008
### Saladsamurai
1. The problem statement, all variables and given/known data
A closed system consisting of 2 lb of a gas undergoes a process in which pV^n = constant. For: p1 = 20 lb/in^2, V1 = 10 ft^3 and p2 = 100 lb/in^2, V2 = 2.9 ft^3
(a)What is n ?
(b)What is the specific volume at states 1 and 2 in ft^3/lb?
(c)Sketch the process on pressure-volume coordinates
For (a), I don't need to convert the units all to feet or all to inches right? i can just say p1V1^n=p2V2^n correct?
(b) Is just a matter of finding the mass m = Weight/g
(c) Is confusing me? Is this just a graph? With p on the horizontal axis and V on the vertical?
Thanks!!!
2. Sep 14, 2008
### Saladsamurai
Okay, I got (a).....the units cancel anyway.
But for (b) I cannot tell if they are giving me 2lb as a mass or as a weight. It's a thermodynamics book, so I don't know what the convention is? they did not specify lbf (force) or lbm (mass)
3. Sep 15, 2008
### Andrew Mason
Correct.
$$P_1V_1^n = 5P_1(.29V_1)^n$$
Correct
You have to plot points in between the end points as well.
AM
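A quick numerical check of $n$ from $p_1V_1^n = p_2V_2^n$ with the given values (a sketch; the units cancel in the ratios):

import math

p1, V1 = 20.0, 10.0    # lb/in^2, ft^3
p2, V2 = 100.0, 2.9
n = math.log(p2 / p1) / math.log(V1 / V2)
print(round(n, 2))     # -> 1.3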
4. Sep 15, 2008
### Andrew Mason
Are the units of pressure Force/area or mass/area?
AM
https://www.semanticscholar.org/paper/Matched-pairs-of-Courant-algebroids-Grutzmann-Sti'enon/d044b45132eaa2acb4095c280d9ae04efb138e37
# Matched pairs of Courant algebroids
@article{Grutzmann2014MatchedPO,
title={Matched pairs of Courant algebroids},
author={Melchior Grutzmann and Mathieu Sti'enon},
journal={Indagationes Mathematicae},
year={2014},
volume={25},
pages={977-991}
}
• Published 5 April 2012
• Mathematics
• Indagationes Mathematicae
The standard cohomology of regular Courant algebroids
• Mathematics
• 2021
For any Courant algebroid $E$ over a smooth manifold $M$ with characteristic distribution $F$ which is regular, we study the standard cohomology $H^{\bullet}_{st}(E)$ by using a special spectral sequence. We prove a…
Atiyah class of a Manin pair
• Mathematics
• 2020
A Courant algebroid $E$ with a Dirac structure $L\subset E$ is said to be a Manin pair. We first discuss $E$-Dorfman connections on predual vector bundles $B$ and develop the corresponding Cartan…
Dirac groupoids and Dirac bialgebroids
• M. J. Lean
• Mathematics
Journal of Symplectic Geometry
• 2019
We describe infinitesimally Dirac groupoids via geometric objects that we call Dirac bialgebroids. In the two well-understood special cases of Poisson and presymplectic groupoids, the Dirac…
Linear generalised complex structures.
• Mathematics
• 2020
This paper studies linear generalised complex structures over vector bundles, as a generalised geometry version of holomorphic vector bundles. In an adapted linear splitting, a linear generalised…
DIRAC ACTIONS AND LU’S LIE ALGEBROID
Poisson actions of Poisson Lie groups have an interesting and rich geometric structure. We will generalize some of this structure to Dirac actions of Dirac Lie groups. Among other things, we extend a…
Gauge theory for string algebroids
• Mathematics
• 2020
We introduce a moment map picture for holomorphic string algebroids where the Hamiltonian gauge action is described by means of Morita equivalences, as suggested by higher gauge theory. The zero…
Infinitesimal moduli for the Strominger system and generalized Killing spinors
• Mathematics
• 2015
We construct the space of infinitesimal variations for the Strominger system and an obstruction space to integrability, using elliptic operator theory. Motivated by physics, we provide refinements of…
Canonical metrics on holomorphic Courant algebroids
• Mathematics
Proceedings of the London Mathematical Society
• 2022
Yau's solution of the Calabi Conjecture implies that every Kähler Calabi-Yau manifold $X$ admits a metric with holonomy contained in $\mathrm{SU}(n)$, and that these metrics are parametrized by the…
On Dorfman connections of a Courant algebroid
• Mathematics
• 2020
We extend the Courant-Dorfman algebra of a Courant algebroid E to an algebra of differential operators on tensor products of E with values in tensor bundles of a vector bundle B, predual of E.
## References
Matched pairs of Lie algebroids
• T. Mokri
• Mathematics
Glasgow Mathematical Journal
• 1997
Abstract We extend to Lie algebroids the notion variously known as a double Lie algebra (Lu and Weinstein), matched pair of Lie algebras (Majid), or twilled extension of Lie algebras…
A construction of Courant algebroids on foliated manifolds
For any transversal-Courant algebroid E on a foliated manifold $(M,\mathcal{F})$, and for any choice of a decomposition $TM = T\mathcal{F} \oplus Q$, we construct a…
On Regular Courant Algebroids
• Mathematics
• 2009
For any regular Courant algebroid, we construct a characteristic class à la Chern–Weil. This intrinsic invariant of the Courant algebroid is a degree-3 class in its naive cohomology. When the Courant…
Courant algebroids, derived brackets and even symplectic supermanifolds
In this dissertation we study Courant algebroids, objects that first appeared in the work of T. Courant on Dirac structures; they were later studied by Liu, Weinstein and Xu who used Courant…
Remarks on the Definition of a Courant Algebroid
The notion of a Courant algebroid was introduced by Liu, Weinstein, and Xu in 1997. Its definition consists of five axioms and a defining relation for a derivation. It is shown that two of the axioms…
MATCHED PAIRS OF LIE GROUPS ASSOCIATED TO SOLUTIONS OF THE YANG-BAXTER EQUATIONS
Two groups G, H are said to be a matched pair if they act on each other and these actions, $(\alpha, \beta)$, obey a certain compatibility condition. In such a situation one may form a bicrossproduct group,…
Manin Triples for Lie Bialgebroids
• Mathematics
• 1995
In his study of Dirac structures, a notion which includes both Poisson structures and closed 2-forms, T. Courant introduced a bracket on the direct sum of vector fields and 1-forms. This bracket does…
Holomorphic Poisson Manifolds and Holomorphic Lie Algebroids
• Mathematics
• 2010
We study holomorphic Poisson manifolds and holomorphic Lie algebroids from the viewpoint of real Poisson geometry. We give a characterization of holomorphic Poisson structures in terms of the Poisson…
https://www.nag.com/numeric/py/nagdoc_latest/naginterfaces.library.sparse.real_gen_basic_diag.html
# naginterfaces.library.sparse.real_gen_basic_diag
naginterfaces.library.sparse.real_gen_basic_diag(comm)
real_gen_basic_diag is the third in a suite of three functions for the iterative solution of a real general (nonsymmetric) system of simultaneous linear equations (see Golub and Van Loan (1996)). real_gen_basic_diag returns information about the computations during an iteration and/or after this has been completed. The first function of the suite, real_gen_basic_setup(), is a setup function; the second function, real_gen_basic_solver(), is the iterative solver itself.
These three functions are suitable for the solution of large sparse general (nonsymmetric) systems of equations.
For full information please refer to the NAG Library document for f11bf
https://www.nag.com/numeric/nl/nagdoc_28.6/flhtml/f11/f11bff.html
Parameters
comm : dict, communication object
Communication structure.
This argument must have been initialized by a prior call to real_gen_basic_setup().
Returns
itn : int
The number of iterations carried out by real_gen_basic_solver().
stplhs : float
The current value of the left-hand side of the termination criterion used by real_gen_basic_solver().
stprhs : float
The current value of the right-hand side of the termination criterion used by real_gen_basic_solver().
anorm : float
If in the previous call to real_gen_basic_setup(), then contains , where , or , either supplied or, in the case of or , estimated by real_gen_basic_solver(); otherwise .
sigmax : float
If in the previous call to real_gen_basic_setup(), the current estimate of the largest singular value of the preconditioned iteration matrix, either when it has been supplied to real_gen_basic_setup() or it has been estimated by real_gen_basic_solver() (see also Notes for real_gen_basic_setup and Parameters for real_gen_basic_setup); otherwise, is returned.
Raises
NagValueError
(errno )
real_gen_basic_diag has been called out of sequence.
Notes
real_gen_basic_diag returns information about the solution process. It can be called either during a monitoring step of real_gen_basic_solver() or after real_gen_basic_solver() has completed its tasks. Calling real_gen_basic_diag at any other time will result in an error condition being raised.
For further information you should read the documentation for real_gen_basic_setup() and real_gen_basic_solver().
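A minimal usage sketch follows (illustrative only: the setup and solver steps are placeholders, since their arguments depend on the problem and are documented with real_gen_basic_setup() and real_gen_basic_solver(); only the real_gen_basic_diag call is taken from this page):

```python
# Query iteration diagnostics once the solver loop has finished.
from naginterfaces.library import sparse

comm = {}  # communication dict, populated by real_gen_basic_setup()
# ... call real_gen_basic_setup() and run the real_gen_basic_solver() loop here ...
itn, stplhs, stprhs, anorm, sigmax = sparse.real_gen_basic_diag(comm)
print(f"iterations: {itn}, termination lhs/rhs: {stplhs:.3e}/{stprhs:.3e}")
```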
References
Golub, G H and Van Loan, C F, 1996, Matrix Computations, (3rd Edition), Johns Hopkins University Press, Baltimore
|
2022-11-30 01:09:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7310205698013306, "perplexity": 1311.6455294107802}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710712.51/warc/CC-MAIN-20221129232448-20221130022448-00310.warc.gz"}
|
https://socratic.org/questions/5a34c87511ef6b38b4b041db
|
# What causes competitive inhibition?
Dec 16, 2017
Competitive inhibition is caused by a reversible (competitive) inhibitor that is selected by the enzyme's binding site but cannot activate the catalytic site.
#### Explanation:
Sometimes a compound has the same structure as a normal substrate and fits into the binding portion of the active site. In this way the enzyme is no longer available to the normal substrate.
So, due to its structural similarity with a normal substrate, a competitive inhibitor is selected by the binding site but is unable to activate the catalytic site. Because it occupies the binding site, that site remains unavailable to the normal substrate, so no product is formed. This is known as competitive inhibition.
Example:
Malonic acid is structurally similar to succinic acid. Succinic acid is the specific substrate for succinate dehydrogenase (an enzyme). In some cases malonic acid fits into the binding site of succinate dehydrogenase as a competitive inhibitor, but it cannot activate the catalytic site, so no products are formed.
Note:
Active site is divided into two sites:
Binding site: this site holds the proper substrate and binds it, forming the enzyme–substrate (ES) complex.
Catalytic site: this part of the active site transforms the substrate into products; it is vital for the catalytic activity of the enzyme.
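A standard way to quantify this (textbook Michaelis–Menten kinetics, added here for reference; $K_i$ is the inhibitor dissociation constant):

$$v = \frac{V_{\max}[S]}{K_m\left(1 + \frac{[I]}{K_i}\right) + [S]},$$

so a competitive inhibitor raises the apparent $K_m$ but leaves $V_{\max}$ unchanged: at high enough substrate concentration, the substrate outcompetes the inhibitor.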
Hope it helps...
|
2019-01-19 23:13:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25348961353302, "perplexity": 2494.345068607623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583684033.26/warc/CC-MAIN-20190119221320-20190120003320-00256.warc.gz"}
|
https://deepai.org/publication/optimal-upper-bounds-on-expected-kth-record-values-from-igfr-distributions
|
# Optimal upper bounds on expected kth record values from IGFR distributions
The paper concerns optimal upper bounds on the expectations of kth record values (k >= 1) centered about the sample mean. We consider the case when the records are based on an infinite sequence of independent, identically distributed random variables whose distribution function belongs to the family of distributions with increasing generalized failure rate (IGFR). This class can be defined in terms of the convex transform order with respect to certain distribution functions. Particularly important examples of the IGFR class are the distributions with increasing density (ID) and increasing failure rate (IFR). The presented bounds are obtained with the projection method and are expressed in scale units based on the standard deviation of the underlying distribution function.
## 1 Introduction
Let us consider an infinite sequence $X_1, X_2, \dots$ of independent and identically distributed random variables with common cumulative distribution function (cdf) $F$ and finite mean $\mu$ and variance $\sigma^2$. By $X_{1:n} \le \dots \le X_{n:n}$ we denote the order statistics of $X_1, \dots, X_n$. Further, we are interested in the increasing subsequences of the $k$th greatest order statistics, for a fixed $k \ge 1$. Formally, we define the (upper) $k$th records $R^{(k)}_n$, $n = 0, 1, \dots$, by introducing first the $k$th record times as
$$T^{(k)}_0 = k, \qquad T^{(k)}_n = \min\left\{j > T^{(k)}_{n-1} : X_j > X_{T^{(k)}_{n-1}+1-k:T^{(k)}_{n-1}}\right\}, \quad n = 1, 2, \dots$$
Then the $k$th record values are given by
$$R^{(k)}_n = X_{T^{(k)}_n+1-k:T^{(k)}_n}, \quad n = 0, 1, \dots$$
Note that the classic upper records correspond to $k = 1$, and we say that such a record occurs at time $j$ if $X_j$ is greater than the maximum of the previous observations.
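To make the definition concrete, here is a small simulation sketch (ours, not part of the paper) extracting the classic ($k=1$) record values from a sample path:

```python
# Simulate classic (k = 1) upper record values: a record occurs whenever an
# observation exceeds the maximum of all previous observations.
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(size=100_000)   # standard exponential observations

records = [x[0]]                     # R_0 is the first observation
for v in x[1:]:
    if v > records[-1]:
        records.append(v)

# For the standard exponential distribution, E R_n^(1) = n + 1.
print(records[:5])
```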
Records are widely used, and not only in statistical applications; the most obvious examples that arise at first glance are the prediction of sports achievements and of natural disasters. The first account of classic records goes back to Chandler (1952), while the $k$th record values were introduced by Dziubdziela and Kopociński (1976). For a comprehensive overview of results on record values the reader is referred to Arnold, Balakrishnan and Nagaraja (1998) and Nevzorov (2001).
The distribution function of the $k$th record value is given by the following formula:
$$F^{(k)}_n(x) = 1 - [1-F(x)]^k \sum_{i=0}^{n} \frac{k^i}{i!}\left(-\ln[1-F(x)]\right)^i. \tag{1.1}$$
If the cdf $F$ is absolutely continuous with a probability density function (pdf) $f$, then the distribution function (1.1) also has a pdf, given by
$$f^{(k)}_n(x) = \frac{k^{n+1}}{n!}\left(-\ln[1-F(x)]\right)^n [1-F(x)]^{k-1} f(x).$$
In the particular case of the standard uniform underlying cdf $F(x) = x$, the corresponding distribution of the uniform $k$th record is given by
$$G^{(k)}_n(x) = 1 - (1-x)^k \sum_{i=0}^{n} \frac{k^i}{i!}\left[-\ln(1-x)\right]^i, \quad 0 < x < 1. \tag{1.2}$$
Now, recall the cdf of the generalized Pareto distribution:
$$W_\alpha(x) = \begin{cases} 1-(1-\alpha x)^{1/\alpha}, & x \ge 0, & \text{if } \alpha < 0, \\ 1-(1-\alpha x)^{1/\alpha}, & 0 \le x \le \tfrac{1}{\alpha}, & \text{if } \alpha > 0, \\ 1-e^{-x}, & x \ge 0, & \text{if } \alpha = 0. \end{cases} \tag{1.3}$$
Next, we say that a cdf $F$ precedes a cdf $G$ in the convex transform order, and we write $F \prec G$, if the composition $F^{-1} \circ G$ is concave on the support of $G$. Following the reasoning of Goroncy and Rychlik (2015) and Bieniek and Szpak (2017), we consider the family of distributions with increasing generalized failure rate, defined with respect to $W_\alpha$ as
$$\mathrm{IGFR}(\alpha) = \{F : F \prec W_\alpha\}. \tag{1.4}$$
Indeed, if the distribution function $F$ is continuous with density function $f$, then the generalized failure rate, defined as
$$\gamma_\alpha(x) = \left(W_\alpha^{-1}(F)\right)'(x) = (1-F(x))^{\alpha-1} f(x), \tag{1.5}$$
is increasing. Note that the expression in (1.5) is just the product of the conventional failure rate $f(x)/(1-F(x))$ and a power of the survival function, $(1-F(x))^\alpha$.
For $\alpha = 1$ we obtain the standard uniform distribution function $W_1$ and the family IGFR(1) = ID of increasing density distributions. On the other hand, for $\alpha = 0$ the cdf $W_0$ is the cdf of the standard exponential distribution, and as a result we get the family IGFR(0) = IFR of increasing failure rate distributions.
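These identifications can be checked directly from (1.5) (spelled out here for convenience):
$$\gamma_1(x) = (1-F(x))^{0} f(x) = f(x), \qquad \gamma_0(x) = \frac{f(x)}{1-F(x)},$$
so monotonicity of $\gamma_1$ is precisely monotonicity of the density, and monotonicity of $\gamma_0$ is that of the conventional failure rate.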
The aim of this paper is to establish the optimal upper bounds on
$$\frac{E R^{(k)}_n - \mu}{\sigma}, \tag{1.6}$$
where the cdf $F$ is restricted to the IGFR($\alpha$) class of distributions, for arbitrarily chosen $n$ and $k$. In the special case when the records reduce to order statistics, the reader is referred to Rychlik (2014), who established the optimal bounds for ID and IFR distributions.
The bounds on the $k$th records, and in particular on classic record values, have been widely considered in the literature, beginning with Nagaraja (1978). He used the Schwarz inequality to obtain upper bounds on the expectations of the classic records, expressed in terms of the mean and standard deviation of the underlying distribution. The Hölder inequality was used by Raqab (2000), who presented more general bounds expressed in the scale units generated by the $p$th central absolute moments; he also considered records from symmetric populations. Differences of consecutive record values (called record spacings) based on general populations and on distributions with increasing density and increasing failure rate were considered by Rychlik (1997). His results were generalized by Danielak (2005) to arbitrary record increments.
The $k$th record values were considered by Grudzień and Szynal (1985), who by use of the Schwarz inequality obtained non-sharp upper bounds expressed in terms of the population mean and standard deviation. Respective optimal bounds were derived by Raqab (1997), who applied the Moriguti (1953) approach. Further, the Hölder together with the Moriguti inequality were used by Raqab and Rychlik (2002) in order to get more general bounds. Gajek and Okolewski (2003) dealt with the expected $k$th record values based on non-negative decreasing density and decreasing failure rate populations, evaluated in terms of the population second raw moments. Results for the adjacent and non-adjacent $k$th records were obtained by Raqab (2004) and Danielak and Raqab (2004a). Evaluations for the second records from symmetric populations were considered by Raqab and Rychlik (2004). Danielak and Raqab (2004b) presented mean-variance bounds on the expectations of $k$th record spacings from decreasing density and decreasing failure rate families of distributions. Further, Raqab (2007) considered second record increments from decreasing density families. Bounds for the $k$th records from decreasing generalized failure rate populations were evaluated by Bieniek (2007). Expected $k$th record values, as well as their differences, from bounded populations were determined by Klimczak (2007), who expressed the bounds in terms of the lengths of the support intervals.
Regarding lower bounds on record values, there are few papers on the problem, in contrast to the literature on lower bounds for order statistics and their linear combinations (see e.g. Goroncy and Rychlik (2006a, 2006b, 2008), Rychlik (2007), Goroncy (2009)). Lower bounds on the expected $k$th record values, expressed in units generated by central absolute moments of various orders in the general case of arbitrary parent distributions, were presented by Goroncy and Rychlik (2011). There are also a few papers concerning lower bounds on records indirectly, namely in the more general setting of generalized order statistics (Goroncy (2014), Bieniek and Goroncy (2017)).
Below we present the procedure which provides the basis for obtaining the optimal upper bounds on (1.6) in the case of our interest. It is well known that
$$E R^{(k)}_n = \int_0^1 F^{-1}(x)\, g^{(k)}_n(x)\, dx = \int_0^1 F^{-1}(x)\, \frac{k^{n+1}}{n!}\left[-\ln(1-x)\right]^n (1-x)^{k-1}\, dx,$$
therefore
$$\frac{E R^{(k)}_n - \mu}{\sigma} = \int_0^1 \frac{F^{-1}(x)-\mu}{\sigma}\left[g^{(k)}_n(x) - 1\right] dx. \tag{1.7}$$
Due to the further application, we subtract 1 from $g^{(k)}_n$ in the formula above, but one could replace it with an arbitrary constant. Changing the variables in (1.7), for a fixed, absolutely continuous cdf $W$ with pdf $w$ on the support $[0,d)$, $0 < d \le \infty$, we obtain
$$\frac{E R^{(k)}_n - \mu}{\sigma} = \int_0^d \frac{F^{-1}(W(x))-\mu}{\sigma}\left(g^{(k)}_n(W(x)) - 1\right) w(x)\, dx. \tag{1.8}$$
Further assume that $W$ satisfies
$$\int_0^d x^2\, w(x)\, dx < \infty. \tag{1.9}$$
Let us consider the Hilbert space $L^2_W$ of functions square integrable with respect to $w$ on $[0,d)$, and denote the norm of an arbitrary function $f$ by
$$\|f\|_W = \left(\int_0^d |f(x)|^2\, w(x)\, dx\right)^{1/2}.$$
Moreover, let $P_W$ stand for the projection operator onto the following convex cone:
$$\mathcal{C}_W = \left\{g \in L^2_W : g \text{ is nondecreasing and concave}\right\}. \tag{1.10}$$
In order to find the optimal upper bounds on (1.6), we use the Schwarz inequality combined with the well-known projection method (see Rychlik (2001) for details). It is clear that (1.8) can be bounded by the $W$-norm of the projection of the function $h_W(x) = g^{(k)}_n(W(x)) - 1$, as follows:
$$\frac{E R^{(k)}_n - \mu}{\sigma} \le \|P_W h_W\|_W, \tag{1.11}$$
with the equality attained for a cdf $F$ satisfying
$$\frac{F^{-1}(W(x)) - \mu}{\sigma} = \frac{P_W h_W(x)}{\|P_W h_W\|_W}. \tag{1.12}$$
In our case we fix $W = W_\alpha$, and the problem of establishing the optimal upper bounds on (1.6) boils down to determining the $W_\alpha$-norm of the projection of the function $h_{W_\alpha}$ onto $\mathcal{C}_{W_\alpha}$. Note that in order to apply the projection method we need condition (1.9) to be fulfilled by the distribution function $W_\alpha$. Bieniek (2008) showed that in this case we need to confine ourselves to parameters $\alpha > -\frac{1}{2}$, which we do in our further considerations.
## 2 Auxiliary results
In this section we recall the results of Goroncy and Rychlik (2015, 2016), who determined the projection of a function satisfying particular conditions onto the cone of nondecreasing and concave functions. These conditions are presented below.
(A) Let $h$ be a bounded, twice differentiable function on $[0,d)$ such that
$$\int_0^d h(x)\, w(x)\, dx = 0.$$
Moreover, assume that $h$ is strictly decreasing, then strictly convex increasing, then strictly concave increasing, and finally strictly decreasing, on consecutive subintervals $[0,a]$, $[a,b]$, $[b,c]$ and $[c,d)$ for some $0 \le a \le b \le c \le d$.
The projection onto the convex cone $\mathcal{C}_W$ of a function satisfying conditions (A) is either first linear, then coinciding with $h$, and ultimately constant, or just linear and then constant, depending on the behaviour of some particular auxiliary functions, which are introduced below.
First, denote
$$T_W(\beta) = h(\beta)(1 - W(\beta)) - \int_\beta^d h(x)\, w(x)\, dx, \quad 0 \le \beta \le d, \tag{2.1}$$
which is first decreasing, then increasing, and again decreasing, and has a unique zero $\beta_*$. Moreover, let
$$\lambda_W(y) = \frac{\int_0^y (x-y)\left(h(x)-h(y)\right) w(x)\, dx}{\int_0^y (x-y)^2\, w(x)\, dx}, \tag{2.2}$$
$$Y_W(y) = \lambda_W(y) - h'(y), \tag{2.3}$$
$$Z_W(y) = \int_0^y \left(h(x) - h(y) - \lambda_W(y)(x-y)\right) w(x)\, dx, \tag{2.4}$$
for $0 < y < d$. The precise form of the projection of a function satisfying (A) onto the cone $\mathcal{C}_W$ is described in the proposition below (cf. Goroncy and Rychlik (2016), Proposition 1).
###### Proposition 1.
If the zero $\beta_*$ of $T_W$ belongs to the region where $h$ is concave and the set $\mathcal{T}$ described below is nonempty, then for the unique $y_* \in \mathcal{T}$
$$P_W h(x) = \begin{cases} h(y_*) + \lambda_W(y_*)(x-y_*), & 0 \le x < y_*, \\ h(x), & y_* \le x < \beta_*, \\ h(\beta_*), & \beta_* \le x \le d, \end{cases}$$
where $P_W h$ denotes the projection of $h$ onto $\mathcal{C}_W$. Otherwise the projection is linear and then constant, with its parameters determined analogously.
Here $\mathcal{T}$ denotes the set of arguments $y$ satisfying the following condition:
$$\frac{\int_y^d h(x)\, w(x)\, dx}{1-W(y)} = -\,\frac{\int_0^y (x-y)\, h(x)\, w(x)\, dx \,\int_0^y (x-y)\, w(x)\, dx}{\int_0^y (x-y)^2\, w(x)\, dx - \left(\int_0^y (x-y)\, w(x)\, dx\right)^2} > 0. \tag{2.5}$$
Then $\mathcal{T}$ is nonempty and contains a unique element $y_*$.
Note that there are only two possible shapes of the projection of a function satisfying (A) onto $\mathcal{C}_W$. The first one requires compliance with certain conditions and can be briefly described as: linear, then identical with $h$, then constant (l-h-c, for short). The second possible shape does not have a part which coincides with the function $h$, and we will refer to it as l-c (linear and constant) from now on. The original version of this proposition can be found in Goroncy and Rychlik (2015); however, there was no clarification there about the parameter $y_*$ in the case of the l-c type of projection, therefore we refer to Goroncy and Rychlik (2016).
We will also need some results on the projection of functions satisfying conditions (A′), which are a slight modification of conditions (A). We say that a function satisfies (A′) if conditions (A) are modified so that the final strictly decreasing piece is absent. This in general means that the function does not have a decreasing part at the right end of the support, and in particular does not have to be bounded from above. The proposition below (cf. Goroncy and Rychlik (2016), Proposition 6) describes the shape of the projection in this case.
###### Proposition 2.
If the function $h$ satisfies conditions (A′), then the set $\mathcal{T}$ is nonempty and for the unique $y_* \in \mathcal{T}$ we have
$$P_W h(x) = \begin{cases} h(y_*) + \lambda_W(y_*)(x-y_*), & 0 \le x < y_*, \\ h(x), & y_* \le x \le d. \end{cases}$$
## 3 Main results
Let us now focus on the case $W = W_\alpha$ and denote
$$h_\alpha(x) = h_{W_\alpha}(x) = \hat g^{(k)}_n(x) - 1, \tag{3.1}$$
where
$$\hat g^{(k)}_n = g^{(k)}_n \circ W_\alpha. \tag{3.2}$$
We also denote $\hat G^{(k)}_n = G^{(k)}_n \circ W_\alpha$.
The substantial matter in determining the bounds on (1.6) is to learn the shapes of the functions $h_\alpha$ for arbitrary $n$ and $k$, which correspond to the shapes of the compositions $\hat g^{(k)}_n$, presented in the lemma below (comp. with Bieniek (2007), Lemma 3.2).
###### Lemma 1.
If , then the shape of is as follows:
• If , then , , is convex increasing.
• If , then , , is convex increasing, concave increasing and concave decreasing.
• If , then is concave increasing-decreasing, and , , is convex increasing, concave increasing and concave decreasing.
• If , then is concave increasing, concave decreasing and convex decreasing, and , , is convex increasing, concave increasing, concave decreasing and convex decreasing.
If , then the shape of is as follows:
• If , then is linear increasing and , , is convex increasing.
• If , then is concave increasing and then decreasing, , , is convex increasing, concave increasing, and decreasing.
If , then the shape of is as follows:
• If , then is concave increasing, , , is convex increasing and concave increasing.
• If , then is concave increasing, concave decreasing and convex decreasing and , , is convex increasing, concave increasing, concave decreasing and convex decreasing.
It is worth mentioning that the slight differences between the lemma above and Lemma 3.2 in Bieniek (2007) result from different notations for record values.
Note that the case $k = 1$ is covered by the above lemma, except for setting (ii), which is not possible in this case (cf. Rychlik (2001), p. 136); that case itself comes from Rychlik (2001, p. 136). In order to determine the shape of $\hat g^{(k)}_n$ in the remaining cases, we notice that
$$\left(\hat g^{(k)}_n(x)\right)' = \frac{1}{1-\alpha x}\left[k\, \hat g^{(k)}_{n-1}(x) - (k-1)\,\hat g^{(k)}_n(x)\right], \tag{3.3}$$
$$\left(\hat g^{(k)}_n(x)\right)'' = \frac{1}{(1-\alpha x)^2}\left[k^2\, \hat g^{(k)}_{n-2}(x) - k(2k - \alpha - 2)\,\hat g^{(k)}_{n-1}(x) + (k-1)(k-1-\alpha)\,\hat g^{(k)}_n(x)\right], \tag{3.4}$$
and use the variation diminishing property (VDP) of linear combinations of the $\hat g^{(k)}_n$ (see Gajek and Okolewski, 2003). The remaining special cases we calculate separately in order to obtain the shapes of $h_\alpha$.
Faced with this knowledge, we conclude that the functions $h_\alpha$ satisfy conditions (A) for the respective ranges of $n$, $k$ and $\alpha$ listed in Lemma 1. Moreover, we have
$$\int_0^d h_\alpha(x)\, w_\alpha(x)\, dx = \int_0^1 \left(g^{(k)}_n(u) - 1\right) du = 0.$$
The value of $h_\alpha$ at its local maximum has to be positive, since the function starts and finishes with negative values and integrates to zero; hence $h_\alpha$ must cross the $x$-axis and change its sign from negative to positive before finishing with a negative value at the right end of the support. Therefore, we can use Proposition 1 in order to obtain the projection of $h_\alpha$ onto $\mathcal{C}_{W_\alpha}$ and finally determine the desired bounds according to (1.11). Moreover, $h_\alpha$ satisfies conditions (A′) in the case of the first record values, and we are then entitled to use Proposition 2. The other cases can be dealt with without the above results. These imply the particular shapes of the projections, which can be of three possible kinds: the first one coincides with the original function (first values of the classic records), the second is a linear increasing function (classic record values for further parameter combinations), and the last one coincides with the function $h_\alpha$ at the beginning and is ultimately constant (first values of the $k$th records).
In order to simplify the notation, from now on we will denote the projection of the function $h_\alpha$ onto $\mathcal{C}_{W_\alpha}$ with respect to the weight $w_\alpha$ by $P_\alpha h_\alpha$.
### 3.1 Bounds for the classic records
In the proposition below we present the bounds on the classic record values ($k = 1$). This case does not require Proposition 1, since the shapes of the record densities do not satisfy conditions (A), but possibly satisfy conditions (A′).
###### Proposition 3.
Assume that $k = 1$.
• Let $-\frac{1}{2} < \alpha < 0$. If $n = 1$, then we have the following bound:
$$\frac{E R^{(1)}_1 - \mu}{\sigma} \le 1, \tag{3.5}$$
with the equality attained for the exponential distribution function
$$F(x) = 1 - \exp\left\{-1 - \frac{x-\mu}{\sigma}\right\}, \quad x > \mu - \sigma. \tag{3.6}$$
If $n \ge 2$, then the set $\mathcal{T}$ is nonempty and for the unique $y_* \in \mathcal{T}$ we have
$$\frac{E R^{(1)}_n - \mu}{\sigma} \le C_\alpha(y_*), \tag{3.7}$$
where
$$\begin{aligned} C^2_\alpha(y) ={}& \left[1-(1-\alpha y)^{1/\alpha}\right]\left[1+\left(\hat g^{(1)}_n(y)-1\right)^2\right]-\hat G^{(1)}_n(y)\\ &+\frac{2(1+2\alpha)\left(\hat g^{(1)}_n(y)-1\right)\left[1-y(1+\alpha)-(1-\alpha y)^{1/\alpha+1}\right]}{2-2(1-\alpha y)^{1/\alpha+2}+y(1+2\alpha)(\alpha y+y-2)}\cdot\left[\hat g^{(1)}_n(y)\,\frac{y(1+\alpha)-1+(1-\alpha y)^{1/\alpha+1}}{1+\alpha}-\int_0^y \hat G^{(1)}_n(x)\,dx\right]\\ &+\frac{(1-2\alpha)\left(\left[y(1+\alpha)-1+(1-\alpha y)^{1/\alpha+1}\right]\hat g^{(1)}_n(y)-(1+\alpha)\int_0^y \hat G^{(1)}_n(x)\,dx\right)^2}{(1+\alpha)\left[2-2(1-\alpha y)^{1/\alpha+2}+y(1+2\alpha)(\alpha y+y-2)\right]}. \end{aligned}$$
The equality in (3.7) is attained for distribution functions $F \in \mathrm{IGFR}(\alpha)$ that satisfy the following condition:
$$F^{-1}(W_\alpha(x)) = \begin{cases} \dfrac{\sigma}{C_\alpha(y_*)}\left[\hat g^{(1)}_n(y_*) - 1 + \lambda_\alpha(y_*)(x-y_*)\right] + \mu, & 0 \le x < y_*, \\[2mm] \dfrac{\sigma}{C_\alpha(y_*)}\left[\hat g^{(1)}_n(x) - 1\right] + \mu, & y_* \le x \le d. \end{cases} \tag{3.8}$$
• Let now $\alpha = 0$. We have the following bound:
$$\frac{E R^{(1)}_n - \mu}{\sigma} \le n,$$
with the equality attained for the exponential distribution function (3.6).
• Suppose $\alpha > 0$. Then we have the following bound:
$$\frac{E R^{(1)}_n - \mu}{\sigma} \le \sqrt{\frac{(2\alpha+1)\left(2a_*b_* + (\alpha+1)b_*^2\right) + 2a_*^2}{(1+\alpha)(2\alpha+1)}}, \tag{3.9}$$
where
$$a_* = (1+\alpha)^2(2\alpha+1)\left[\frac{1}{\alpha(1+\alpha)} - \int_0^{1/\alpha} \hat G^{(1)}_n(x)\, dx\right], \tag{3.10}$$
$$b_* = -\frac{a_*}{1+\alpha} = -(1+\alpha)(2\alpha+1)\left[\frac{1}{\alpha(1+\alpha)} - \int_0^{1/\alpha} \hat G^{(1)}_n(x)\, dx\right]. \tag{3.11}$$
The equality in (3.9) is attained for the following distribution function:
$$F(x) = 1 - \left(1 - \alpha\left(\frac{1}{1+\alpha} + \frac{x-\mu}{\sigma a_*}\sqrt{\frac{(2\alpha+1)\left(2a_*b_* + (\alpha+1)b_*^2\right) + 2a_*^2}{(1+\alpha)(2\alpha+1)}}\right)\right)^{1/\alpha}.$$
Proof. Fix $k = 1$. Let us first consider case (i), i.e. $\alpha < 0$. Here we have to add the additional restriction $\alpha > -\frac{1}{2}$, which has been mentioned at the end of Section 2. If $n = 1$, then the function $h_\alpha$ is increasing and concave, hence its projection onto $\mathcal{C}_{W_\alpha}$ is $h_\alpha$ itself. The bound can be determined via its norm, whose square is given by
$$\|P_\alpha h_\alpha\|^2 = \|h_\alpha\|^2 = \int_0^d \left(\hat g^{(k)}_n(x) - 1\right)^2 w_\alpha(x)\, dx = \frac{k^{2(n+1)}(2n)!}{(n!)^2(2k-1)^{2n+1}} - 1, \tag{3.12}$$
since
$$\int_0^d \left[\hat g^{(k)}_n(x)\right]^2 w_\alpha(x)\, dx = \int_0^1 \left[g^{(k)}_n(x)\right]^2 dx = \frac{k^{2(n+1)}(2n)!}{(n!)^2(2k-1)^{2n+1}} \int_0^1 g^{(2k-1)}_{2n}(x)\, dx = \frac{k^{2(n+1)}(2n)!}{(n!)^2(2k-1)^{2n+1}}.$$
Taking into account that $k$ as well as $n$ are equal to one, formula (3.12) implies (3.5).
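As a quick numerical sanity check of (3.12) (our own verification, not part of the paper):

```python
# Verify: int_0^1 (g_n^(k)(x) - 1)^2 dx = k^(2(n+1)) (2n)! / ((n!)^2 (2k-1)^(2n+1)) - 1
import math
from scipy.integrate import quad

def g(n, k, x):
    # pdf of the n-th value of the k-th record from the standard uniform cdf
    return k**(n + 1) / math.factorial(n) * (-math.log(1 - x))**n * (1 - x)**(k - 1)

n, k = 2, 3
closed = k**(2*(n+1)) * math.factorial(2*n) / (math.factorial(n)**2 * (2*k - 1)**(2*n + 1)) - 1
numeric, _ = quad(lambda x: (g(n, k, x) - 1)**2, 0, 1)
print(closed, numeric)  # the two values should agree to quadrature accuracy
```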
Suppose now that $n \ge 2$. Note that in this case $h_\alpha$ satisfies conditions (A′). Using Proposition 2, we have the following projection of $h_\alpha$ onto the cone $\mathcal{C}_{W_\alpha}$:
$$P_\alpha h_\alpha(x) = \begin{cases} \hat g^{(k)}_n(y_*) - 1 + \lambda_\alpha(y_*)(x-y_*), & 0 \le x < y_*, \\ \hat g^{(k)}_n(x) - 1, & y_* \le x \le d. \end{cases}$$
An appropriate counterpart of the function (2.2) in our case is
$$\lambda_\alpha(y) = \lambda_{W_\alpha}(y) = \frac{\hat g^{(k)}_n(y)\int_0^y W_\alpha(x)\, dx - \int_0^y \hat G^{(k)}_n(x)\, dx}{2y\int_0^y W_\alpha(x)\, dx - 2\int_0^y x\, W_\alpha(x)\, dx}, \tag{3.13}$$
with $w_\alpha = W_\alpha'$, since simple calculations show that
$$\int_0^y (x-y)\, w_\alpha(x)\, dx = -\int_0^y W_\alpha(x)\, dx, \tag{3.14}$$
$$\int_0^y (x-y)^2\, w_\alpha(x)\, dx = 2\left(y\int_0^y W_\alpha(x)\, dx - \int_0^y x\, W_\alpha(x)\, dx\right), \tag{3.15}$$
$$\int_0^y x\, \hat g^{(k)}_n(x)\, w_\alpha(x)\, dx = y\, \hat G^{(k)}_n(y) - \int_0^y \hat G^{(k)}_n(x)\, dx. \tag{3.16}$$
Having
$$\int_0^y W_\alpha(x)\, dx = y + \frac{(1-\alpha y)^{1+1/\alpha} - 1}{1+\alpha},$$
$$\int_0^y x\, W_\alpha(x)\, dx = \frac{y^2}{2} + \frac{(1-\alpha y)^{1+1/\alpha}}{1+\alpha}\, y + \frac{(1-\alpha y)^{2+1/\alpha} - 1}{(1+\alpha)(1+2\alpha)},$$
for $0 \le y \le d$, we conclude that (3.13) takes the form
$$\lambda_\alpha(y) = \frac{(1+2\alpha)\left(\left[y(1+\alpha) - 1 + (1-\alpha y)^{1/\alpha+1}\right]\hat g^{(k)}_n(y) - (1+\alpha)\int_0^y \hat G^{(k)}_n(x)\, dx\right)}{2 - 2(1-\alpha y)^{1/\alpha+2} + y(1+2\alpha)(\alpha y + y - 2)}, \tag{3.17}$$
with $y_* \in \mathcal{T}$. In consequence, for $k = 1$ and $n \ge 2$ we have
$$\|P_\alpha h_\alpha\|^2 = C^2_\alpha(y_*),$$
where $C_\alpha(y)$ is given in Proposition 3. The square root of the expression above determines the optimal bound on (1.6).
Consider now case (iii), with $\alpha > 0$, which requires more explanation. With such parameters the function $h_\alpha$ is increasing and convex. This implies that its projection onto the cone of nondecreasing and concave functions is linear increasing; the justification is similar to, e.g., Rychlik (2014, p. 9). The only possible shape of the closest nondecreasing and concave function to $h_\alpha$ is a linear increasing one, $a_0 x + b_0$ say, which has at most two crossing points with $h_\alpha$. Since
$$\int_0^d h_\alpha(x)\, w_\alpha(x)\, dx = \int_0^d P_\alpha h_\alpha(x)\, w_\alpha(x)\, dx = 0 \tag{3.18}$$
(see e.g. Rychlik (2001)), we obtain
$$b_0 = -\frac{a_0}{1+\alpha}. \tag{3.19}$$
Next, in order to determine the optimal parameter $a_0$, we need to minimize the distance between the function $h_\alpha$ and its projection,
$$D_\alpha(a_0) = \|P_\alpha h_\alpha - h_\alpha\|^2.$$
For $\alpha > 0$ we have $d = 1/\alpha$ and $b_0 = -a_0/(1+\alpha)$. Therefore
$$D_\alpha(a) = a^2 \int_0^{1/\alpha}\left(x - \frac{1}{1+\alpha}\right)^2 w_\alpha(x)\, dx - 2a\int_0^{1/\alpha}\left(x - \frac{1}{1+\alpha}\right) h_\alpha(x)\, w_\alpha(x)\, dx + \int_0^{1/\alpha} h^2_\alpha(x)\, w_\alpha(x)\, dx. \tag{3.20}$$
Using
$$\int_0^{1/\alpha}\left(x - \frac{1}{1+\alpha}\right)^2 w_\alpha(x)\, dx = \frac{1}{(2\alpha+1)(1+\alpha)^2},$$
$$\int_0^{1/\alpha}\left(x - \frac{1}{1+\alpha}\right) h_\alpha(x)\, w_\alpha(x)\, dx = \frac{1}{\alpha(1+\alpha)} - \int_0^{1/\alpha} \hat G^{(1)}_n(x)\, dx,$$
we get the minimum of $D_\alpha$ at $a_*$ given by (3.10). Since $b_0 = -a_0/(1+\alpha)$, we also obtain $b_*$ as in (3.11). Finally, the optimal bound can be determined by calculating the square root of
$$\|P_\alpha h_\alpha\|^2 = \int_0^{1/\alpha} (a_* x + b_*)^2 (1-\alpha x)^{1/\alpha - 1}\, dx,$$
which yields (3.9).
Let us finally consider case (ii), i.e. $\alpha = 0$. Note that for $n = 1$ the function $h_0$ is increasing and concave, and the case is analogous to (i) with $n = 1$, when we get the bound equal to 1. If $n \ge 2$, then $h_0$ is increasing and convex and its projection onto the cone of nondecreasing and concave functions is linear, as in case (iii). Here the analogue of (3.19) is $b_0 = -a_0$. For $\alpha = 0$ we have $d = \infty$ and $w_0(x) = e^{-x}$, which gives us the distance function $D_0(a)$, minimized for $a_* = n$. Hence $b_* = -n$, and we get the optimal bound equal to $n$.
The distributions for which the equalities are attained in all the above cases can be determined using condition (1.12) with $W = W_\alpha$ and $h_W = h_\alpha$.
### 3.2 Bounds for the kth records, k≥2
As soon as we give some auxiliary calculations, we will be ready to formulate the results on the upper bounds for the expected $k$th record values based on the IGFR($\alpha$) family of distributions. For $W = W_\alpha$ being the GPD distribution, the corresponding function (2.1) is given by
$$T_\alpha(\beta) = T_{W_\alpha}(\beta) = (1 - W_\alpha(\beta))\left[-\frac{1}{k}\sum_{i=0}^{n-1} \hat g^{(k)}_i(\beta) + \left(1 - \frac{1}{k}\right)\hat g^{(k)}_n(\beta)\right], \quad 0 \le \beta \le d.$$
|
2021-01-23 22:41:29
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9423755407333374, "perplexity": 841.7019092687058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703538741.56/warc/CC-MAIN-20210123222657-20210124012657-00001.warc.gz"}
|
http://gate-exam.in/EE/Syllabus/Electrical-Engineering/Power-Systems/Symmetrical-Components
|
# GATE Questions & Answers of Symmetrical Components
## What is the Weightage of Symmetrical Components in GATE Exam?
In total, 1 question has been asked from the Symmetrical Components topic of the Power Systems subject in previous GATE papers, with average marks 1.00.
The series impedance matrix of a short three-phase transmission line in phase coordinates is $\begin{bmatrix}Z_s&Z_m&Z_m\\Z_m&Z_s&Z_m\\Z_m&Z_m&Z_s\end{bmatrix}$. If the positive-sequence impedance is $\left(1+j10\right)\;\Omega$ and the zero-sequence impedance is $\left(4+j31\right)\;\Omega$, then the imaginary part of $Z_m\;(\text{in}\;\Omega)$ is ______ (up to 2 decimal places).
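For reference, the standard symmetrical-component identities give a quick route to the answer (worked out here; not part of the original page). For a symmetric matrix with diagonal $Z_s$ and off-diagonal $Z_m$,
$$Z_1 = Z_s - Z_m, \qquad Z_0 = Z_s + 2Z_m \quad\Longrightarrow\quad Z_m = \frac{Z_0 - Z_1}{3} = \frac{(4+j31)-(1+j10)}{3} = 1 + j7\ \Omega,$$
so the imaginary part of $Z_m$ is 7.00.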
|
2019-04-20 09:01:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9683712124824524, "perplexity": 2599.6946041130236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529472.24/warc/CC-MAIN-20190420080927-20190420102927-00350.warc.gz"}
|
https://www.physicsforums.com/threads/radiation-in-the-far-field-for-a-current-carrying-loop.786264/
|
Radiation in the far field for a current carrying loop
1. Dec 7, 2014
Mr. Rho
Hi people, I have a problem with some integral here.
I have a loop of radius a carrying a current I = I₀e^(-iωt'), and I am trying to calculate the radiated fields in the far zone. My procedure is:
Current density: J(r',t') = I₀ δ(r'-a) δ(θ'-π/2) e^(-iωt')/(2πa²) φ̂ (φ direction)
Here t' = t - |r-r'|/c (retarded time)
I evaluate the vector potential: A(r,t) = ∫vJ(r',t')dV/|r-r'|
I approximate 1/|r-r'| ≈ 1/r, because in the far zone this factor varies much more slowly than the exponential, and |r-r'| ≈ r - a sinθ cos(φ-φ') inside the exponential, where it plays a major role.
The problem is that I reach integrals that I can't solve:
∫ sinφ' e^(-ika sinθ cos(φ-φ')) dφ'
for the x direction,
and ∫ cosφ' e^(-ika sinθ cos(φ-φ')) dφ' for the y direction (both integrals from 0 to 2π).
Maybe I'm doing something wrong, but I don't know, any help?
(r' are the source coordinates and r the observer coordinates)
2. Dec 7, 2014
TSny
Usually it is assumed that you are working with low enough frequencies and a small enough loop that you can take $a$ to be small in relation to the wavelength of the radiation. Then you can approximate $e^{-ika \sin \theta \cos (\phi - \phi ')}$ by the first terms of its Taylor expansion in $ka$.
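As an aside (a standard identity, not posted in the thread), the angular integrals also have closed forms in terms of Bessel functions:
$$\int_0^{2\pi}\sin\phi'\,e^{-iz\cos(\phi-\phi')}\,d\phi' = -2\pi i\,J_1(z)\,\sin\phi,\qquad z = ka\sin\theta,$$
and similarly with $\cos\phi'$, giving $-2\pi i\,J_1(z)\cos\phi$. For $z \ll 1$, $J_1(z)\approx z/2$ reproduces the small-loop approximation.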
|
2017-12-11 08:26:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9039121866226196, "perplexity": 1528.895551419896}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512584.10/warc/CC-MAIN-20171211071340-20171211091340-00635.warc.gz"}
|
https://zbmath.org/?q=an:1044.46011
|
zbMATH — the first resource for mathematics
Banach spaces with few operators. (English) Zbl 1044.46011
Johnson, W. B. (ed.) et al., Handbook of the geometry of Banach spaces. Volume 2. Amsterdam: North-Holland (ISBN 0-444-51305-1/hbk). 1247-1297 (2003).
The focal points of this beautiful exposition are a number of longstanding open problems in the geometry of infinite-dimensional Banach spaces which were solved in the 1990s by the author and W. T. Gowers. These include the following problems. (1) The hyperplane problem: is every (infinite-dimensional Banach space) $$X$$ isomorphic to its hyperplanes? (2) The unconditional basic sequence problem: does every $$X$$ contain an unconditional basic sequence? And can every $$X$$ be decomposed as $$X= W\oplus Y$$ where $$W$$ and $$Y$$ are closed infinite-dimensional subspaces? The author presents a nice short history of these and quite a few more related problems. The unifying theme of many of the problems considered is to construct a Banach space $$X$$ with very few operators (e.g., any bounded linear operator $$T$$ on $$X$$ has the form $$T= \lambda \text{ Id } +S$$ where $$S$$ is strictly singular) or, more generally, with a prescribed class of operators.
The main part of the paper is devoted to the construction of hereditarily indecomposable (H. I.) Banach spaces (i.e., if $$Y$$ is a subspace of $$X$$ with $$Y = W\oplus Z$$ then $$W$$ or $$Z$$ must be finite dimensional) which was first achieved by the author and W. T. Gowers. Again, the history and motivation for the construction is presented (Tsirelson’s space and Schlumprecht’s space). The author also presents a proof of the theorem of S. Argyros and V. Felouzis that $$\ell_p$$ ($$1<p<\infty$$) is a quotient of an H.I. space.
This paper is a superb summary of an exciting period in Banach space theory.
For the entire collection see [Zbl 1013.46001].
MSC:
46B03 Isomorphic theory (including renorming) of Banach spaces
46B20 Geometry and structure of normed linear spaces
46B15 Summability and bases; functional analytic aspects of frames in Banach and Hilbert spaces
46-00 General reference works (handbooks, dictionaries, bibliographies, etc.) pertaining to functional analysis
|
2021-06-19 01:30:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7609285712242126, "perplexity": 509.34233469768867}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487643354.47/warc/CC-MAIN-20210618230338-20210619020338-00282.warc.gz"}
|
http://mexproject.it/jele/poisson-equation-in-semiconductor.html
|
# Poisson Equation In Semiconductor
and Zou, W. Keywords: Boltzmann-Poisson system for semiconductors, WENO scheme, spherical coordinate system. The Boltzmann transport equation (BTE) describes electron transport in semiconductor devices. The Cauchy problem for the 1-D Euler-Poisson system represents a physically relevant hydrodynamic model, but also a challenging case for a bipolar semiconductor. It should be noticed that the delta function in this equation implicitly defines the density, which is important in order to interpret the equation correctly in actual physical quantities. In a doped semiconductor the mass-action law n·p = ni² holds: if doped with donors, n ≈ N_D; if doped with acceptors, p ≈ N_A. Here ε₀ is the permittivity of free space, ε_s is the permittivity of the semiconductor, and -x_p and x_n are the edges of the depletion region. Such a model provides a general method for ionic-current simulation in semiconductor-based nanodevices of arbitrary geometry, with a primary focus on nanoporous devices. An effective iterative finite difference method for solving the nonlinear Poisson equation of semiconductor device theory is presented below.
Poisson's equation, one of the basic equations of electrostatics, is derived from Maxwell's equations and the material relation D = εE, where D stands for the electric displacement field, E for the electric field, ρ is the charge density, and ε the permittivity. In recent decades the Schrodinger-Poisson system has been studied widely by many authors, because it has a strong physical background and interesting meaning; in the macroscopic limit the system is governed by the classical drift-diffusion model. Consider a 3D MOSFET as shown in Fig. 1: the potential φ(x, y, z) satisfies the Poisson equation in the semiconductor as follows [16-19]:

∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z² = -(q/ε_s)(p - n + N_D⁺ - N_A⁻).

An example of its application to an FET structure is then presented. Equivalently, one solves ∇·(ε∇V) = -q(p - n + N_D⁺ - N_A⁻) with a number of boundary conditions; for an applied voltage V_b, the potential has boundary conditions of the form V(0) = 0, V(L) = V_b. The basic functionality of an EEPROM device can be understood with a complete electrostatic analysis, making it an ideal application for such a solver; applications range from flows to semiconductor modeling to tissue engineering. Finally, putting the carrier densities into Poisson's equation yields a single equation for the potential. As a benchmark, a model of a GaAs nanowire demonstrates how to use this feature in the Semiconductor Module, an add-on product to the COMSOL Multiphysics® software. The PNP system of equations is analyzed, written as

∇²ψ = -(q/ε)(p - n + N_D - N_A),

where ψ is the electrostatic potential, p is the hole concentration, n is the electron concentration, and N_D, N_A are the donor and acceptor concentrations. In this framework one can analyze the impact of aspect ratio on random dopant fluctuation in multi-gate devices, or consider the periodic problem for 2-fluid nonisentropic Euler-Poisson equations in semiconductors. A frequently asked practical question is how to solve the continuity equations together with the Poisson equation. The transmission line matrix (TLM) numerical technique offers some novel aspects for modeling processes in semiconductor materials and devices. For a confined carrier gas, the coupled Schrödinger-Poisson equations have the following form:

-(ℏ²/2m_z) d²ξ(z)/dz² - qV(z)ξ(z) = E₀ξ(z),   (1a)

d²V(z)/dz² = (qN₀/ε_s)|ξ(z)|²,

where ξ is the envelope wave function. (As a historical aside, the Poisson distribution of probability theory is named after Simeon-Denis Poisson (1781-1840).) The nonlinear Poisson equation also admits an analytical solution for the investigated 1-D symmetric DG-MOSFET. For the underlying device physics — bands and gap, impurities, electrons and holes, the position of the Fermi level in intrinsic and extrinsic material, and the p-n junction with its band bending, depletion region, and forward and reverse biasing — see Sze, Physics of Semiconductor Devices. The SHE-Poisson system describes carrier transport in semiconductors with a self-induced electrostatic potential. In macroscopic semiconductor device modeling, Poisson's equation and the continuity equations play a fundamental role: the charge transport equations are coupled to Poisson's equation for the electrostatic potential, and this cycle of solving the two differential equations is iterated to convergence. Diffusion current is the current due to the transport of charges occurring because of a non-uniform concentration of charged particles in a semiconductor.
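For completeness, the derivation sketched above can be stated in two lines (standard electrostatics):
$$\nabla\cdot\mathbf{D}=\rho,\qquad \mathbf{D}=\epsilon\mathbf{E},\qquad \mathbf{E}=-\nabla V \quad\Longrightarrow\quad \nabla\cdot(\epsilon\nabla V)=-\rho,$$
which, for spatially constant $\epsilon$, reduces to $\nabla^2 V = -\rho/\epsilon$.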
The Schrödinger and Poisson equations are self-consistently solved in a finite quantum box which includes the whole metal-insulator-semiconductor structure; the Schrödinger equation is solved by the split-operator method, while a relaxation method is used for the Poisson equation. The Poisson-Boltzmann equation aims to describe the distribution of the electric potential in solution in the direction normal to a charged surface. Re-writing Poisson's equation using the band-bending parameter and inserting the charge density ρ(x) for a uniformly doped n-type semiconductor gives the Poisson-Boltzmann equation for that material,

d²ψ/dx² = (qN_D/ε_s)(e^(qψ/kT) - 1),

where ε_s is the semiconductor permittivity (for silicon, about 11.7 ε₀). Since exp(x) > F_{1/2}(x) for x > 0, Maxwell-Boltzmann statistics become invalid under degenerate conditions [1]. AQUILA is a MATLAB toolbox for the one- or two-dimensional simulation of the electronic properties of GaAs/AlGaAs semiconductor nanostructures. The effects of drift and diffusion coupled with Poisson's equation do not by any means give an exhaustive account of all the physics involved in the operation of a semiconductor device. Vlasov and generalized Poisson equations are likewise used to obtain the energies of oscillations in nuclei. Solving Poisson's equation for the potential requires knowing the charge density distribution, and a new iterative method for solving the discretized nonlinear Poisson equation of semiconductor device theory is presented. You can choose between solving your model with the finite volume method or the finite element method; the Poisson solver will typically take about 90% of the total simulation time, so efficient methods for the solution of 2D and 3D Poisson's equations are desired, and it becomes very important to develop an efficient Poisson solver to enable 3D-device-based multi-scale simulation. For these systems, the main challenge lies in the efficient and accurate solution of the self-consistent one-band and multi-band Schrödinger-Poisson equations; the semiclassical Boltzmann transport equation (BTE) coupled with the Poisson equation serves as a general theoretical framework. The non-homogeneous version of Laplace's equation, -Δu = f, is called Poisson's equation; in mathematics it is a partial differential equation of elliptic type with broad utility in mechanical engineering and theoretical physics, and a special case (Laplace's equation) is obtained for a homogeneous, isotropic and linear medium if there is no charge in the space. The existence of the Euler-Poisson model, a simplified version of the hydrodynamic model, for unipolar semiconductor devices at steady state is examined first. Poisson's equation can be solved separately in the n-type and p-type regions, as was done in Section 3, and the resulting transport equations are used for simulating the charge transport in a silicon MOSFET.
The Poisson-Nernst-Planck (PNP) model couples the Nernst-Planck equation (which describes the diffusion of ions under the effect of an electric potential) with the Poisson equation (which relates charge density to electric potential). One of the central problems in traditional mesh-based methods is the assignment of charge to the regular mesh imposed for the discretisation. When we apply a field to a MOS structure, what happens in the semiconductor, and what is the charge profile? We need to calculate the electrostatic potential and charge density in the channel beneath the oxide (or insulating layer). The main idea is to use iterative schemes to solve a system of linear partial differential equations together with nonlinear algebraic equations, instead of solving a fully nonlinear system of partial differential equations. Self-consistent semiconductor device modeling requires repeated solution of the 2D or 3D Poisson equation that describes the potential profile in the device for a given charge distribution. The numerical modelling of semiconductor devices is usually based on four coupled differential equations: the Poisson equation, the electron and hole balance equations (called current continuity equations), and the energy balance equation; all four equations are non-linear. In the kinetic picture, the temporal evolution of the electron distribution function f(t, x, k) in semiconductors, depending on time t, position x and electron wave vector k, is governed by the Boltzmann transport equation coupled to the Poisson equation [10]. The possible local charge unbalance requires that the Poisson equation be included. The Poisson-Boltzmann equation arises because in some cases the charge density ρ depends on the potential ψ; Fig. 1-14 shows the positions of the Fermi levels in an N-type semiconductor and in a P-type semiconductor, respectively. Nernst-Planck equations can be solved in complex chemical and biological systems with multiple ion species by substituting them with Boltzmann distributions of ion concentrations. A uniqueness theorem holds for Poisson's equation: consider ∇²Φ = σ(x) in a volume V with surface S, subject to so-called Dirichlet boundary conditions Φ(x) = f(x) on S, where f is a given function defined on the boundary. Poisson-Nernst-Planck equations are the basic continuum model of ionic permeation and of semiconductor physics. We would like to point out that the Euler-Poisson equation is closely related to the Schrödinger-Poisson equation via the semi-classical limit, and to the Vlasov-Poisson equation as well as the Wigner equation.
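To make the self-consistent iteration concrete, here is a hedged sketch (ours, not taken from any of the cited tools) of a damped Newton solve of the 1D nonlinear Poisson-Boltzmann equation for an abrupt p-n junction; all parameter values are illustrative:

```python
# 1D nonlinear Poisson-Boltzmann: d2(psi)/dx2 = (q/eps)*(n - p - Ndop),
# with n = ni*exp(psi/Vt), p = ni*exp(-psi/Vt), solved by damped Newton.
import numpy as np

q, eps, ni, Vt = 1.602e-19, 1.04e-10, 1.45e16, 0.02585   # SI, silicon-like values
L, N = 2e-6, 401
x = np.linspace(0.0, L, N); h = x[1] - x[0]
Ndop = np.where(x < L / 2, -1e22, 1e22)                   # N_D - N_A [m^-3]

psi = Vt * np.arcsinh(Ndop / (2 * ni))                    # charge-neutral initial guess
for _ in range(200):
    n = ni * np.exp(psi / Vt)
    p = ni * np.exp(-psi / Vt)
    F = np.zeros(N); J = np.zeros((N, N))
    J[0, 0] = J[-1, -1] = 1.0                             # Dirichlet rows: ends stay fixed
    for i in range(1, N - 1):
        F[i] = (psi[i-1] - 2*psi[i] + psi[i+1]) / h**2 - q/eps * (n[i] - p[i] - Ndop[i])
        J[i, i-1] = J[i, i+1] = 1.0 / h**2
        J[i, i] = -2.0 / h**2 - q/eps * (n[i] + p[i]) / Vt
    dpsi = np.linalg.solve(J, -F)
    psi += np.clip(dpsi, -Vt, Vt)                         # damping for robustness
    if np.max(np.abs(dpsi)) < 1e-9:
        break
print("built-in potential [V]:", psi[-1] - psi[0])        # ~2*Vt*ln(Ndop/ni) expected
```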
Based on the numerical solution of the Schrodinger-Poisson (SP) equations, the new Poisson equation developed is optimized with respect to, among other things, (1) the position of the charge centroid. The Vlasov-Poisson equations arise in semiconductor device modeling [23] and in plasma physics [18]. The semiconductor Boltzmann equation (BTE) gives quite accurate simulation results, but the numerical methods used to solve it (for example the Monte Carlo method) are too expensive. Unfortunately, the resulting problem is a non-linear differential equation. It can be shown that the solutions converge to the stationary solutions exponentially in time. In strongly quantized regimes the bare Poisson equation fails to model the physics accurately; quantum kinetic corrections lead to what may now be called Boltzmann-Bloch equations [15, 23]. Abram (1996) combines the techniques of fast Fourier transforms, Buneman cyclic reduction and the capacity matrix in a finite difference Poisson solver specifically designed for modelling realistic electronic device structures. Analytical solutions to the steady-state Poisson-Nernst-Planck (PNP) systems of equations are available for situations relevant to applications involving bioelectric dressings and bandages. Solvers of this family exist in Cartesian 1Dx-1Dv and 2Dx-2Dv versions, and the Poisson equation that arises in electrostatics can also be posed in weak variational form.
Abstract: In this talk we consider the well-posedness, ill-posedness and regularity of stationary solutions to Euler-Poisson equations with sonic boundary for semiconductor models, and prove that, when the doping profile is subsonic, the corresponding system with sonic boundary possesses a unique interior subsonic solution and at least one interior supersonic solution; further structure results hold when the relaxation time is large and the doping profile is a small perturbation of a constant. Realistic semiconductor device simulation (classical, Monte Carlo or quantum mechanical) in many cases requires a 3D solution of the Poisson equation and leads to enormous problem sizes [1]. A review of the fast convergent Schroedinger-Poisson solver for the static and dynamic analysis of carbon nanotube field effect transistors is given by Pourfath et al. [74]. It can be shown that the spin polarization of electrons in the semiconductor, Pn, near the interface increases with both the forward and the reverse current and reaches saturation at relatively large bias. A novel efficient numerical solution of Poisson's equation for arbitrary shapes in two dimensions is presented by Zu-Hui Ma, Weng Cho Chew and Li Jun Jiang. Modeling and 2d-simulation of quantum-well semiconductor lasers including the Schrödinger-Poisson system has also been carried out. Under the depletion approximation, the only equation left to solve is Poisson's equation, with n(x) and p(x) set to zero, an abrupt doping profile and fully ionized dopant atoms.
With the scaling down of semiconductor devices, it is increasingly important to simulate their characteristics by solving the Schrodinger and Poisson equations self-consistently, for instance in layered semiconductor devices [12]. The continuity equations can be derived by applying the divergence operator to Ampère's law and noting that the divergence of the curl of any vector field equals zero. The electrostatic potential distribution also determines how electrostatic interactions will affect molecules in solution. As an exercise, consider a p-type semiconductor with acceptor doping N_A and derive the corresponding Poisson-Boltzmann equation from Poisson's equation with Boltzmann statistics for the carriers. As the frequency approaches the THz regime, the quasi-static approximation fails and full-wave dynamics must be considered. In electrostatics, the electric field E can be expressed in terms of an electric potential φ, E = -∇φ (1), and the potential itself satisfies Poisson's equation, ∇²φ = -ρ/ε (2). Conversely, if we have knowledge of a potential field, with the aid of Poisson's equation we can find the density of charge causing the field.
The Poisson equation is discretized using the central difference approximation for the second derivative. For the drift-diffusion equations, a special discretization approach called Scharfetter-Gummel is needed in order to ensure numerical stability. "A numerical study of the Gaussian beam methods for one-dimensional Schrödinger-Poisson equations" (Shi Jin, Hao Wu, and Xu Yang, June 6, 2009): as an important model in quantum semiconductor devices, the … In Bloch's approximation, we derive a telegrapher's-Poisson system for the electron number density and the electric potential, which could allow simple semiconductor calculations while still including wave-propagation effects. For example, under steady-state conditions there can be no change in the amount of energy storage (∂T/∂t = 0). How to solve the continuity equations together with the Poisson equation? Working a lot with semiconductor physics, I wonder if there is a way to solve the common … The Poisson equation can be solved separately in the case of thermal equilibrium. Therefore Poisson's equation, given by the governing PDE and its boundary conditions, can be written using the weighted residual method (WRM) with suitable weighting functions. The PNP equations are also known as the drift-diffusion equations for the description of currents in semiconductors. The temporal evolution of the electron distribution function f(t, x, k) in semiconductors, depending on time t, position x, and electron wave vector k, is governed by the Boltzmann transport equation [10]
$$\frac{\partial f}{\partial t} + v(k)\cdot\nabla_x f - \frac{q}{\hbar}\,E\cdot\nabla_k f = Q(f).$$
This system of equations has found much use in the modeling of semiconductors [24]. These equations, together with the Poisson equation, constitute the well-known drift-diffusion model. As a consequence, numerical methods have been developed which allow for reasonably efficient computer simulations in many cases of practical relevance. Yield modeling: each semiconductor manufacturer has its own methods for modeling and predicting the yield of new products, estimating the yield of existing products, and verifying suspected causes of yield loss. The potential V in the Poisson equation, with an applied voltage V_b, has boundary conditions of the form V(0) = 0, V(L) = V_b. (14) We would like to point out that the Euler-Poisson equation is closely related to the Schrödinger-Poisson equation via the semi-classical limit, and to the Vlasov-Poisson equation as well as the Wigner equation. The Schroedinger-Poisson equations, and every set of approximate equations given in the previous section, have the general structure (31) Lφ = S(Ψ), H(φ)Ψ = EΨ, where L is a Poisson operator, S(Ψ) is the source density due to any doping and the occupied states, and H(φ) is the Schroedinger operator with a potential depending on φ. We are interested in the deterministic computation of the transients for the Boltzmann-Poisson system describing electron transport in semiconductor devices. We will derive the Fermi energy level for a uniformly doped semiconductor. The Semiconductor interface solves Poisson's equation in conjunction with the continuity equations for the charge carriers.
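To make the central-difference discretization just described concrete, here is a minimal sketch of a 1D Poisson solver with Dirichlet boundary conditions. It is generic illustration code, not taken from any of the packages cited above; the mesh size, permittivity, and charge values in the usage lines are assumptions.

```python
import numpy as np

def solve_poisson_1d(rho, eps, dx, phi_left, phi_right):
    """Solve d2(phi)/dx2 = -rho/eps on a uniform 1D mesh with Dirichlet BCs."""
    n = len(rho)                      # number of interior mesh points
    A = np.zeros((n, n))
    b = -rho * dx**2 / eps            # RHS of the discretized equation
    for i in range(n):
        A[i, i] = -2.0                # central difference: phi[i-1] - 2*phi[i] + phi[i+1]
        if i > 0:
            A[i, i - 1] = 1.0
        if i < n - 1:
            A[i, i + 1] = 1.0
    b[0] -= phi_left                  # fold the fixed boundary values into the RHS
    b[-1] -= phi_right
    return np.linalg.solve(A, b)

# Illustrative use: a slab with uniform positive space charge (SI units assumed)
q, eps0 = 1.602e-19, 8.854e-12
phi = solve_poisson_1d(rho=np.full(99, q * 1e21), eps=11.7 * eps0,
                       dx=1e-9, phi_left=0.0, phi_right=0.0)
```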
An algorithm for this non-linear problem is presented in a multiband k·p framework for the electronic band structure, using the finite element method. Solving the BTE numerically is not an easy task, because it is an integro-differential equation with six dimensions in position-wave-vector space and one in time. From "Finite difference scheme for semiconductor Boltzmann equation": the BTE for electrons and one conduction band is as displayed above [3], [6]. Similarly to the Poisson equation, the general form of the Schrödinger equation (2) will be expressed in paragraph 2.3 in cylindrical coordinates. This cycle of solving the two differential equations is iterated to convergence. We investigate, by means of the techniques of symmetrizer and an induction argument on the order of the mixed time-space derivatives of solutions in energy estimates, the periodic problem in a three-dimensional torus. The semiconductor structure can impose a significant effect on the charge distribution in the mechanical components of NEMS. The Spherical Harmonics Expansion (SHE) assumes a momentum distribution function depending only on the microscopic kinetic energy. When using the depletion approximation, we assume that the carrier concentrations (n and p) are negligible compared to the net doping concentration (N_A and N_D) in the region straddling the metallurgical junction, otherwise known as the depletion region. In semiconductor devices and physics, the Poisson equation is applied to describe the variation of the electrostatic potential within a specified region [16]. To motivate the work, we provide a thorough discussion of the Poisson-Boltzmann equation, including a derivation from a few basic assumptions, discussions of special-case solutions, as well as common (analytical) approximation techniques. The possible local charge unbalance requires that the Poisson equation be included. Finally, putting these into Poisson's equation yields a single equation for the potential. No smallness and regularity conditions are assumed. The Poisson equation is solved in a rectangular prism of semiconductor with the boundary conditions commonly used in semiconductor device modeling; there is a planar heterojunction inside the prism. Under most circumstances the equations can be simplified, and 2-D and 1-D models might be sufficient. The Poisson equation, the continuity equations, and the drift and diffusion current equations are considered the basic semiconductor equations. "A fast Poisson solver for realistic semiconductor device structures". The boundary conditions of the Schrödinger and Poisson equations are also an important issue.
This potential alters the initial band-edge potential with flat bands, and Schrödinger's equation is solved once again for the new total potential energy. Constant thermal conductivity and steady-state heat transfer: Poisson's equation. Before we detail the derivation of the model, we briefly introduce some basic notions of semiconductor theory. As a result, efficient methods for the solution of 2D and 3D Poisson equations are desired. Specifically, like [Kaz05], we compute a 3D indicator function χ (defined as 1 at points inside the model and 0 at points outside), and then obtain the … A general method for the study of quantum effects in accumulation layers is presented. Semicond. Sci. Technol. 19 (2004) 917–922: "A quantum correction Poisson equation for metal-oxide-semiconductor structure simulation", Yiming Li, Department of Computational Nanoelectronics, National Nano Device Laboratories. The goal here is to discuss the influence of the relaxation mechanism and the Poisson coupling on the existence and asymptotic behavior of (weak) entropy solutions. A novel strategy for calculating excess chemical potentials through fast Fourier transforms is proposed, which reduces the computational complexity from O(N²) to O(N log N), where N is the number of grid points. The object of this research is to further understand the hydrodynamic model for semiconductor devices derived from moments of Boltzmann's equation. A general Poisson equation for electrostatics is given by
$$\frac{d}{dx}\left[\varepsilon_s(x)\,\frac{d\phi(x)}{dx}\right] = -\frac{q\left[N_D(x) - n(x)\right]}{\varepsilon_0}.$$
We study spin transport in forward- and reverse-biased junctions between a ferromagnetic metal and a degenerate semiconductor with a δ-doped layer near the interface, at relatively low temperatures. Applications range from fluid flows to semiconductor modeling to tissue engineering.
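The cycle just described is, in essence, a fixed-point iteration. Here is a minimal skeleton; `solve_schroedinger` and `solve_poisson` are placeholders for your actual discretized solvers, and the mixing factor and tolerance are illustrative values, not prescriptions from the sources above.

```python
import numpy as np

def schroedinger_poisson(potential0, solve_schroedinger, solve_poisson,
                         mix=0.3, tol=1e-6, max_iter=200):
    """Iterate Schroedinger -> charge density -> Poisson until the potential converges."""
    V = potential0.copy()
    for _ in range(max_iter):
        n = solve_schroedinger(V)          # occupied states -> electron density n(x)
        V_new = solve_poisson(n)           # Hartree potential from Poisson's equation
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = (1.0 - mix) * V + mix * V_new  # damped (linear) mixing for stability
    raise RuntimeError("Schroedinger-Poisson loop did not converge")
```

Plain linear mixing is the simplest choice; the Anderson mixing mentioned later in this collage accelerates exactly this loop.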
Hamilton's equations are the Poisson brackets of the coordinates with the Hamiltonian. Since then, this method has been extensively developed and applied to various new fields. It provides a description of a conducting medium in thermal equilibrium, such as an electrolyte. The existence of the Euler-Poisson model, a simplified version of the hydrodynamic model, for unipolar semiconductor devices at steady state is examined first. … solution by solving Poisson's equation analytically. The Poisson distribution is named after Siméon-Denis Poisson (1781–1840); in addition, poisson is French for fish. … in a Schottky barrier junction. To understand yield loss mechanisms, these are mathematically expressed in terms of "yield models": equations that translate defect density distributions into predicted yields. A coupled quantum drift-diffusion Schrödinger-Poisson model for stationary resonant-tunneling simulations in one space dimension is proposed. Suppose space charge is present in the region between P and Q. We present asymptotic-preserving numerical schemes for the semiconductor Boltzmann equation that are efficient in the high-field regime. We report on a self-consistent computational approach, based on the semiclassical steady-state Boltzmann transport equation and the Poisson equation, for the study of charge and spin transport in inhomogeneous semiconductor structures. The two-dimensional stationary Schrödinger-Poisson equation with mixed boundary conditions in non-smooth domains. Applying Gauss's law to the volume shown in Fig. 1, the potential φ(x, y, z) satisfies Poisson's equation in the semiconductor [3]. Efficient solution of the Schroedinger-Poisson equations in layered semiconductor devices. Poisson's equation relates the charge contained within the crystal to the electric field generated by this excess charge, as well as to the electric potential created. When we apply a field to a MOS structure, what happens in the semiconductor — what is the charge profile in the semiconductor?
We need to calculate the electrostatic potential and the charge density in the channel beneath the oxide (or insulating layer). This method has two main advantages: first, it converges for any initial guess (global convergence); secondly, the values of the electric potential are updated at each mesh point by means of explicit formulas (that is, without the solution of simultaneous equations). "Newton-Raphson approach for nanoscale semiconductor devices", Dino Ruić and Christoph Jungemann, Chair of Electromagnetic Theory, RWTH Aachen University — abstract: we present a full Newton-Raphson approach for solving the Poisson, Schrödinger, and Boltzmann equations in a … The Wigner-Poisson equations describe the time evolution of the electron distribution within the RTD (Lasater, Center for Research in Scientific Computing, Department of Mathematics, North Carolina State University). If both donors and acceptors are present in a semiconductor, the dopant in greater concentration dominates, and the one in smaller concentration becomes negligible (see, e.g., [15] and the references therein, as well as the case of irregular domains). The potential φ(x) in a doped semiconductor in thermal equilibrium … Poisson solver and carrier statistics: the Poisson equation in a semiconductor with Maxwell-Boltzmann (MB) statistics, Fermi-Dirac (FD) statistics, and the Fermi-Dirac integral of order 1/2 [1]. This code implements the MCMC and ordinary differential equation (ODE) model described in [1]. The semiclassical Boltzmann transport equation (BTE) coupled with the Poisson equation serves as a general theoretical framework for … The nonlinearities of the Poisson-Boltzmann equation make it a formidable problem for both analytical and numerical techniques. Two nonlinear relaxation methods are presented to solve the discretized equations; both minimize appropriate functionals. Semiconductor device behaviour is described and governed by Poisson's equation,
$$\nabla^2\psi = -\frac{q}{\varepsilon_s}\left[p(x) - n(x) + N(x)\right],$$
where N(x) = N_d − N_a is the position-dependent effective (net) doping density, N_d the donor density, and N_a the acceptor density. The typical TCAD flow runs: customer need → process simulation → device simulation → parameter extraction → circuit-level simulation (Fig. 1). "Diffusion limit of a semiconductor Boltzmann-Poisson system" (Mohamed Lazhar Tayeb). A Poisson equation in which the Maxwell-Boltzmann relation is also used. ELECTRONICS: semiconductor diodes; Laplace's and Poisson's equations. Finding the scalar potential from the Poisson equation is a common yet challenging problem in semiconductor modeling. … [solving Eqs. (1)–(2)] for n(x) and E(x) using a finite difference method. An iterative technique in which Poisson's equation and the continuity equations are alternately solved until the desired accuracy is obtained for each time step.
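In the equilibrium case with Maxwell-Boltzmann statistics, the nonlinear Poisson equation can be solved with a damped Newton iteration. The sketch below uses a simplified diagonal (Jacobi-style) Newton update rather than a full tridiagonal solve, and all material numbers are illustrative silicon-like assumptions, not values from the works cited above.

```python
import numpy as np

def equilibrium_poisson(Ndop, dx, ni=1.5e16, Vt=0.0259, eps=11.7 * 8.854e-12,
                        q=1.602e-19, tol=1e-10, max_iter=50):
    """Newton solve of psi'' = (q/eps)*(n - p - Ndop), with Boltzmann statistics
    n = ni*exp(psi/Vt), p = ni*exp(-psi/Vt).  SI units, densities per m^3.
    Boundary nodes are pinned at their charge-neutral values (a crude BC).
    """
    psi = Vt * np.arcsinh(Ndop / (2.0 * ni))     # charge-neutral initial guess
    N = len(psi)
    for _ in range(max_iter):
        n = ni * np.exp(psi / Vt)
        p = ni * np.exp(-psi / Vt)
        lap = np.zeros(N)
        lap[1:-1] = (psi[:-2] - 2 * psi[1:-1] + psi[2:]) / dx**2
        F = lap - (q / eps) * (n - p - Ndop)     # residual of the nonlinear system
        F[0] = F[-1] = 0.0                       # keep boundary nodes fixed
        J = -2.0 / dx**2 - (q / eps) * (n + p) / Vt   # diagonal of the Jacobian
        dpsi = -F / J
        psi += np.clip(dpsi, -Vt, Vt)            # damp the update for robustness
        if np.max(np.abs(dpsi)) < tol:
            break
    return psi
```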
The first Maxwell equation for the electric field E under these conditions is ∇·(εE) = ρ; it may be modified for a dielectric medium having relative permittivity ε_r. The above equation is referred to as Poisson's equation. The Poisson-Boltzmann equation is often applied to salts, since both positive and negative ions are present in concentrations that vary in space. Poisson's equation and the Einstein relation: from Poisson's equation we get an idea of how the derivative of the electric field changes with the donor or acceptor impurity concentration. One of the central problems in traditional mesh-based methods is the assignment of charge to the regular mesh imposed for the discretisation. (2013) "Global existence and asymptotic behavior of smooth solutions to a bipolar Euler-Poisson equation in a bounded domain". When the system of equations is duly modified by using a scaled, block-limited partial-pivoting procedure of Gauss elimination, it is found that the rate of convergence of the iterative method is significantly improved and that a solution becomes possible. The Boltzmann equation for the charge carriers is coupled to the Poisson equation for the electric potential. Poisson's equation comes from Maxwell's first equation, which in turn is based on Coulomb's law for the electrostatic force of a charge distribution. Poisson's equation, one of the basic equations in electrostatics, is derived from Maxwell's equation ∇·D = ρ and the material relation D = εE, where D stands for the electric displacement field, E for the electric field, and ρ is the charge density. The equations of Poisson and Laplace can be derived from Gauss's theorem. In addition to heat transfer simulation, SibLin is equally suitable for solving 3D Poisson and diffusion equations, or the drift-current spreading equation that describes the resistance of three-dimensional structures. (b) Show that the surface electric field E_s can be obtained from E = −dψ/dx. (c) Derive the …
The above equation is derived for free space. We will start by finishing up on uniform doping in a semiconductor. (2013) "Asymptotic behavior of solutions to Euler-Poisson equations for the bipolar hydrodynamic model of semiconductors". We will then introduce the Poisson equation. Electro-diffusion (Fick's law), electrophoresis (Kohlrausch's laws), and the electrostatic force (Poisson's law): the Nernst-Planck equations describe electro-diffusion and electrophoresis, while Poisson's equation supplies the electrostatic force between ions. "Studies in the Wigner-Poisson and Schrödinger-Poisson Systems", by Bruce V. … Ampère's law reads $\nabla\times\mathbf{H} = \mathbf{J} + \frac{\partial\mathbf{D}}{\partial t}$ (cf. Kittel and Kroemer, chap. …). Condition (2) necessarily has to be imposed for solvability of the problem. The MATLAB solver's header documents its interface: pfunc is the right-hand side of the Poisson equation, bfunc is the boundary function representing the Dirichlet boundary condition, and the output u is the numerical solution of the Poisson equation at the mesh points. The Madelung-type equations derived by Gardner [6] and Gasser et al. [8] also include a pressure term and a momentum relaxation term taking into account interactions of the electrons with the semiconductor crystal, and are self-consistently coupled to the Poisson equation for the electrostatic potential, $\phi_{xx} = n - b(x)$. (1) Then the program solves the coupled current-Poisson-Schroedinger equations in a self-consistent way (input file: LaserDiode_InGaAs_1D_qm_nnp. …). The Poisson-Nernst-Planck equations are relevant in numerous electrobiochemical applications. The 1D version is coupled either to Poisson's equation or to Maxwell's equations, and solves both the relativistic and the non-relativistic Vlasov equations.
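As a small worked instance of the Poisson-Boltzmann equation for an electrolyte: in the linearized (Debye-Hückel) regime the potential decays as ψ(x) = ψ₀·e^(−x/λ_D). The sketch below just evaluates λ_D for an assumed symmetric z:z salt at room temperature; the concentration in the usage line is an illustrative assumption.

```python
import numpy as np

def debye_length(c0, eps_r=78.5, T=298.0, z=1):
    """Debye length of a z:z electrolyte:
    lambda_D = sqrt(eps_r*eps0*kB*T / (2*(z*e)^2*n0)), with c0 in mol/L."""
    kB, e, eps0, NA = 1.381e-23, 1.602e-19, 8.854e-12, 6.022e23
    n0 = c0 * 1000.0 * NA            # mol/L -> ions per m^3
    return np.sqrt(eps_r * eps0 * kB * T / (2.0 * (z * e) ** 2 * n0))

print(debye_length(0.1))             # ~1 nm for 0.1 M monovalent salt
```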
The equations can be discretized using finite differences as follows. Poisson's equation correlates the electrostatic potential to a given charge distribution; such a relation has been the subject of considerable … Diffusion current is the current due to the transport of charges caused by a non-uniform concentration of charged particles in a semiconductor. The Poisson and continuity equations present three coupled partial differential equations in three variables: ψ, n, and p. … (2.5), where ε_s is the semiconductor permittivity and the space-charge density ρ(x) is given by ρ(x) = q(p − n − N_a + N_d). This paper investigates the random dopant fluctuation of multi-gate metal-oxide-semiconductor field-effect transistors (MOSFETs) using analytical solutions of the three-dimensional (3D) Poisson equation, verified with device simulation. Based on the numerical solution of the Schrödinger-Poisson (SP) equations, the new Poisson equation developed is optimized with respect to (1) the position … It simulates both the pn-junction and the sub-gate region of the MISFET for a wide range of material parameters, under both equilibrium and biased conditions. Poisson's equation and the charge density in a semiconductor: assuming the dopants are completely ionized, ρ = q(p − n + N_D − N_A). Work function and metal-semiconductor contacts: there are two kinds of metal-semiconductor contacts.
This paper reviews the numerical issues arising in the simulation of electronic states in highly confined semiconductor structures such as quantum dots. For these systems, the main challenge lies in the efficient and accurate solution of the self-consistent one-band and multi-band Schrödinger-Poisson equations. In modern semiconductor device simulations, classical macroscopic models such as the drift-diffusion and energy-transport models are not adequate to capture the subtle kinetic effects that occur at the nano-scale. It solves for both the electron and hole concentrations explicitly. The fluctuation of the threshold voltage induced by random doping in metal-oxide-semiconductor field-effect transistors (MOSFETs) is analyzed by using a simple technique based on the solution of the two- and three-dimensional nonlinear Poisson equation. In Section 2 we present Poisson's equation, and in Section 3 we describe the self-consistent method which we use to simultaneously solve both the Poisson and Thomas-Fermi equations. Poisson equation finite-difference with pure Neumann boundary conditions. It is shown that the solutions converge to the stationary solutions exponentially in time. "Stability analysis and quasi-neutral limit for the Euler-Poisson equations", Theory of evolution equations and applications to nonlinear problems, RIMS Kyoto University, Japan, October 2016. An example of its application to an FET structure is then presented. The Poisson equation is written with respect to a function φ(x, y, z, …). The single-processor implementation of the corresponding 3D codes is limited by both the processor speed and the huge memory-access bottleneck. (2019) "Multi-dimensional bipolar hydrodynamic model of semiconductor with insulating boundary conditions and non-zero doping profile". The Poisson-Boltzmann equation is derived via a mean-field approximation. The solution of the nonlinear Poisson equation provides the thermal-equilibrium characteristics of the device. Both the parabolic and the quasi-parabolic band approximations are considered. These include the equation for the free-electron concentration n(x, y, z, t) in the conduction band of a semiconductor, and the equation for the ionized-donor concentration N(x, y, z, t).
… yielding an expression for ψ(x = 0) which is almost identical to equation (4.20), assuming the semiconductor to be non-degenerate and fully ionized. Efficient Poisson-equation solvers for large-scale 3D simulations. … with the Euler-Poisson equations — the so-called critical threshold phenomena, where the answer to the question of global vs. local existence depends on whether the initial configuration crosses an intrinsic, O(1) critical threshold. The electron current continuity equation is solved for u^(g+1) given f^(g) and v^(g). Semiconductors: intrinsic semiconductors; free electrons and holes; extrinsic semiconductors; equilibrium in the absence and in the presence of an electric field; semiconductors in nonequilibrium; quasi-Fermi levels; relations between charge density, electric field, and potentials; Poisson's equation; conduction; transit time. It can be included in an introductory course in semiconductor device physics as a demonstration of the numerical analysis of devices. Equation (4) needs to be solved self-consistently with the Schrödinger equation in the semiconductor structure to obtain the potential field and the charge distribution. Keywords: Boltzmann-Poisson system, semiconductor devices, doping profile, inverse problems, parameter identification, inverse doping, drift-diffusion. For the classical calculation (input file: LaserDiode_InGaAs_1D_cl_nnp. …). The nonlinear Poisson equation is replaced by an equivalent diffusion equation. Here are 1D, 2D, and 3D models which solve the semiconductor Poisson-drift-diffusion equations using finite differences. Based on approximations of the potential distribution, our solution scheme successfully takes into account the effect of the doping concentration in each region.
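Since the depletion approximation and the depletion-layer edges −x_p, x_n recur throughout this collage, here is a short sketch of the standard abrupt-junction textbook formulas; the doping values in the usage line are illustrative assumptions.

```python
import numpy as np

def depletion(Na, Nd, ni=1.5e16, Vt=0.0259, eps=11.7 * 8.854e-12,
              q=1.602e-19, Va=0.0):
    """Depletion-approximation quantities for an abrupt Si pn junction (SI units)."""
    Vbi = Vt * np.log(Na * Nd / ni**2)                             # built-in potential
    W = np.sqrt(2 * eps * (Vbi - Va) / q * (Na + Nd) / (Na * Nd))  # total width
    xn = W * Na / (Na + Nd)                                        # n-side extent
    xp = W * Nd / (Na + Nd)                                        # p-side extent
    Emax = q * Nd * xn / eps                                       # peak field at the junction
    return Vbi, W, xn, xp, Emax

print(depletion(Na=1e23, Nd=1e22))   # 1e17 and 1e16 cm^-3, expressed per m^3
```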
The nonlinear Poisson equation encountered in semiconductor device simulation is discretized by the mixed finite element method. The limit system is governed by the classical drift-diffusion model. In order to simplify the numerical investigation of carrier transport in nanodevices without jeopardizing the rigor of a full quantum-mechanical treatment, we have exploited an existing variational principle to solve self-consistently Poisson's equation and Schrödinger's equation, as well as an appropriate transport equation, within the scope of the generalized local density approximation (GLDA). In macroscopic semiconductor device modeling, Poisson's equation and the continuity equations play a fundamental role. The Navier-Stokes-Poisson system is used to describe the motion of a compressible, viscous, isotropic Newtonian fluid in semiconductor devices [5, 12] or in plasmas [12, 21]. Here δ_t and δ_x denote finite differences in time and space, respectively; the specific form of these operators determines the numerical method used. The nonlinear partial differential equations of the model consist of the steady-state … When the governing equations are strongly coupled (e.g., in transient simulations), the Newton-Raphson method is typically required, although at a cost of … Poisson's equation and steady-state heat transfer: additional simplifications of the general form of the heat equation are often possible. The steady-state behaviour of the electron distribution function is … The program is quite user-friendly and runs on a Macintosh, Linux, or PC. Like much previous work (Section 2), we approach the problem of surface reconstruction using an implicit-function framework. The Poisson-Nernst-Planck equations are the basic continuum model of ionic permeation and semiconductor physics. Here ε_0 is the permittivity of free space, ε_s is the permittivity of the semiconductor, and −x_p and x_n are the edges of the depletion region. (Christopher R. Anderson, Department of Mathematics, University of California, Los Angeles.)
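Several of the snippets above lean on the Scharfetter-Gummel discretization of the continuity equations. Here is a minimal sketch of the corresponding edge current between two mesh nodes; the function names and the setup are illustrative assumptions, and sign conventions vary between references.

```python
import numpy as np

def bernoulli(x):
    """B(x) = x / (exp(x) - 1), with a series fallback near x = 0."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-10
    # Inner where() keeps expm1 away from 0 so we never divide by zero.
    return np.where(small, 1.0 - x / 2.0,
                    x / np.expm1(np.where(small, 1.0, x)))

def sg_electron_current(n_i, n_ip1, psi_i, psi_ip1, D, dx,
                        q=1.602e-19, Vt=0.0259):
    """Scharfetter-Gummel electron current density between nodes i and i+1:
    J = (q*D/dx) * [ n_{i+1}*B(d) - n_i*B(-d) ],  d = (psi_{i+1} - psi_i)/Vt."""
    d = (psi_ip1 - psi_i) / Vt       # normalized potential drop across the cell
    return (q * D / dx) * (n_ip1 * bernoulli(d) - n_i * bernoulli(-d))
```

In a full drift-diffusion solver this flux replaces the naive centered difference, which is what keeps the scheme stable even when the potential drop per cell is large.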
Numerical simulations helped to plan the experiments. They are used to solve for the electrical performance of … Although the Poisson-Nernst-Planck equations were applied to … This is done by solving Poisson's equation. The SHE-Poisson system describes carrier transport in semiconductors with a self-induced electrostatic potential. Semiconductor Devices (2014 lecture course): metal contact on a semiconductor base; the 1D Poisson equation. (1b) Here, ξ(z) is the normalized wave function for the lowest energy level E_0, ε_s is the dielectric constant of the semiconductor, and V(z) and N_0 are the potential and the total number of electrons in the accumulation layer. Uniqueness theorem for Poisson's equation: consider Poisson's equation ∇²Φ = σ(x) in a volume V with surface S, subject to so-called Dirichlet boundary conditions Φ(x) = f(x) on S, where f is a given function defined on the boundary. Poisson's equation: this next relation comes from electrostatics and follows from Maxwell's equations of electromagnetism. The Poisson equation is not a basic equation, but follows directly from the Maxwell equations if all time derivatives are zero, i.e., in electrostatics. Let us now solve Poisson's equation in one dimension, with mixed boundary conditions, using the finite difference technique discussed above. It is focussed on a presentation of a hierarchy of models ranging from kinetic quantum transport equations to the classical drift-diffusion equations. There can also be sources S(x) of solute (for example, where solute is piped in or where the solute is generated by a chemical reaction), or of heat (e.g., …). However, when the noise present in the measured data is high, no difference in the reconstructions can be observed. Moreover, the equation appears in numerical splitting strategies for more complicated systems of PDEs. The Poisson equation is a widely accepted model for electrostatic analysis. Please read the PDF file supplied for further instructions on how to use this code.
In this note, we present a framework for the large-time behavior of general uniformly bounded weak entropy solutions to the Cauchy problem for the Euler-Poisson system of semiconductor devices. A useful approach to the calculation of electric potentials is to relate the potential to the charge density which gives rise to it. The fundamentals of semiconductors are typically found in textbooks discussing quantum mechanics, electromagnetics, solid-state physics, and statistical thermodynamics. Electrons are assumed to occupy the lowest miniband, exchange of lateral momentum is ignored, and the electron-electron interaction is treated in the Hartree approximation. These models can be used to model most semiconductor devices. With full ionization, n = N_d − N_a on the n-side and p = N_a − N_d on the p-side. EE 436, band bending: we can re-write Poisson's equation using the band-bending parameter ψ_n,
$$\frac{d^{2}\psi_{n}(x)}{dx^{2}} = \frac{q\,\rho(x)}{\varepsilon_{s}kT}.$$
Inserting ρ(x) for a uniformly doped n-type semiconductor gives the Poisson-Boltzmann equation for a uniformly doped n-type semiconductor. The main difficulty of such computations arises from the very high dimensionality of the model, making it necessary to use relatively coarse meshes and hence requiring the numerical solver to … These equations arise in electrochemical energy devices (e.g., lithium-ion (Li-ion) batteries and fuel cells) and in biological membrane channels [6–13]. The Journal of Chemical Physics 2003, 119 (21), 11035.
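Spelling out that last step (standard textbook algebra; the sign convention depends on how ψ_n is defined, so treat this as one consistent choice rather than the source's):

$$\rho(x) = q\left(N_D - n(x)\right), \qquad n(x) = N_D\,e^{-\psi_{n}(x)},$$

so that

$$\frac{d^{2}\psi_{n}}{dx^{2}} = \frac{q\,\rho(x)}{\varepsilon_{s}kT} = \frac{q^{2}N_{D}}{\varepsilon_{s}kT}\left(1 - e^{-\psi_{n}}\right),$$

which is the Poisson-Boltzmann equation for the uniformly doped n-type semiconductor mentioned above.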
|
2020-10-26 07:49:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6697452664375305, "perplexity": 1033.724093655177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107890586.57/warc/CC-MAIN-20201026061044-20201026091044-00054.warc.gz"}
|
https://chemistry.stackexchange.com/questions/680/does-borax-do-anything-more-for-boosting-detergent-than-adding-active-oxygen-w
|
Does borax do anything more for “boosting” detergent than adding active oxygen would?
Borax, $\ce{Na2B4O7}$, is often marketed as a "laundry booster" under the brand "20 Mule Team Borax". The unit crystal of borax can be seen below.
Other laundry products in the past have added sodium perborate (which can be produced from borax, hydrogen peroxide, and sodium hydroxide) as a bleaching additive. The "peroxide" portion can readily be seen below.
Through what mechanism does the plain borax "boost" the detergent? Would it be more effective to throw some weak peroxide and a small amount of base into the laundry to form the perborate instead of using plain borax?
|
2020-02-21 07:50:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5186330080032349, "perplexity": 8871.673132661661}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145443.63/warc/CC-MAIN-20200221045555-20200221075555-00486.warc.gz"}
|
https://www.jobilize.com/precalculus/section/algebraic-conic-sections-in-polar-coordinates-by-openstax?qcr=www.quizover.com
|
# 12.5 Conic sections in polar coordinates (Page 4/8)
## Converting a conic in polar form to rectangular form
Convert the conic $\text{\hspace{0.17em}}r=\frac{1}{5-5\mathrm{sin}\text{\hspace{0.17em}}\theta }$ to rectangular form.
We will rearrange the formula to use the identities $r^{2}=x^{2}+y^{2}$, $x=r\mathrm{cos}\,\theta$, and $y=r\mathrm{sin}\,\theta$, as worked out below.
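Filling in the algebra for this example (standard steps, stated here for completeness):

$$r=\frac{1}{5-5\,\mathrm{sin}\,\theta}\;\Rightarrow\; 5r-5r\,\mathrm{sin}\,\theta=1\;\Rightarrow\; 5r=1+5y\;\Rightarrow\; 25\left(x^{2}+y^{2}\right)=\left(1+5y\right)^{2}\;\Rightarrow\; 25x^{2}-10y-1=0,$$

so the graph is a parabola, consistent with $e=1$.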
Convert the conic to rectangular form.
$4-8x+3{x}^{2}-{y}^{2}=0$
## Key concepts
• Any conic may be determined by a single focus, the corresponding eccentricity, and the directrix. We can also define a conic in terms of a fixed point, the focus $\text{\hspace{0.17em}}P\left(r,\theta \right)\text{\hspace{0.17em}}$ at the pole, and a line, the directrix, which is perpendicular to the polar axis.
• A conic is the set of all points $P\left(r,\theta \right)$ such that $e=\frac{PF}{PD},$ where the eccentricity $e$ is a positive real number. Each conic may be written in terms of its polar equation. See [link] .
• The polar equations of conics can be graphed. See [link] , [link] , and [link] .
• Conics can be defined in terms of a focus, a directrix, and eccentricity. See [link] and [link] .
• We can use the identities $r^{2}=x^{2}+y^{2}$, $x=r\mathrm{cos}\,\theta$, and $y=r\mathrm{sin}\,\theta$ to convert the equation for a conic from polar to rectangular form; the standard polar forms are collected below. See [link] .
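For reference, the standard polar equations of a conic with one focus at the pole and directrix at distance $p$ (a standard result, added here for completeness):

$$r=\frac{ep}{1\pm e\,\mathrm{cos}\,\theta }\quad\text{(directrix vertical, }x=\pm p\text{)},\qquad r=\frac{ep}{1\pm e\,\mathrm{sin}\,\theta }\quad\text{(directrix horizontal, }y=\pm p\text{)}.$$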
## Verbal
Explain how eccentricity determines which conic section is given.
If eccentricity is less than 1, it is an ellipse. If eccentricity is equal to 1, it is a parabola. If eccentricity is greater than 1, it is a hyperbola.
If a conic section is written as a polar equation, what must be true of the denominator?
If a conic section is written as a polar equation, and the denominator involves what conclusion can be drawn about the directrix?
The directrix will be parallel to the polar axis.
If the directrix of a conic section is perpendicular to the polar axis, what do we know about the equation of the graph?
What do we know about the focus/foci of a conic section if it is written as a polar equation?
One of the foci will be located at the origin.
## Algebraic
For the following exercises, identify the conic with a focus at the origin, and then give the directrix and eccentricity.
Parabola with $\text{\hspace{0.17em}}e=1\text{\hspace{0.17em}}$ and directrix $\text{\hspace{0.17em}}\frac{3}{4}\text{\hspace{0.17em}}$ units below the pole.
Hyperbola with $\text{\hspace{0.17em}}e=2\text{\hspace{0.17em}}$ and directrix $\text{\hspace{0.17em}}\frac{5}{2}\text{\hspace{0.17em}}$ units above the pole.
Parabola with $\text{\hspace{0.17em}}e=1\text{\hspace{0.17em}}$ and directrix $\text{\hspace{0.17em}}\frac{3}{10}\text{\hspace{0.17em}}$ units to the right of the pole.
Ellipse with $\text{\hspace{0.17em}}e=\frac{2}{7}\text{\hspace{0.17em}}$ and directrix $\text{\hspace{0.17em}}2\text{\hspace{0.17em}}$ units to the right of the pole.
Hyperbola with $\text{\hspace{0.17em}}e=\frac{5}{3}\text{\hspace{0.17em}}$ and directrix $\text{\hspace{0.17em}}\frac{11}{5}\text{\hspace{0.17em}}$ units above the pole.
Hyperbola with $\text{\hspace{0.17em}}e=\frac{8}{7}\text{\hspace{0.17em}}$ and directrix $\text{\hspace{0.17em}}\frac{7}{8}\text{\hspace{0.17em}}$ units to the right of the pole.
Evaluate cos 45°/(sec 30° + cosec 30°).
cos 45° = 1/√2, sec 30° = 2/√3, cosec 30° = 2. So the expression is (1/√2)/(2/√3 + 2) = (1/√2)·(√3/(2 + 2√3)) = √3/(2√2 + 2√6) ……(1). Rationalizing: √3(2√6 − 2√2)/((2√6 + 2√2)(2√6 − 2√2)) = 2√3(√6 − √2)/(24 − 8) = 2√3(√6 − √2)/16 = (√18 − √6)/8 = (3√2 − √6)/8 ……(2)
exercise 1.2 solution b....isnt it lacking
I dnt get dis work well
what is one-to-one function
what is the procedure in solving quadratic equetion at least 6?
Almighty formula or by factorization...or by graphical analysis
Damian
I need to learn this trigonometry from A level.. can anyone help here?
yes am hia
Miiro
tanh 2x = 2 tanh x / (1 + tanh²x)
[cos(a+b) + cos(a−b)] / [sin(a+b) − sin(a−b)] = cot b ... please, someone help me with this. Thanks in anticipation.
f(x) = x/(x+2); given g(x) = (1+2x)/(1−x), show that g(f(x)) = (1+2x)/3
proof
AUSTINE
sebd me some questions about anything ill solve for yall
cos(a+b)+cos(a-b)/sin(a+b)-sin(a-b)= cotb
favour
how do I solve x² = 2x + 8 by factorization?
x=2x+8 x-2x=2x+8-2x x-2x=8 -x=8 -x/-1=8/-1 x=-8 prove: if x=-8 -8=2(-8)+8 -8=-16+8 -8=-8 (PROVEN)
Manifoldee
x=2x+8
Manifoldee
×=2x-8 minus both sides by 2x
Manifoldee
so, x-2x=2x+8-2x
Manifoldee
then cancel out 2x and -2x, cuz 2x-2x is obviously zero
Manifoldee
so it would be like this: x-2x=8
Manifoldee
then we all know that beside the variable is a number (1): (1)x-2x=8
Manifoldee
so we will going to minus that 1-2=-1
Manifoldee
so it would be -x=8
Manifoldee
so next step is to cancel out negative number beside x so we get positive x
Manifoldee
so by doing it you need to divide both side by -1 so it would be like this: (-1x/-1)=(8/-1)
Manifoldee
so -1/-1=1
Manifoldee
so x=-8
Manifoldee
Manifoldee
so we should prove it
Manifoldee
x=2x+8 x-2x=8 -x=8 x=-8 by mantu from India
mantu
lol i just saw its x²
Manifoldee
x²=2x-8 x²-2x=8 -x²=8 x²=-8 square root(x²)=square root(-8) x=sq. root(-8)
Manifoldee
I mean x²=2x+8 by factorization method
Kristof
I think x=-2 or x=4
Kristof
x= 2x+8 ×=8-2x - 2x + x = 8 - x = 8 both sides divided - 1 -×/-1 = 8/-1 × = - 8 //// from somalia
Mohamed
Find the possible value of 8.5 using moivre's theorem
which of these functions is not uniformly continuous on (0, 1)? sin x
|
2020-03-31 19:14:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 17, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8492009043693542, "perplexity": 1941.3475934615292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370503664.38/warc/CC-MAIN-20200331181930-20200331211930-00266.warc.gz"}
|
https://www.physicsforums.com/threads/differentiability-of-a-function-question-on-bounding.861346/
|
# Differentiability of a function -- question on bounding
Tags:
1. Mar 9, 2016
### cacofolius
1. The problem statement, all variables and given/known data
I need to see if the function defined as
$f(x,y) = \left\{ \begin{array}{lr} \frac{xy^2}{x^2 + y^2} & (x,y)\neq{}(0,0)\\ 0 & (x,y)=(0,0) \end{array} \right.$
is differentiable at (0,0)
2. Relevant equations
A function is differentiable at a point, if It can be approximated at that point by a linear transformation,
$\lim_{(u, v) \rightarrow (0, 0)} \frac {f(u, v) - df_{(0, 0)}(u, v) - f(0, 0)} {||(u, v)||} =0$
Useful bounds:
$|u|\leq ||(u, v)||; |v|\leq ||(u, v)||$
$|u|^2=u^2\leq ||(u, v)||^2= (\sqrt{u^2+v^2})^2 = (u^2+v^2)$
3. The attempt at a solution
Both partial derivatives are zero at the point, so all I have left is
$\lim_{(u, v) \rightarrow (0, 0)} \frac {\frac{uv^2}{u^2 + v^2}} {||(u, v)||} =\lim_{(u, v) \rightarrow (0, 0)} \frac {uv^2}{(u^2 + v^2)||(u, v)||}$
Now the bounds:
$\frac {uv^2}{(u^2 + v^2)||(u, v)||}\leq\frac {u||(u, v)||^2}{(u^2 + v^2)||(u, v)||} = \frac {u||(u, v)||}{(u^2 + v^2)} \leq \frac {||(u, v)||^2}{(u^2 + v^2)}= \frac{||(u, v)||^2}{||(u, v)||^2}=1$
Does this means that the function is not differentiable at (0,0), or did I make a mistake along the way ? Thanks in advance for the help.
2. Mar 9, 2016
### stevendaryl
Staff Emeritus
In polar coordinates, the function looks a lot better-behaved:
Letting $x=r cos(\theta)$, $y = r sin(\theta)$, then your function in polar coordinates becomes $r sin(\theta) cos(\theta)$, which seems very innocuous.
3. Mar 9, 2016
### cacofolius
Thanks for the help!
4. Mar 9, 2016
### Ray Vickson
That should be $r \cos(\theta) \sin^2(\theta)$.
5. Mar 9, 2016
### Samy_A
How does showing that $\frac {uv^2}{(u^2 + v^2)||(u, v)||} \leq 1$ prove that the function is not differentiable in (0,0)?
Use @stevendaryl 's suggestion (as corrected by @Ray Vickson ) to show that the limit isn't 0 or doesn't exist.
6. Mar 9, 2016
### cacofolius
You're right, Samy_A, it doesn't, and with the polar coordinates I have a function which is the product of two functions $H(r,\theta)=F(r)G(\theta)$ and while $\lim_{(r) \rightarrow (0)} F(r) = 0$, $G(\theta)$ is bounded in [0,2pi], therefore the original function is differentiable in (0,0). Thank you, everybody.
7. Mar 9, 2016
### Ray Vickson
Isn't the definition of differentiability at (0,0) that there exist constants $a$ and $b$ such that
$$f(x,y) = f(0,0) + a x + b y + o\left(\sqrt{x^2+y^2} \right)?$$
That is, we have $f(x,y) \approx f(0,0) + a x + by$ "to first order in small $|x|,|y|$".
What would that say in polar coordinates? Does your function satisfy that property?
8. Mar 10, 2016
### Samy_A
You had to prove that $\displaystyle \lim_{(u, v) \rightarrow (0, 0)} \frac {uv^2}{(u^2 + v^2)||(u, v)||}=0$.
As has been suggested, switching to polar coordinates makes life easier here.
If the limit is 0, your function is differentiable at (0,0). (That is assuming the partial derivatives at (0,0) are both 0. You stated this without proof, but I think it is indeed correct.)
But is the limit 0? Does the limit even exist?
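For the record, both partial derivatives at the origin do vanish, directly from the definition: since $f(h,0) = 0$ and $f(0,h) = 0$ for all $h \neq 0$,
$$f_x(0,0) = \lim_{h\to 0}\frac{f(h,0)-f(0,0)}{h} = 0, \qquad f_y(0,0) = \lim_{h\to 0}\frac{f(0,h)-f(0,0)}{h} = 0.$$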
9. Mar 11, 2016
### cacofolius
I'm sorry, that's what happens when I hurry. I forgot to divide by $||(u, v)||$ in the definition, which gives me an extra $r$ in the denominator:
$\frac {r^3 \cos(\theta) \sin^2(\theta)} {r^3} = \cos(\theta) \sin^2(\theta),$
which means the limit doesn't exist, and therefore the function is not differentiable at (0,0). Thanks again for your patience.
10. Mar 11, 2016
### Ray Vickson
No: the limit DOES exist along any direction, but is not linear in $\sin(\theta)$ and $\cos(\theta)$. That is why the function is not differentiable: its directional derivative is not a linear function of the direction.
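Concretely, along the unit direction $(\cos\theta, \sin\theta)$ the directional derivative at the origin is
$$D_\theta f(0,0) = \lim_{t\to 0}\frac{f(t\cos\theta,\, t\sin\theta) - f(0,0)}{t} = \lim_{t\to 0}\frac{t\cos\theta\sin^2\theta}{t} = \cos\theta\sin^2\theta,$$
which exists for every $\theta$ but is cubic, not linear, in $(\cos\theta, \sin\theta)$.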
11. Mar 18, 2016
### cacofolius
Hi Ray Vickson, I was trying to use the theorem that states that if I can decompose the function as $F(r,\theta)=H(r)G(\theta)$, where $\lim_{r \to 0} H(r) = 0$ and $G(\theta)$ is bounded on $[0, 2\pi]$, then the original function is differentiable. I see now that since the $r$ part cancels, all I am left with is
$F(\theta)=\cos(\theta)\sin^2(\theta)$, and therefore I cannot use this theorem.
Do you mean that it is not linear in the variable $\theta$? Or, for example, if I had gotten $F(\theta)=\sin(\theta)$, could I say the function is differentiable because the result is linear in $\sin(\theta)$? (Would this variable come from the transformation to polar coordinates in the definition?)
And another question: is there some theorem or text you can direct me to which states that if the directional derivative is not a linear function of the direction, then the function is not differentiable? I didn't know that, and my notes don't mention the case where the $r$ part is gone. Thank you for your patience.
12. Mar 18, 2016
### Ray Vickson
NO, as in #10, I said it is not linear in $\cos(\theta)$ and $\sin(\theta)$. Look again at #7, where it says that we need $f(x,y) \approx f(0,0) + a x + by$ to first order in small $x,y$; now put $x = r \cos(\theta)$ and $y = r \sin(\theta)$. You do not have $f \approx a r \cos(\theta) + b r \sin(\theta)$ for constants $a,b$, so you do not satisfy that differentiability criterion.
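A quick check with particular angles makes this concrete: if $\cos\theta\sin^2\theta = a\cos\theta + b\sin\theta$ held for constants $a, b$, then $\theta = 0$ would force $a = 0$ and $\theta = \pi/2$ would force $b = 0$, yet at $\theta = \pi/4$ the left side is $\frac{\sqrt{2}}{2}\cdot\frac{1}{2} = \frac{\sqrt{2}}{4} \neq 0$.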
I do not know how your textbook/notes/instructor defines the concept of "differentiability" in the multivariate case (which is trickier and more stringent than in the univariate case), so I don't know how to answer your general question in that regard.
13. Mar 19, 2016
### cacofolius
Thanks Ray, I understand now what you meant.
https://collegephysicsanswers.com/openstax-solutions/two-manned-satellites-approaching-one-another-relative-speed-0250-ms-intending
Question
Two manned satellites are approaching one another at a relative speed of 0.250 m/s, intending to dock. The first has a mass of $4.00 \times 10^3 \textrm{ kg}$, and the second a mass of $7.50 \times 10^3 \textrm{ kg}$. (a) Calculate the final velocity (after docking) by using the frame of reference in which the first satellite was originally at rest. (b) What is the loss of kinetic energy in this inelastic collision? (c) Repeat both parts by using the frame of reference in which the second satellite was originally at rest. Explain why the change in velocity is different in the two frames, whereas the change in kinetic energy is the same in both.
a) $-0.163 \textrm{ m/s}$
b) $-81.6 \textrm{ J}$
c) $0.0870 \textrm{ m/s}$, $-81.5 \textrm{ J}$. A change in reference frame for the same event doesn't change the kinetic energy lost during the event.
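A minimal Python sketch of the calculation (the function and variable names are illustrative, not taken from the solution):

```python
# Perfectly inelastic docking: momentum is conserved, kinetic energy is not.
m1, m2 = 4.00e3, 7.50e3   # satellite masses, kg
v_rel = 0.250             # relative approach speed, m/s

def dock(v1, v2):
    """Final common velocity and kinetic-energy change for a
    perfectly inelastic collision of m1 (at v1) and m2 (at v2)."""
    vf = (m1 * v1 + m2 * v2) / (m1 + m2)   # conservation of momentum
    ke_change = 0.5 * (m1 + m2) * vf**2 - (0.5 * m1 * v1**2 + 0.5 * m2 * v2**2)
    return vf, ke_change

# (a), (b): frame in which satellite 1 is initially at rest.
print(dock(0.0, -v_rel))   # ~ (-0.163 m/s, -81.5 J)

# (c): frame in which satellite 2 is initially at rest.
print(dock(v_rel, 0.0))    # ~ (0.0870 m/s, -81.5 J)
```

Carrying the rounded $-0.163 \textrm{ m/s}$ from part (a) through the energy calculation reproduces the quoted $-81.6 \textrm{ J}$; at full precision the loss is $-81.5 \textrm{ J}$ in both frames, as it must be.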