https://stacks.math.columbia.edu/tag/09F0

Lemma 15.115.11. Let $A$ be a discrete valuation ring with fraction field $K$ of characteristic $p > 0$. Let $\xi \in K$. Let $L$ be an extension of $K$ obtained by adjoining a root of $z^p - z = \xi$. Then $L/K$ is Galois and one of the following happens:
1. $L = K$,
2. $L/K$ is unramified with respect to $A$ of degree $p$,
3. $L/K$ is totally ramified with respect to $A$ with ramification index $p$, and
4. the integral closure $B$ of $A$ in $L$ is a discrete valuation ring, $A \subset B$ is weakly unramified, and $A \to B$ induces a purely inseparable residue field extension of degree $p$.
Let $\pi$ be a uniformizer of $A$. We have the following implications:
1. If $\xi \in A$, then we are in case (1) or (2).
2. If $\xi = \pi ^{-n}a$ where $n > 0$ is not divisible by $p$ and $a$ is a unit in $A$, then we are in case (3).
3. If $\xi = \pi ^{-n} a$ where $n > 0$ is divisible by $p$ and the image of $a$ in $\kappa _ A$ is not a $p$th power, then we are in case (4).
Proof. The extension is Galois of order dividing $p$ by the discussion in Fields, Section 9.25. It immediately follows from the discussion in Section 15.112 that we are in one of the cases (1)–(4) listed in the lemma.
Case (A). Here $\xi \in A$, so $A \to A[x]/(x^p - x - \xi)$ is a finite étale ring extension (the derivative of $x^p - x - \xi$ is $-1$, a unit). Hence we are in case (1) or (2).
Case (B). Write $\xi = \pi^{-n}a$ where $p$ does not divide $n$. Let $B \subset L$ be the integral closure of $A$ in $L$. If $C = B_{\mathfrak m}$ for some maximal ideal $\mathfrak m$, then $z$ has negative valuation in $C$, so $\operatorname{ord}_C(z^p) < \operatorname{ord}_C(z)$ and hence $p \operatorname{ord}_C(z) = \operatorname{ord}_C(z^p - z) = -n \operatorname{ord}_C(\pi)$. Since $p$ does not divide $n$, the ramification index of $A \subset C$ is divisible by $p$. It follows that the ramification index equals $p$ and that $B = C$.
Case (C). Write $\xi = \pi^{-n}a$ with $p \mid n$ and set $k = n/p$. Multiplying $z^p - z = \pi^{-n}a$ through by $\pi^n = \pi^{pk}$, we can rewrite the equation as
$(\pi ^ kz)^ p - \pi ^{n - k} (\pi ^ kz) = a$
Since $A[y]/(y^p - \pi^{n-k}y - a)$ is a discrete valuation ring weakly unramified over $A$ (its reduction modulo $\pi$ is $\kappa_A[y]/(y^p - \bar a)$, a field because $\bar a$ is not a $p$th power in $\kappa_A$), the lemma follows. $\square$
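As a quick sanity check of the Artin–Schreier dichotomy used here ($z^p - z - \xi$ over a field of characteristic $p$ either splits completely or has no root in the base field, since the roots differ by elements of $\mathbf{F}_p$), one can verify the degenerate case over the prime field itself. This is only an illustrative sketch, not part of the lemma:

```python
# Over GF(p), z^p - z vanishes identically (Fermat's little theorem),
# so z^p - z - xi has either p roots (xi = 0) or none (xi != 0):
# if z0 is one root, then z0 + c is a root for every c in GF(p).
p = 7
for xi in range(p):
    roots = [z for z in range(p) if (pow(z, p, p) - z - xi) % p == 0]
    assert len(roots) in (0, p)
    assert len(roots) == (p if xi == 0 else 0)
print("checked Artin-Schreier root counts over GF(%d)" % p)
```

To see both split and irreducible cases nontrivially one would work over an extension such as $\mathbf{F}_{p^2}$; over $\mathbf{F}_p$ itself the polynomial map $z \mapsto z^p - z$ is identically zero.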
https://yutsumura.com/jewelry-company-quality-test-failure-probability/

# Jewelry Company Quality Test Failure Probability
## Problem 731
A jewelry company requires its products to pass three tests before they are sold in stores. For gold rings, 90% pass the first test, 85% of those pass the second test, and 80% of those pass the third test. If a product fails any test, it is thrown away and does not take the subsequent tests. Given that a gold ring failed one of the tests, what is the probability that it failed the second test?
## Solution.
Let $F$ be the event that a gold ring fails one of the three tests. Let $F_2$ be the event that it fails the second test. Then what we need to compute is the conditional probability
$P(F_2 \mid F) = \frac{P(F_2 \cap F)}{P(F)}.$
The numerator is
$P(F_2 \cap F) = P(F_2) = 0.9 \cdot 0.15.$ (A gold ring passes the first test with probability $0.9$ and fails the second test with probability $1-0.85=0.15$.)
The complement $F^c$ of $F$ is the event that a gold ring passes all the tests. Thus
$P(F) = 1 - P(F^c) = 1 - 0.9 \cdot 0.85 \cdot 0.8.$ It follows that the desired probability is
\begin{align*}
P(F_2 \mid F) &= \frac{0.9 \cdot 0.15}{1 - 0.9 \cdot 0.85 \cdot 0.8} = \frac{135}{388} \approx 0.348.
\end{align*}
Therefore, given that a gold ring failed to pass one of the tests, the probability that it failed the second test is about 34.8%.
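As a sanity check, the computation can be reproduced in exact arithmetic; a quick sketch where the fractions mirror the percentages given above:

```python
from fractions import Fraction

p1, p2, p3 = Fraction(90, 100), Fraction(85, 100), Fraction(80, 100)

# P(F2): the ring passes test 1, then fails test 2
p_f2 = p1 * (1 - p2)

# P(F): fails at least one test = 1 - P(passes all three)
p_f = 1 - p1 * p2 * p3

answer = p_f2 / p_f
print(answer)         # 135/388
print(float(answer))  # ≈ 0.3479
```

Using `Fraction` avoids any floating-point rounding in the intermediate products, so the result 135/388 comes out exactly.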
https://math.stackexchange.com/questions/568903/prove-or-disprove-there-exists-a-group-g-and-a-normal-subgroup-n-such-that

# Prove or disprove: There exists a group $G$ and a normal subgroup $N$ such that $G$ is non-abelian, but both $N$ and $G/N$ are abelian.
Prove or disprove: There exists a group $G$ and a normal subgroup $N$ such that $G$ is non-abelian, but both $N$ and $G/N$ are abelian.
Can anyone give me some hint on this question, please? What theorem(s) in abstract algebra is(are) related to this question, please? Thank you!
• could you explain what are your thoughts on this question...
– user87543
Nov 16, 2013 at 4:52
• One way is to just start writing down non-abelian groups and checking if this holds. In fact, try writing down $S_3$....
– user61527
Nov 16, 2013 at 4:53
• Let's take $S_3$ as an example. Clearly $S_3$ is not abelian, and $A_3$ is a normal subgroup. It is true that $S_3/A_3$ is isomorphic to $\{1, -1\}$ and therefore $G/N$ is abelian. Also $A_3$ is abelian, I think. So at least I get an example for this; that is, we cannot disprove this statement. Nov 16, 2013 at 5:02
• Another example is $Q_8=\{1,-1,i,-i,j,-j,k,-k\}$ (the elementary quaternions) where all proper subgroups are normal and abelian, and also all quotients by proper subgroups are abelian. Nov 16, 2013 at 11:47
• You could look up the concept of a semidirect product. Nov 16, 2013 at 11:50
Hint: If $H$ has prime order $p$ it is abelian. Moreover, if $[G:H] = 2$ then $H$ is normal and $G/H$ is also abelian.
So look for a non-abelian group of order $2p$ where $p$ is prime....
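Following the hint, the smallest case $S_3$ (non-abelian of order $2 \cdot 3$) can be checked by brute force; a small sketch with permutations of $\{0,1,2\}$ represented as tuples:

```python
from itertools import permutations

def compose(p, q):
    """(p ∘ q)(i) = p[q[i]] for permutations of {0, 1, 2} as tuples."""
    return tuple(p[i] for i in q)

def sign(p):
    """Sign of a permutation via its inversion count."""
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s
    return s

S3 = list(permutations(range(3)))

# S3 is non-abelian: some pair of elements fails to commute
assert any(compose(p, q) != compose(q, p) for p in S3 for q in S3)

# A3 = even permutations (identity plus the two 3-cycles); it is abelian
A3 = [p for p in S3 if sign(p) == 1]
assert all(compose(p, q) == compose(q, p) for p in A3 for q in A3)

# [S3 : A3] = 2, so A3 is normal and S3/A3 ≅ Z/2 is abelian
assert len(S3) // len(A3) == 2
print("S3 / A3 verifies the claim")
```

The same brute-force check works for any dihedral group of order $2p$, which is the family the hint points at.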
https://hermetic.com/beastbay/1016477915/index
The Lost Teachings of Zarathustra: Part 9
Posted by Marc Cohen on March 18, 2002 @ 10:58 AM
from the infernal-combustion dept.
Part: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
By this point, those who had been following Our story were too scared of becoming one more dupe of Zarathustra's parables to interject anything. But then one man arose from the muddle, and said what was on everyone's mind.
“The title of that section was 'The Will to Power,' and yet it has nothing to do with the Will to Power.”
Zarathustra chuckled:
“Those who have Will, yet have no Power, utilizeth not the Will to Power.
“Those who have Power, yet no Will, utilizeth not the Will to Power. For –
“A Dictionary and a Dance, both go into the Fire of Chance
and that which remains transcendeth everyday Chains
Music has a language, with which we create Ecstasy
So with madness, not rules but propensity; creativity
“Hey!” shouted one of the muddle, “Did you all hear that? Zarathustra says that we should throw Dictionaries and Dancers into fire!”
“Jawhol!” bellowed another. “Let us burn books unto Zarathustra!” The muddle broke their hitherto silence and a frenzied, rowdy shouting of perversities attributed “unto Zarathustra!” continued. Zarathustra re-checked the luminaries, and saw that he had been correct all along – he had come too soon. On the rare occasion that he was listened to, it appeared it had been better when his words had fallen on deaf ears.
As Zarathustra retreated towards his Cave in a somber mood, he could hear the muddle chanting, “Hail der Ubermensch!” and “Deutschland uber Alles!” Soon was Zarathustra being portrayed in Anti-Semitic literature, perverting grossly his own teachings.
Eternally careless of such simian affairs, Dionysus lay in wait to crush the body of the beloved so that the blood therefrom might revive the seeds on Earth of all that the beloved hath sewn. But Dionysus would have to wait.
The Fine Print: The following comments are owned by whoever posted them.
**Re: The Lost Teachings of Zarathustra: Part 9**
by <iporshu> on Monday March 18, @08:46PM
I just want to tell you that reading your work sends chills up my spine,you are truly gifted. I love the series!
iporshu
**Re: The Lost Teachings of Zarathustra: Part 9**
by Marc Cohen on Wednesday March 20, @01:01AM

93

Thanks for the compliments

93 93/93

Frater Zarathustra!
**Re: The Lost Teachings of Zarathustra: Part 9**
by <IORasputen> on Wednesday March 20, @12:18AM
93, 93/93
Here Here, Vevot Hazad, wonderful tale, onward and upward.
Namaste,
Rasputen
**Re: The Lost Teachings of Zarathustra: Part 9**
by Marc Cohen on Wednesday March 20, @01:07AM

93

Rasputen, thanks for the props - I'll have to tell people that Rasputin is among my readers

93 93/93

Frater Zarathustra!

**Re: The Lost Teachings of Zarathustra: Part 9**
by an unnamed poster on Monday March 25, @03:43AM

CF

93

Not only Rasputin, also Akhnaton…

But, now on to my question:
From where, exactly, have those "Lost Teachings"
been re-discovered? Some east-coast basement
I guess?

Let's make a Zarathustra cd in june, Marc.
I'll play smoke on the water on the geetah,
and you scream those Lost Teachings over it.

Then we'll get on tour, make millions, and
get absolutely pissed.

;)

93 93/93
Fraternally,
Timo, being sorry for his sillyness

**Re: The Lost Teachings of Zarathustra: Part 9**
by Khu-en-aton on Saturday May 11, @04:51AM

(quotes the previous comment in full)
This is an official and authorized archive of The Beast Bay
Hosted by Hermetic.com
https://puzzling.stackexchange.com/questions/65108/popularity-within-a-target-audience/65123

# Popularity Within a Target Audience
This is a puzzle set with four component puzzles and a final meta.
# Creations of CSIRO
The Australian research institute CSIRO is notable for producing inventions such as gene shears, the insect repellent Aerogard and more.
Three clues resolve to multi-word answers. Combine some clues and answers to form a message.
??? → _ _ _ _ _ _ _. _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
### Clues
ACROSS
1. Turned and twisted molecule
3. Cuddle giant briefly
5. One horribly intrusive, ultimately pointless computer program
10. Volume that man returned after initially borrowing "Introduction to Hunting Something Enormous"
15. Close to water content of snowflakes identified
17. Number, one in Spanish beginning to look like a name
18. Dry fruit enthusiast
19. With mad love, start to snuggle furry animals
21. Erroneously issue grant for parts of sheet music
23. Fools around without, initially, cotton garment
24. Hear tavern owner do some gardening work
26. Figures Oscar and Echo left broken promises
28. Characters in funnier role-playing game
32. Rations mostly held back by comrade in a cold manner
33. Support one in commercial
35. Beat without trouble, escaped
38. Called out celebrity drummer
39. Group's new in-joke?
40. Looking back, charge in Oktoberfest order's more substantial
41. Anguishes after sibling opens up
42. Defeat heartlessly to make disappear
43. Hoist last of cable and rise into the air without it
44. Sally's oddly crafty
45. Army groups not, for instance, prideful?
46. You heard, in Garsdale, of unusually sweet but toxic substance
DOWN
1. Food coach tossed icing and last bit of eclair
2. Other accounts reconstructed last
3. Try to sell bird of prey
4. Use up most of Lego set
6. Watch unconcluded contest
7. Notes legal thing
8. School group's last to go
9. Start to sell our tart
11. Participates in replacing indium with silver in car parts
12. Eats nearly all of meat and potatoes dish
13. Muppet's expressing love for trees
14. New chest built for woodworking tool
16. Sent back ten thousand fish
20. Child to harbour love in the near future
22. Start to record one very loud musical phrase
24. American cookies couple dropped from the top for big birds
25. Earnings from game company with backing
26. Styled plait worn by one, hanging at the back!
27. See alien following yours truly
29. Fashion types quietly leaving crafts market
31. Address protecting a Russian river
32. Japanese-American businessman inside car at an intersection
34. Residue in arid region
36. Sounded instrument for story teller
37. Pulled out dagger with bloodlust at heart
39. Australian company starts to give information openly
# Lunarite
Sometimes simulator games seem like they're just making up item names.
# Casino Jackpot
We have a real high roller in the house! Will they be able to make the lucky spin?
Spin!
2. Zeroing button on a stopwatch
3. "By treason’s tooth bare-___ and canker-bit." :: King Lear
5. ___ Zenrai-zenji (19th century zen master)
You won $15.

Spin!

1. Spanish generator manufacturer acquired by Atlas Copco in 2011
2. Ancient Germanic alphabet characters
5. Relating to the cheek, anagram of ANGLE
7. Masculine Spanish word for "chalks" more commonly used in Mexico

You won $13.
Spin!
1. Biological things inherited from one's parents
3. Indian agricultural worker, anagram of A SINK
5. "The Tomorrow Show with ___ Undergaro" (podcast)
6. The Invincible Barbarian, per a 1982 Italian film
7. Hebrew dirges or laments (var.)
You won $15.

Spin!

1. French commune popular for sandstone bouldering, located roughly 80km from Nice
3. "The Dark Knight ___" (2012 Nolan film)
6. Penalties to pay, such as those from speeding
7. Step between "lather" and "repeat"

You won $14.
Spin!
1. Lazy ___ (turntable)
4. Percussionist for Russian band Theodor Bastard
5. Former name for sthène, an obsolete unit of force
7. Parts to ignite for firecrackers
You won $13.

Spin!

2. Political assistants
4. Actress Kendrick and ballerina Pavlova, say
7. Donates (to a charity)

You won $11.
Spin!
2. ___ Porcelain (French company allegedly founded in 1768)
5. "___ Strength" (1999 album by reggae artist Garnett Silk)
6. Bakemonogatari opening "___ Circulation"
You won $11.

Spin!

3. Catalysing enzyme in biology, anagram of SANER
4. Samoa's first female deputy prime minister ___ Naomi Mata'afa
5. Italian word for reams
6. Horizontal band on a heraldic shield (var.)

You won $12.
Spin!
4. "___ the Funeral Parlour" (BBC comedy series)
5. Inaugural Sydney FC captain Mark
7. Strengthen with stone, as an embankment
You won $11.

Spin!

4. "Happy Days" actor Williams
6. French for eldest, as in "ma sœur ___"
7. ___ Roses (American band with guitarist Slash)

You won $12.
Spin!
5. Stitched lines on a baseball
6. Uncertain estimate
7. Diving equipment inventor Augustus
You won $12.

# An Ad Hoc Akari

This Akari was put together as a cheap excuse for a puzzle.

# Meta: Popularity Within a Target Audience

With ever-advancing technology enabling more and more creators, it can be hard to keep up with the latest trends. What can we use to quickly determine what's worth checking out?

Note: There is technically an odd one out in this meta. Please use an appropriate alternative.

• Working on this here! – Deusovi Apr 29 '18 at 15:52

## 2 Answers

This was a joint solve by Deusovi, ffao, Gareth McCaughan, M Oehm, noedne, and Sid (and probably at least one or two other people that I forgot). For details of the solutions to each puzzle, see this spreadsheet.

# Creations of CSIRO

This is a fairly standard cryptic crossword, but with two gimmicks hinted at by the flavortext:

Since CSIRO makes insect repellent, several Across answers have to have insects removed from them before being entered into the grid. For instance, ANTIVIRUS is entered as IVIRUS, taking out an ANT. And SIGNATURES is entered as SIURES, taking out a GNAT.

Since CSIRO produced gene shears, several Down answers have had DNA codons removed from them. (A DNA codon is made up of three nucleotides, which are abbreviated as A, C, G, and T.) For instance, PIGTAIL is entered as PIIL, and DRAGGED is entered as DRED (taking out GTA and AGG respectively).

The finished grid looks like this: (Here, red marks words with a bug removed, and yellow marks words with a DNA codon removed.)

Now, we can extract a message: The first letters of the clues with bugs removed spell OVERLAY. The extracted DNA codons' corresponding amino acids have single-letter abbreviations: these read AFIVER from left to right. So we want to OVERLAY A FIVER.

Noticing that CSIRO is an Australian company, we overlay an Australian five-dollar note (which is partially transparent!). The completely uncovered letters spell GIVE SOL:VIRTUAL ORGANIZATION, so the answer is VIRTUAL ORGANIZATION.
# Lunarite

The first step to this puzzle is simply identifying things. For each item, the image and description represent two different things. Helpfully, the amount of each item is the length of the image's word, and the cost is the length of the answer's word. (For instance, the first is PATRON / HYPOCRITE.)

Once that's done, you may notice that all of the descriptions' answers end in "-ite". With the flavortext, this hints at minerals of some sort: turns out the images are all the names of minerals! Patronite, sampleite, chambersite...

Using the Mohs hardness scale numbers for the real minerals and indexing into the corresponding "-ite" words gives the phrase YOUTUBERS LIFE, the answer to this puzzle.

# Casino Jackpot

Answering these (horribly obscure) clues, we can start to figure out how they are entered: we can enter each set of clues into a 3x5 grid. Numbers 1 through 7 all indicate different ways the answers are input, reminiscent of a slot machine: 1 is the middle row, 2 and 3 are the top and bottom rows, 4 and 5 are v/^ shapes, and 6 and 7 are diagonals going from one corner to another.

Next, using the image: see that these grids are just segments of a bigger slot machine with 7 "symbols" (letters) on each reel. Finally, we notice the 7s in the image: we can rotate the reels to spell out SEVEN across the middle. This gives us the answer: BIG NAME GUEST.

# An Ad Hoc Akari

This puzzle is like a normal Akari, but with a twist: instead of counting the number of bulbs adjacent to them, each clue gives the sum of adjacent bulb numbers. We can find the unique working bulb placement.

To extract an answer, we look at the squares lit by two bulbs. Here, I've marked each of these with the sum of the bulbs that light them. Reading these numbers off alphanumerically gives the answer, FIVE AND DIME.

# Meta

The four answers are:

VIRTUAL ORGANIZATION
YOUTUBERS LIFE
BIG NAME GUEST
FIVE AND DIME

The first words of each answer spell "Virtual Youtubers Big Five".
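The payline entry scheme described for Casino Jackpot can be sketched explicitly. The cell coordinates below are my reconstruction from the written description (in particular, the two "diagonal" shapes cannot be literal diagonals on a 3-row grid, so their exact bends are an assumption), and the `enter` helper and sample word are illustrative only:

```python
# Paylines on a 3x5 slot-machine grid: one cell per column.
# Row 0 = top, row 1 = middle, row 2 = bottom.
PAYLINES = {
    1: [(1, c) for c in range(5)],                 # middle row
    2: [(0, c) for c in range(5)],                 # top row
    3: [(2, c) for c in range(5)],                 # bottom row
    4: [(0, 0), (1, 1), (2, 2), (1, 3), (0, 4)],   # "v" shape
    5: [(2, 0), (1, 1), (0, 2), (1, 3), (2, 4)],   # "^" shape
    6: [(0, 0), (0, 1), (1, 2), (2, 3), (2, 4)],   # corner to corner, downward
    7: [(2, 0), (2, 1), (1, 2), (0, 3), (0, 4)],   # corner to corner, upward
}

def enter(grid, payline, word):
    """Write a 5-letter answer into the grid along the given payline."""
    assert len(word) == 5
    for (r, c), ch in zip(PAYLINES[payline], word):
        grid[r][c] = ch

grid = [["."] * 5 for _ in range(3)]
enter(grid, 1, "GENES")  # e.g. a middle-row (payline 1) answer
print("\n".join("".join(row) for row in grid))
```

Each payline visits every column exactly once, which is why a clue numbered 1-7 pins down one letter per reel.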
The first part, "virtual youtubers", refers to a recent trend of people making Youtube videos using a 3D animated character as an alias and icon. The "big five" is a nickname for five of the most prominent ones.

In fact, these animals are associated with four of the Big Five. Dennou Shojo Siro is nicknamed "Dolphin", Kaguya Luna is nicknamed "Hamtaro" (a fictional hamster), Noja has fox ears, and Mirai Akari wears a butterfly ribbon. Also, each one of these names can be found in a puzzle title: Creations of CSIRO has "Siro", Lunarite has "Luna", Casino Jackpot has "Noja", and An Ad Hoc Akari has "Akari". This associates each puzzle with an animal icon.

The remainder of each puzzle title is the exact same length as the remainder of the corresponding answer. Matching them up, letter-by-letter, and taking the letters that are the same gives us 2 or 3 letters per puzzle. Putting them into the blanks gives us the final answer: A TRAINED AI (a pun off Kizuna Ai, the fifth and most prominent of the "Big Five").

• Also maybe this is relevant - CSIRO invented rot13 cynfgvp cbylzre onax abgrf bs juvpu gur 'svire' vf bar. Apr 30 '18 at 3:15

# Wrap-up: The Making Of Popularity Within a Target Audience

This is not a solution to the puzzle, but provides notes from its poser. This type of answer has been approved by the community. Caution: This post contains spoilers.

My first draft of this wrap-up was far too long-winded because I had much to include. I've now tried to boil it down to the main things I wanted to say.

• Puzzle creation started mid-March, one and a half months ago. Because of the nature of the metapuzzle, much time was spent trying to come up with a puzzle to fit a title rather than the other way around, which was a first for me.
• Originally this was intended to be a single puzzle, but since the triple pun answer was suited to a meta and to celebrate the rapidly growing Virtual YouTuber (VTuber) community I decided to turn this into a mini set.
• The substringed characters meta idea was partly due to Nijikawa Laki being embeddable in SOUVLAKI, which I found amusing. The characters ended up going in the title instead largely due to Akari being more suited as part of a title.
• The embedded clue phrase uses "Big Five" since it's used in the most comprehensive fan chart. Confusingly the more established Japanese nickname is 四天王 ("Four Heavenly Kings", despite there being five characters), and at one point the meta had "Big Four" in the clue phrase instead.
• The meta flavourtext doesn't actually hint at how to solve the meta since I didn't feel it was necessary, and is just a description of VTubers in general. In hindsight, perhaps a subtle hint or two would have made the initial aha and final extraction easier to spot.
• VTubers are gaining popularity so rapidly that, over the course of puzzle construction, Nekomiya Hinata overtook Nekomasu as 5th most subscribed VTuber, with ~275k subs after just 2 months of activity when that happened. In response, this puzzle's title was changed to include _H IN A TA_ as an honorary sixth member (the previous title was the more direct "Unveil Energetic Characters", a tribute to Eilene).
• The odd one out in the meta is Nekomasu, whose catch phrase and part of extended name NOJA was used instead. Mirai Akari was originally represented by 🔍, a search icon, to reference her nickname "Princess of Googling Yourself" (エゴサーの姫). This was changed since Nekomasu's fox technically wasn't a part of a nickname anyway, and it nicely makes the icons all animals (the butterfly motif is more obvious in Akari's first video, and the meta image uses Discord's blue butterfly emoji).
• Creations of CSIRO went through 7 different gridding attempts due to minor mistakes like using GANTT CHART for the codon TTC when it also contains ANT, or codons unintentionally being formed within the completed grid.
• The $5 note unfortunately doesn't align perfectly — this is particularly a problem in the lower right, where the IE above the AT is half-covered. It's also unfortunately ambiguous whether to take the cells half-covered by the golden wattle — in the end I decided to use them since they can actually be seen through and I didn't want to increase the grid size, as 8x16 provided a reasonable alignment without being too large.
• Enumeration was provided to assist with picking the intended cells and for parsing, especially for the abbreviation SOL. for SOLUTION. GIVE ANS was originally used, but this was changed to GIVE SOL so that GEL could be used as a grid entry.
• The deletions gimmick may feel slightly tacked on, but it served the practical purpose of allowing the obscure ARATANI to be used, to avoid a double column of mostly unchecked lights. Ironically, coming up with the banknote kicker actually happened last after the deletions gimmick was considered, although I'd been trying to work polymer banknotes in the whole time since it's one of CSIRO's most famous inventions.
• SUGAR OF LEAD got an objectively terrible clue due to needing to start with a Y, which turns out to be surprisingly difficult since few good cryptic indicators start with Y and the answer doesn't contain Y itself. A certain shrine would have worked, but I wanted to avoid needing anime knowledge in the component puzzles as much as possible.
• In hindsight, 41A's BROACHES is badly gridded from a crosswording perspective since only the ES can be gotten from crossings.
• Lunarite is the only puzzle which used an idea I had before construction on this started, though the answer was intended to be something else. Many hours were spent trying to come up with a Stardew Valley (in which lunarite is an item) and/or Harvest Moon puzzle to make the title less suspicious, but to no avail.
• NOJA was surprisingly hard to embed well, and I didn't want to use "Sino-Japanese", so I settled on "Casino Jackpot" relatively quickly. This makes it the only puzzle whose answer was decided from the title rather than the other way around.
• The above partly explains why Casino Jackpot has the only "green paint" answer that is not really a Thing. Thankfully this also helps hint that letters are important for the final meta extraction, rather than the overall answer meanings.
• Since "jackpot" evokes slot machines, I thought it'd be interesting to have a crossword filled using slot machine paylines. I quickly realised that it wasn't easy to get 3+ good answers in a single spin, leading to a copious amount of obscure cluing.
• 3 clues stooped to using anagrams. 2 are due to ambiguity (DNASE/RNASE, GENAL/MALAR), the other (KISAN) was out of fairness since it was harder to research and it was in a particularly useful set.
• Many traps were to be avoided when fact checking. Most notably, Gisan Zenrai-zenji's Wikipedia article is, as of writing, titled Gisan Zenkai which as far as I can tell is incorrect. Similarly, a search for GUAOL on Google currently gives me hits for "The Guaol", supposedly part of Titan A.E.'s soundtrack, but this is in fact a misspelling of "The Gaoul". In all honesty, I wouldn't be surprised if there was an inaccuracy in the final puzzle despite my best efforts.
• At least one set is completely unnecessary, but it was included as a contingency and to make the aha easier to spot/confirm. I've learnt that sometimes when you feel the difficulty is just right, it doesn't hurt and is sometimes better to go one notch easier.
• GIVES and GIVE I are both answers in Casino Jackpot. Duplication like this is usually best avoided in crosswords, but here I decided to turn a blind eye due to difficulty of construction. There was, however, a set I scrapped due to it completely overlapping with another on an answer.
• During meta construction, a number of Akari-related titles were considered (e.g. "Linked Data Akaris", "A Bland Akari") but none of them seemed workable. The final version uses the "Ad" in "Ad Hoc" as a hint, and I'm quite proud of it since grid deduction puzzles don't usually have good ways of extracting a single word or phrase answer.
• Despite coming up with the Akari idea, it took a fortnight before I got over the dread of constructing it. Construction took place over 5 hours one late night, starting with a reasonable placement of lights without regard for clues, solve difficulty or grid size, then tightening things up from there:
• The 0 light is necessary to produce an A in the answer. If this were a general puzzle genre, solve logic might be more interesting if all lights were 1+. | 2021-10-16 09:55:51
https://www.mail-archive.com/ntg-context@ntg.nl/msg94290.html

# Re: [NTG-context] mkiv digits/units zero padding not working
Hi Wolfgang,
you are (of course) right again. I realised that I wouldn’t get the expected
behaviour after checking the snippet isolated from my document’s context, where
it is embedded in a \startplacetable[…]{}{}. I’m still learning to get the gist
of the \doifs, the curly and square bracketed arguments and so on. Thanks for
the hint!
Seems like I’m going to make three cells and span the header column for now,
though I guess it would be a nice feature to have the padding working in the
other cases.
I’ll write a feature request for no. 4.
Thanks!
> On 7 May 2020, at 20:00, Wolfgang Schuster
> <wolfgang.schuster.li...@gmail.com> wrote:
>
> Benjamin Buchmuller schrieb am 07.05.2020 um 19:41:
>> Hi Wolfgang,
>> Thank you for your reply. I have indeed not explained my intended result
>> very clearly.
>> 1.
>> Primarily, I need to get the two values aligned at the digit separator of
>> the first and second number respectively and overall at the ± sign. I’m
>> working in an xtable, where I have entries such as
>> \startxcell \mpm{14.0==}{_1.5==} \stopxcell
>> \startxcell \mpm{_0.034}{_0.013} \stopxcell
>> and defined
>> \def\mpm#1#2{
>> \ifsecondargument
>> \digits{#1}\,±\,\digits{#2}%
>> \else
>> \digits{#1}%
>> \fi
>> }
>
> Is there something missing in here? The \ifsecondargument check here
> makes no sense because the second argument is mandatory and not optional.
>
> Is this what you want?
>
> \define[2]\mpm
> {\digits{#1}%
> \doifsomething{#2}{\,±\,\digits{#2}}}
>
>> Since I was hoping that I could exploit the zeropadding of \digits to get
>> the format right. Indeed, it would save a lot of typing, if I wouldn’t have
>> to specify the padding manually and I vaguely recall that there is somewhere
>> a ConTeXt solution that can make such alignments, but I simply can’t find it
>> any more …
>
> You can align numbers on the decimal point (comma), but this works only
> when there is a single number in a cell.
>
> \starttext
>
> \startxtable[aligncharacter=yes,alignmentcharacter=±]
> \startxrow
> \startxcell
> \digits {14.0} ± \digits {1.5}
> \stopxcell
> \stopxrow
> \startxrow
> \startxcell
> \digits {0.034} ± \digits {0.013}
> \stopxcell
> \stopxrow
> \stopxtable
>
> \stoptext
>
>> 2. + 3.
>> Absolutely right, this is my bad. I badly mixed up Hans's solution to
>> a similar problem,
>> https://www.mail-archive.com/ntg-context@ntg.nl/msg00724.html
>> which was actually \def\zeroamount{-} and the example in the source, I
>> didn’t read properly. Just skip that part. :)
>
> The message is from 2003!
>
>> 4.
>> Indeed,
>> \startxcell \mpm{14.==}{_1.5=} \stopxcell
>> \startxcell \mpm{_0.03}{_0.01} \stopxcell
>> aligns properly. But sometimes, I have the first digit specified, but not
>> the second and unfortunately this doesn’t work
>> \startxcell \mpm{14.5=}{_1.5=} \stopxcell
>> \startxcell \mpm{_0.03}{_0.01} \stopxcell
>> because = is not immediately preceded by .
>
> Can you write another mail with a request for this?
>
> Wolfgang
___________________________________________________________________________________ | 2020-09-23 03:59:36
https://citizendium.org/wiki/Talk:Acceleration_due_to_gravity

# Talk:Acceleration due to gravity
Definition: The acceleration of a ponderable object, which is near the surface of the Earth, due to the Earth's gravitational force.
Workgroup categories: Physics and Engineering. English language variant: American English.
## Can we simplify it a bit?
This part of the article seems to repeat essentially the same equation twice:
" ... is given by:
$\vec{g} = -G\,\frac{M}{r^{2}}\,\frac{\vec{r}}{r}$
The magnitude of the acceleration is $g = GM/r^{2}$, with SI units of meters per second squared.
Here G is the universal gravitational constant, G = 6.67428×10⁻¹¹ N m²/kg²,[1] $\vec{r}$ is the position of the test object in the field relative to the centre of mass M, and r is the magnitude (length) of $\vec{r}$."
I realize that the equation $\vec{g} = -G\,\frac{M}{r^{2}}\,\frac{\vec{r}}{r}$ includes all of the conventions used by physicists and mathematicians, but it simply confuses those of us who are not physicists and mathematicians. Can we not simplify it thus:
" ... is given by:
$g = GM/r^{2}$, with SI units of meters per second squared.
G is the universal gravitational constant = 6.67428×10⁻¹¹ N m²/kg²,[2] and r is the distance between the test object and the centre of mass M."
Those of us who are not physicists or mathematicians would find it much easier to understand if it were simplified as proposed. - Milton Beychok 03:18, 26 February 2008 (CST)
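As a quick numerical illustration of the simplified formula $g = GM/r^{2}$, plugging in commonly quoted Earth values recovers the familiar ~9.8 m/s² (the mass and radius below are assumed round values, not taken from this page):

```python
# Evaluate g = G*M/r^2 with commonly quoted Earth values (illustrative assumptions).
G = 6.67428e-11   # universal gravitational constant, N m^2/kg^2 (value cited above)
M = 5.9722e24     # mass of the Earth, kg (assumed)
r = 6.371e6       # mean radius of the Earth, m (assumed)

g = G * M / r**2
print(g)  # roughly 9.82 m/s^2, within a percent of the standard value
```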
## Value of g
As far as I remember g varies by a percent or so over the earth. How can we then give so many decimals? Is there some sort of standard value?--Paul Wormer 03:30, 26 February 2008 (CST)
That is the value agreed upon by the Conférence Générale des Poids et Mesures, CGPM in 1901 as referenced in the article. I assume that it is a sea level value so it is not affected by the altitude of any locations. - Milton Beychok 04:08, 26 February 2008 (CST)
I looked around on the internet and I get the impression that gn = 9.80665 m/s² is defined by the CGPM as the standard acceleration (a fictitious value) and that g is the local acceleration (which is the real physical value that varies by almost a percent over the globe). I read the first few sentences of the article slightly differently.--Paul Wormer 09:19, 26 February 2008 (CST)
Paul, I have no objection to your re-write of the first few sentences. I just want to say that here we have a good example of the classical difference between a physcist and an engineer. Most engineers simply use g = 9.807 or even 9.8 when doing their fluid dynamics calculations and, in 99% of an engineer's work, a possible 1% error is totally insignificant. In fact, we would be very happy if the rest of our work was as good as that.
Changing the subject, I am moving the "see also" link to the "Related articles" subpage. It is my understanding that is where such links belong. - Milton Beychok 12:39, 26 February 2008 (CST)
I agree completely with your remark about accuracy, but it was not I who put all decimals of g into the article :-) --Paul Wormer 02:09, 27 February 2008 (CST)
## Notation
I'm not happy with the use of g for the exact (1/r2, non-linearized) attraction. As far as I'm aware g is used only for the linear (in height h) form of gravitation. For the time being I changed it to f, but please feel free to change it to something else.--Paul Wormer 02:17, 27 February 2008 (CST)
Paul, I like you and admire your erudition ... and I can only hope that this lengthy posting does not offend you.
• About 2 weeks ago, I wrote an article on flue gas stacks which included a section on flue gas draft (or draught) that included some equations for approximate estimation of what I call the "stack effect" or "chimney effect". The local gravitational acceleration (g = 9.807 m/s²) was included in those equations and there was no article in CZ to link it to. I read Gravitation and found no mention of g at that time (although it has since been added into that article by 'Dragon' Dave McKee) ... instead Gravitation gets involved in time-space and general relativity. So I thought that once CZ attracts more engineers and more equations involving g begin to be written, we will need an article that simply defines g.
• On February 22, I posted a message in the Physics Workgroup's mailing list and asked if anyone would please write an article defining the local gravitational acceleration g in "... very simple plain English ...". You can find that message in the mailing list archive for February.
• On February 23, 'Dragon' Dave McKee created this article. Shortly thereafter, I added in the bit about gn as defined by the CGPM in 1901 ... over a century ago ... which now appears in many, many textbooks and even more technical journal articles.
• On February 26, on this Talk page,I asked that this article be simplified, and you graciously did so.
However, even now, the article seems to be focused more on $f$ than on gn. Can we not have an article that simply defines gn? After all, anyone wanting to read about $f$ can read Gravitation where it is referred to as $F$.
In essence, I think that everything after " ... different locations around the world." could be deleted since it is already in the Gravitation article. - Milton Beychok 12:44, 27 February 2008 (CST)
Milton, I commented out the two last paragraphs. I did not delete them in case somebody wants to restore them.
You give me too much credit:
• The article gravitation originates for the largest part from WP, I only added the section depending on height h (which BTW contains the term gravitational acceleration in bold together with its definition and value in three digits). Last weekend 'Dragon' Dave McKee added a value of g which was off by a factor of 10. General relativity and such comes from WP.
• The last two paragraphs (which I commented out) in the present article were written by Roger Moore in reaction to the flawed discussion by 'Dragon' Dave McKee. The Dragon drew an ellipse with the force center in the middle of the ellipse instead of in one of the foci. His discussion did not emphasize g, either. All I did was modifying the discussion of Roger three times (the first modification was on your request, the second was that I didn't like the same symbol for the exact and the linearized force, and now the third is again on your request).
• I was not aware of the history of the article, for me the contribution of the Dragon came out of the blue. I cannot find your msg in the physics mailing list, I don't know why.
--Paul Wormer 02:17, 28 February 2008 (CST)
Paul, thanks for your response and I appreciate your having commented out the last two paragraphs. Here is the url for my message in the physics mailing list: http://mail.citizendium.org/pipermail/cz-physics/2008-February/000002.html - Milton Beychok 10:41, 28 February 2008 (CST)
Milton, thank you for the link, this was clarifying. I see now that the "Dragon" only translated into Wiki some (Word/pdf) text offered by a Rumanian professor.
I still don't understand why I cannot find the physics mailing list on my own, Rumanian professors and Californian engineers are much smarter than I. I posted a help request on the forum.--Paul Wormer 11:05, 28 February 2008 (CST)
Paul,simply go to CZ:Physics Workgroup and you will see on the right-hand side "Mailing list" and underneath it "cz-physics". Just click on "cz-physics". When you get there, click on "Cz-physics Archives". - Milton Beychok 13:50, 28 February 2008 (CST)
## disambiguation pages are needed
It seems to me that you guys need a gravity (disambiguation) page, which could include something like the following:
• gravity (space-time)
• gravity (Classical)
• gravity (Earth)
• G (universal gravity constant)
• g (acceleration due to gravity) - not the letter "g"
• gn (Earth's gravity constant)
David E. Volk 11:03, 28 February 2008 (CST) and so on.
## Acceleration due to gravity
The gravitational field given is that of a point mass (or a spherical mass outside the radius of the object). The field of an oblate spheroid is not the same as that of a sphere and cannot depend solely on the distance from the centre of the spheroid, since the distribution of mass inside the spheroid (which generates the field) is important, so there must be some dependence on the major and minor axes; e.g. if I sit on the major axis (theta=0) and increase the minor axis there will be zero change in the gravitational field according to the article, yet clearly the mass is now distributed at a greater distance from my location so the field should reduce.
I don't have time to calculate the correct field (and can't find it easily on the net) so I have edited the start of the article to correct a few things there and removed the incorrect part at the end. Some of this may want to be restored when the correct field can be added (or possibly assumptions about near sphericity explicitly stated?) but I thought it best not to leave text that is wrong remain. Roger Moore 17:16, 24 February 2008 (CST)
I've fixed a few more errors in the article which have cropped up:
• 'g' is not a constant, it is the value of the local gravitational field anywhere (not just on the Earth or other planets).
• 'g' is a vector so technically it describes both the magnitude and direction of the field but I thought it easiest just to omit 'magnitude' rather than give the details.
• the potential actually goes as $1/r$: it is the integral of the force which goes as $1/r^{2}$.
Roger Moore 02:48, 19 March 2008 (CDT)
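Roger's last point, that the potential goes as $1/r$ while the force goes as $1/r^{2}$, can be checked with a quick numerical derivative (a sketch; the Earth values are illustrative assumptions):

```python
# Central-difference check that the radial derivative of V(r) = G*M/r
# has magnitude G*M/r^2 (the sign depends on the chosen convention).
G = 6.67428e-11   # N m^2/kg^2
M = 5.9722e24     # kg (assumed Earth mass)
r = 6.371e6       # m (assumed Earth radius)
h = 1.0           # finite-difference step, m

def V(rr):
    return G * M / rr            # potential ~ 1/r

dVdr = (V(r + h) - V(r - h)) / (2 * h)   # numerical d/dr of V
g = G * M / r**2                 # force per unit mass ~ 1/r^2
assert abs(-dVdr - g) < 1e-6     # -dV/dr reproduces G*M/r^2
```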
## Please look at the last edit of Acceleration due to gravity
The following is a copy of talk page of Paul Wormer. --Paul Wormer 13:44, 25 March 2008 (CDT)
Paul, would you please look at last edit (by Richard Moore) of the Acceleration due to gravity article. Is it correct? And again, can it be simplified? At the very least, the parameters should be defined:
• Why not define $V_{G}$?
• Why not also define $G$?
• Why not use $m$ instead of $M$ and also define it as the Earth's mass?
That last editor assumes that all of the readers will be physicists and will understand his equation without any explanation.
Would it be incorrect to simply replace his equation with:
$g \equiv \frac{Gm_{e}}{r_{e}^{2}}$ where $G$ is the universal gravitational constant, $m_{e}$ is Earth's mass and $r_{e}$ is the Earth's radius
which is as written in the article Gravitation#Gravitational potential. Or am I completely incorrect? Regards, - Milton Beychok 18:30, 24 March 2008 (CDT)
Milton, I wrote the section Gravitation#Gravitational potential and I believe it to be correct. Roger (not Richard) Moore made the point that acceleration is a vector, which is why he wrote fat g. This is true, but in the approximation that the Earth is a non-spinning, perfect, homogeneous sphere the vector character is not so relevant, because then g is a vector with one component only (directed to the center of the sphere).
Altogether the article now suffers a severe case of Wikipeditis. It started out completely wrong with an erroneous Kepler orbit, and then different people (including myself) tried to clean it up as politely as possible. By polite I mean trying to save as much as possible of the work of the previous author. Your points are well-taken, VG, G, and M must be defined. There is some duplication in the article that should be removed as well. Maybe I will write a new version from scratch tomorrow, discarding politeness. The only thing I don't understand is, why is there a standard value with 6 decimal figures, this is unphysical as the variation in the value is in the third decimal figure. Is the standard perhaps meant to enable checking of computations?--Paul Wormer 21:06, 24 March 2008 (CDT)
Paul, thanks for offering to rewrite the article. I don't care if g is expressed to 6 decimal figures or 3 decimal figures. As I have said before, g is used a great deal by engineers dealing with fluid dynamics and we need a straightforward article about g which is clearly written in understandable language that we can link to when writing articles. In other words, an article that can be understood by reasonably intelligent people who are not advanced physicists. I look forward to seeing your rewrite. Regards, - Milton Beychok 22:18, 24 March 2008 (CDT)
End copy from talk page of Paul Wormer
I think the article can do both. Yes, the introductory section, and perhaps the first section after that, should clearly and simply explain the basic, 'simple' form. Then later sections could tackle the more advanced issues. J. Noel Chiappa 13:58, 25 March 2008 (CDT)
## Removed material
The following content was commented out, and was removed; putting a copy here so people have easy access to it:
In the sciences, the term acceleration due to gravity refers to a quantity g describing the strength of the local gravitational field. The quantity has dimension of acceleration, i.e., m/s2 (length per time squared) whence its name.
In the article on gravitation it is shown that for a relatively small altitude h above the surface of a large, homogeneous, massive sphere (such as a planet) Newton's gravitational potential V is to a good approximation linear in h: V(h) = g h, where g is the acceleration due to gravity. This approximation relies on h << R_sphere (where R_sphere is the radius of the sphere). The exact gravitational potential is not linear, but is inversely proportional to the distance, r, from the centre of the Earth:
$V_{G} = \frac{GM}{r}$.
On Earth, the term standard acceleration due to gravity refers to the value of 9.80665 m/s² and is denoted as gn. That value was agreed upon by the 3rd General Conference on Weights and Measures (Conférence Générale des Poids et Mesures, CGPM) in 1901.[3][4] The actual value of acceleration due to gravity varies somewhat over the surface of the Earth; g is referred to as the local gravitational acceleration.
Any object of mass m near the Earth (for which the altitude h << R_Earth) is subject to a force m g in the downward direction that causes an acceleration of magnitude gn toward the surface of the earth. This value serves as an excellent approximation for the local acceleration due to gravitation at the surface of the earth, although it is not exact and the actual acceleration g varies slightly between different locations around the world.
More generally, the acceleration due to gravity refers to the magnitude of the force on some test object due to the mass of another object. Under Newtonian gravity the gravitational field strength, due to a spherically symmetric object of mass M is given by:
$f = G\,\frac{M}{r^{2}}.$
The magnitude of the acceleration f is expressed in SI units of meters per second squared. Here G is the universal gravitational constant G = 6.67428×10⁻¹¹ N m²/kg² [5] and $r$ is the distance from the test object to the centre of mass of the Earth and M is the mass of the Earth.
In physics, it is common to see acceleration as a vector, with an absolute value (magnitude, length) f and a direction from the test object toward the center of mass of the Earth (antiparallel to the position vector of the test object), hence as a vector the acceleration is:
$\vec{f} = -G\,\frac{M}{r^{2}}\,\vec{e}_{r} \quad \text{with} \quad \vec{e}_{r} \equiv \frac{\vec{r}}{r}.$ | 2022-06-26 11:12:10
https://forum.math.toronto.edu/index.php?PHPSESSID=a822j273a85q6l7745l44ouus5&topic=2483.0;prev_next=next

Author Topic: Quiz 2 Section 6101
Xuefen luo
Quiz 2 Section 6101
« on: October 06, 2020, 06:50:19 AM »
Find the limit of the function at the given point, or explain why it does not exist.
\begin{align*}
f(z) &= \frac{z^3-8i}{z+2i} \quad (z \neq -2i) \quad \text{at } z_0=-2i
\end{align*} | 2022-05-23 02:43:37
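The thread contains only the problem statement. As a sketch (not part of the original post), note that $z_0=-2i$ is a root of the numerator, so $z^3-8i=(z+2i)(z^2-2iz-4)$ cancels the pole and the limit is the quadratic factor evaluated at $-2i$, namely $-12$. A quick numerical check:

```python
# Check the factorization z^3 - 8i = (z + 2i)(z^2 - 2i z - 4) and the limit -12.
# In Python, 1j denotes the imaginary unit i.
z = 0.3 + 0.7j   # arbitrary sample point for the factorization check
assert abs((z + 2j) * (z**2 - 2j*z - 4) - (z**3 - 8j)) < 1e-12

z0 = -2j
for eps in (1e-3, 1e-6):
    f = ((z0 + eps)**3 - 8j) / ((z0 + eps) + 2j)
    assert abs(f - (-12)) < 10 * eps   # quotient approaches -12 as z -> z0
```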
http://math.stackexchange.com/questions/134482/solving-a-differential-equation-involving-y-and-its-exponential

# Solving a differential equation involving $y$ and its exponential
Hi all, I have a question I've been asked to solve. But I have no idea where to begin.
The equation is $y'=\dfrac{y+e^x}{x+e^y}$.
I think this is homogeneous but I have no idea as to how to manipulate this to get it into the required form.
Maybe you can use the fact that you have an equation where $$y' = f\left( {x,y} \right) = \frac{1}{{f\left( {y,x} \right)}}$$ – Pedro Tamaroff Apr 20 '12 at 18:56
I do not think it it homogeneous. – Fabian Apr 20 '12 at 19:04
A solution is given by $y(x)=x$ . – Fabian Apr 20 '12 at 19:14
In general, if you have a solution $y=f(x)$ the inverse function $y=f^{-1}(x)$ is also a solution. – Fabian Apr 20 '12 at 19:50
Even if the diff-equation cannot be solved, you can get a pretty good understanding of the solution by noting that the function is monotonically increasing and approaches $y=x$ for large $x$. – Fabian Apr 20 '12 at 19:55
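Fabian's claim is easy to verify: along $y=x$ the right-hand side is $(x+e^x)/(x+e^y)|_{y=x}=1=y'$, and Pedro's symmetry observation gives $f(x,y)\,f(y,x)=1$. A small numerical sanity check (a sketch, assuming the equation as posted):

```python
import math

# Right-hand side of y' = (y + e^x)/(x + e^y)
def rhs(x, y):
    return (y + math.exp(x)) / (x + math.exp(y))

# Along y = x the RHS is identically 1, matching y' = 1
# (avoid x near -0.567, where x + e^x = 0 and the expression is singular).
for x in (0.5, 1.0, 2.0, 5.0):
    assert abs(rhs(x, x) - 1.0) < 1e-12

# Pedro's observation: f(x, y) = 1/f(y, x)
assert abs(rhs(0.3, 1.7) * rhs(1.7, 0.3) - 1.0) < 1e-12
```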
Maple 16 does not find a closed-form solution, or any symmetries. This strongly suggests that there is no closed-form solution. Almost certainly there are no closed-form solutions that can be found by elementary techniques.
How about $y(x)=x$ suggested by @Fabian ? – Sasha Apr 20 '12 at 19:47
Neither does Mathematica 8. – Ayman Hourieh Apr 20 '12 at 19:49
Maybe the OP made a typo and forgot a minus sign? – Fabian Apr 20 '12 at 19:50
@Fabian Maybe. If so, it'd be exact and everthing would be OK. – Pedro Tamaroff Apr 20 '12 at 19:52
@Sasha: ok, that's one closed-form solution, but not a general solution. – Robert Israel Apr 21 '12 at 1:00
We can write the ode in the form $\omega=Mdx+Ndy=0,$ where $M=-(y+e^x)$ and $N=x+e^y.$
This means that we replace the search for solutions of the ode with the search for curves $\gamma(t)=(x(t),y(t))$ such that $\gamma^\ast\omega=0.$
As shown in Pedro Tamaroff's answer, $\omega$ is not closed: $d\omega\neq 0.$ However it can be shown (invoking Frobenius' theorem) that there exists a non-vanishing function $\mu$ s.t. $\mu\omega$ is exact, i.e. $d(\mu\omega)=0.$
To find $\mu$ we need a solution for the $1^{\textrm{st}}$-order linear pde $$0=\frac{\partial \mu N}{\partial x}-\frac{\partial\mu M}{\partial y}\equiv(x+e^y)\partial_x\mu+(y+e^x)\partial_y\mu+2\mu.$$
... which is as hard to solve as the original ode. – Robert Israel Apr 20 '12 at 19:47
I posted it just to make explicit my difficulty. – Giuseppe Tortorella Apr 20 '12 at 19:51 | 2015-05-27 10:47:33
https://shtools.oca.eu/shtools/public/pymakegradientdh.html

# MakeGradientDH

Compute the gradient of a scalar function and return grids of the two horizontal components that conform with Driscoll and Healy's (1994) sampling theorem.
## Usage
theta, phi = MakeGradientDH (cilm, [lmax, sampling, lmax_calc, extend])
## Returns
theta : float, dimension (nlat, nlong)
A 2D map of the theta component of the horizontal gradient that conforms to the sampling theorem of Driscoll and Healy (1994). If sampling is 1, the grid is equally sampled and is dimensioned as (n by n), where n is 2lmax+2. If sampling is 2, the grid is equally spaced and is dimensioned as (n by 2n). The first latitudinal band of the grid corresponds to 90 N, the latitudinal sampling interval is 180/n degrees, and the default behavior is to exclude the latitudinal band for 90 S. The first longitudinal band of the grid is 0 E, by default the longitudinal band for 360 E is not included, and the longitudinal sampling interval is 360/n for an equally sampled and 180/n for an equally spaced grid, respectively. If extend is 1, the longitudinal band for 360 E and the latitudinal band for 90 S will be included, which increases each of the dimensions of the grid by 1.
phi : float, dimension (nlat, nlong)
A 2D equally sampled or equally spaced grid of the phi component of the horizontal gradient.
## Parameters
cilm : float, dimension (2, lmaxin+1, lmaxin+1)
The real 4-pi normalized spherical harmonic coefficients of a scalar function. The coefficients c1lm and c2lm refer to the cosine and sine coefficients, respectively, with c1lm=cilm[0,l,m] and c2lm=cilm[1,l,m].
lmax : optional, integer, default = lmaxin
The maximum spherical harmonic degree of the coefficients cilm. This determines the number of samples of the output grids, n=2lmax+2, and the latitudinal sampling interval, 90/(lmax+1).
sampling : optional, integer, default = 1
If 1 (default) the output grids are equally sampled (n by n). If 2, the grids are equally spaced (n by 2n).
lmax_calc : optional, integer, default = lmax
The maximum spherical harmonic degree used in evaluating the functions. This must be less than or equal to lmax.
extend : optional, bool, default = False
If True, compute the longitudinal band for 360 E and the latitudinal band for 90 S. This increases each of the dimensions of the output grids by 1.
## Description
MakeGradientDH will compute the horizontal gradient of a scalar function on a sphere defined by the spherical harmonic coefficients cilm. The output grids of the theta and phi components of the gradient are either equally sampled (n by n) or equally spaced (n by 2n) in latitude and longitude. The gradient is given by the formula
Grad F = (1/r) dF/dtheta theta-hat + (1/(r sin theta)) dF/dphi phi-hat,
where theta is colatitude and phi is longitude. The radius r is taken from the degree-zero coefficient of the input function.
The default is to use an input grid that is equally sampled (n by n), but this can be changed to use an equally spaced grid (n by 2n) by the optional argument sampling. The redundant longitudinal band for 360 E and the latitudinal band for 90 S are excluded by default, but these can be computed by specifying the optional argument extend.
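The output-grid dimension rules above can be summarized in a short Python sketch. Note that `dh_grid_shape` is a hypothetical helper written for illustration; it is not part of pyshtools itself:

```python
# Sketch of the Driscoll-Healy output-grid dimensions described above
# (hypothetical helper for illustration; not part of pyshtools).

def dh_grid_shape(lmax, sampling=1, extend=False):
    """Return (nlat, nlon) for an output grid with n = 2*lmax + 2."""
    n = 2 * lmax + 2
    nlat, nlon = n, n * sampling          # sampling=1: n by n; sampling=2: n by 2n
    if extend:
        nlat, nlon = nlat + 1, nlon + 1   # add the 90 S and 360 E bands
    return nlat, nlon

print(dh_grid_shape(15))              # (32, 32): equally sampled
print(dh_grid_shape(15, sampling=2))  # (32, 64): equally spaced
print(dh_grid_shape(15, 2, True))     # (33, 65): extended grid
```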
## Reference
Driscoll, J.R. and D.M. Healy, Computing Fourier transforms and convolutions on the 2-sphere, Adv. Appl. Math., 15, 202-250, 1994.
Tags: | 2021-02-26 15:29:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7455140948295593, "perplexity": 2363.341525244234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357929.4/warc/CC-MAIN-20210226145416-20210226175416-00631.warc.gz"} |
https://www.math.sci.hokudai.ac.jp/en/seminar-index/seminar_2638.php | ## PDE Seminar Regularity Criterion for Weak Solutions to the Navier-Stokes Equations in Terms of the Gradient of the Pressure
Date
2008-12-15 16:30 - 2008-12-15 17:30
Place
Faculty of Science Building #5 Room 302
Speaker/Organizer
Jishan Fan (Hokkaido University)
We prove a regularity criterion $\nabla \pi \in L^{2/3}(0,T;BMO)$ for weak solutions to the Navier-Stokes equations in three space dimensions. This improves the available result with $L^{2/3}(0,T;L^\infty)$.
http://coe.math.sci.hokudai.ac.jp/sympo/pde/ | 2020-06-04 05:35:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2801387310028076, "perplexity": 2375.821991737014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347439019.86/warc/CC-MAIN-20200604032435-20200604062435-00244.warc.gz"} |
https://mathspace.co/textbooks/syllabuses/Syllabus-409/topics/Topic-7251/subtopics/Subtopic-96877/?activeTab=interactive | NZ Level 6 (NZC) Level 1 (NCEA)
Factorise algebraic factors
Interactive practice questions
Fill in the boxes to complete the equality:
$11u-19u^2=u\left(\editable{}-\editable{}\right)$
Factorise the following expression by taking out the highest common factor:
$42x-x^2$
Factorise the following expression by taking out the highest common factor:
$4pqr-9pst$
Factorise the following expression:
$2yz-16xy+16xy^2z$
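A factorisation like the last one can be sanity-checked by comparing both sides numerically at a few sample points. The factored form below, 2y(z - 8x + 8xyz), is our own working rather than the site's answer key:

```python
# Check the candidate factorisation 2yz - 16xy + 16xy^2*z = 2y(z - 8x + 8xyz)
# by evaluating both sides at several sample points (our own working,
# not the site's answer key).
def lhs(x, y, z):
    return 2*y*z - 16*x*y + 16*x*y**2*z

def rhs(x, y, z):
    return 2*y * (z - 8*x + 8*x*y*z)

for point in [(1, 2, 3), (-2, 5, 0.5), (0.1, -0.3, 7.0)]:
    assert abs(lhs(*point) - rhs(*point)) < 1e-9
print("factorisation agrees at all sample points")
```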
Outcomes
NA6-6
Generalise the properties of operations with rational numbers, including the properties of exponents
91027
Apply algebraic procedures in solving problems | 2021-09-21 23:20:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4625520706176758, "perplexity": 2564.305811548038}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057274.97/warc/CC-MAIN-20210921221605-20210922011605-00394.warc.gz"} |
https://www.investopedia.com/terms/u/uniform-distribution.asp | • General
• Personal Finance
• Reviews & Ratings
• Wealth Management
• Popular Courses
• Courses by Topic
# Uniform Distribution
## What Is Uniform Distribution?
In statistics, uniform distribution refers to a type of probability distribution in which all outcomes are equally likely. A deck of cards has within it uniform distributions because the likelihood of drawing a heart, a club, a diamond, or a spade is equally likely. A coin also has a uniform distribution because the probability of getting either heads or tails in a coin toss is the same.
The uniform distribution can be visualized as a straight horizontal line, so for a coin flip returning a head or tail, both have a probability p = 0.50 and would be depicted by a line from the y-axis at 0.50.
### Key Takeaways
• Uniform distributions are probability distributions with equally likely outcomes.
• In a discrete uniform distribution, outcomes are discrete and have the same probability.
• In a continuous uniform distribution, outcomes are continuous and infinite.
• In a normal distribution, data around the mean occur more frequently.
• The frequency of occurrence decreases the farther you are from the mean in a normal distribution.
## Understanding Uniform Distribution
There are two types of uniform distributions: discrete and continuous. The possible results of rolling a die provide an example of a discrete uniform distribution: it is possible to roll a 1, 2, 3, 4, 5, or 6, but it is not possible to roll a 2.3, 4.7, or 5.5. Therefore, the roll of a die generates a discrete distribution with p = 1/6 for each outcome. There are only 6 possible values to return and nothing in between.
The plotted results from rolling a single die will be discretely uniform, whereas the plotted results (averages) from rolling two or more dice will be normally distributed.
Some uniform distributions are continuous rather than discrete. An idealized random number generator would be considered a continuous uniform distribution. With this type of distribution, every point in the continuous range between 0.0 and 1.0 has an equal opportunity of appearing, yet there is an infinite number of points between 0.0 and 1.0.
There are several other important continuous distributions, such as the normal distribution, chi-square, and Student's t-distribution.
There are also several data generating or data analyzing functions associated with distributions to help understand the variables and their variance within a data set. These functions include probability density function, cumulative density, and moment generating functions.
## Visualizing Uniform Distributions
A distribution is a simple way to visualize a set of data. It can be shown either as a graph or in a list, revealing which values of a random variable have lower or higher chances of happening. There are many different types of probability distributions, and the uniform distribution is perhaps the simplest of them all.
Under a uniform distribution, each value in the set of possible values has the same possibility of happening. When displayed as a bar or line graph, this distribution has the same height for each potential outcome. In this way, it can look like a rectangle and therefore is sometimes described as the rectangular distribution. If you think about the possibility of drawing a particular suit from a deck of playing cards, there is a random yet equal chance of pulling a heart as there is for pulling a spade—that is, 1/4 or 25%.
The roll of a single die yields one of six numbers: 1, 2, 3, 4, 5, or 6. Because there are only 6 possible outcomes, the probability of landing on any one of them is 16.67% (1/6). When plotted on a graph, the distribution is represented as a horizontal line, with each possible outcome captured on the x-axis, at the fixed point of probability along the y-axis.
## Uniform Distribution vs. Normal Distribution
Probability distributions help you decide the probability of a future event. Some of the most common probability distributions are discrete uniform, binomial, continuous uniform, normal, and exponential. Perhaps one of the most familiar and widely used is the normal distribution, often depicted as a bell curve.
Normal distributions show how continuous data is distributed and assert that most of the data is concentrated on the mean or average. In a normal distribution, the area under the curve equals 1, and 68.27% of all data falls within 1 standard deviation (a measure of how dispersed the numbers are) of the mean; 95.45% of all data falls within 2 standard deviations of the mean, and approximately 99.73% of all data falls within 3 standard deviations of the mean. As the data moves away from the mean, the frequency of data occurring decreases.
Discrete uniform distribution shows that variables in a range have the same probability of occurring. There are no variations in probable outcomes and the data is discrete, rather than continuous. Its shape resembles a rectangle, rather than the normal distribution's bell. Like a normal distribution, however, the area under the graph is equal to 1.
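The 68/95/99.7 percentages quoted above follow from the error function: the fraction of normally distributed data within k standard deviations of the mean is erf(k/√2). A quick check:

```python
from math import erf, sqrt

# Fraction of normally distributed data within k standard deviations
# of the mean: erf(k / sqrt(2)).
def within_k_sigma(k):
    return erf(k / sqrt(2))

for k in (1, 2, 3):
    print(f"{k} sigma: {within_k_sigma(k):.2%}")
# 1 sigma: 68.27%, 2 sigma: 95.45%, 3 sigma: 99.73%
```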
## Example of Uniform Distribution
There are 52 cards in a traditional deck of cards. In it are four suits: hearts, diamonds, clubs, and spades. Each suit contains an A, 2, 3, 4, 5, 6, 7, 8, 9, 10, J, Q, and K; many decks also come with 2 jokers. However, we'll do away with the jokers and face cards for this example, focusing only on number cards replicated in each suit. As a result, we are left with 40 cards, a set of discrete data.
Suppose you want to know the probability of pulling a 2 of hearts from the modified deck. The probability of pulling a 2 of hearts is 1/40 or 2.5%. Each card is unique; therefore, the likelihood that you will pull any one of the cards in the deck is the same.
Now, let's consider the likelihood of pulling a heart from the deck. The probability is significantly higher. Why? We are now only concerned with the suits in the deck. Since there are only four suits, pulling a heart yields a probability of 1/4 or 25%.
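The two card-deck probabilities worked out above are easy to reproduce with exact fractions; a quick sketch:

```python
from fractions import Fraction

# 40-card deck from the example: 4 suits x 10 number cards (A through 10).
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [(rank, suit) for suit in suits for rank in range(1, 11)]

p_two_of_hearts = Fraction(sum(1 for c in deck if c == (2, "hearts")), len(deck))
p_any_heart = Fraction(sum(1 for c in deck if c[1] == "hearts"), len(deck))

print(p_two_of_hearts, float(p_two_of_hearts))  # 1/40 -> 0.025 (2.5%)
print(p_any_heart, float(p_any_heart))          # 1/4  -> 0.25  (25%)
```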
## Uniform Distribution FAQs
### What Does Uniform Distribution Mean?
Uniform distribution is a probability distribution that asserts that the outcomes for a discrete set of data have the same probability.
### What Is the Formula for Uniform Distribution?
The formula for a discrete uniform distribution is
$$\begin{aligned}&P_x = \frac{ 1 }{ n } \\ &\textbf{where:} \\ &P_x = \text{Probability of a discrete value} \\ &n = \text{Number of values in the range} \end{aligned}$$
As with the example of the die, each side contains a unique whole number. The probability of rolling the die and getting any one number is 1/6, or 16.67%.
### Is a Uniform Distribution Normal?
Normal indicates the way data is distributed about the mean. Normal data shows that the probability of a variable occurring around the mean, or the center, is higher. Fewer data points are observed the farther you move away from this average, meaning the probability of a variable occurring far away from the mean is lower. The probability is not uniform with normal data, whereas it is constant with a uniform distribution. Therefore, a uniform distribution is not normal.
### What Is the Expectation of a Uniform Distribution?
It is expected that a uniform distribution will result in all possible outcomes having the same probability. The probability for one variable is the same for another.
Article Sources
1. National Institute of Standards and Technology. "What do we mean by normal data?" Accessed April 2, 2021.
Description | 2023-02-03 23:16:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.617511510848999, "perplexity": 402.50781993639174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500076.87/warc/CC-MAIN-20230203221113-20230204011113-00199.warc.gz"} |
https://doc.simo.com.hk/language_basics/ | # Basics
## Array
• An array is a set of numbers, characters, or logical values organized as a rectangular form.
• An array can be a scalar, vector, matrix, or multi-dimensional.
## Length
• For an array, the length of its k-th dimension is the number of elements along that dimension.
• For example, in each of the following arrays, the length of the second dimension is 2:
ones(3,2,1)
[1 2;3 4]
zeros(1,2,3,4)
## Scalar
• A scalar is an array having only one number, character, or logical value (i.e., all of its dimensions have length 1).
• The following are examples of scalars:
ones(1,1,1,1)
zeros(1,1)
1
-0.05
[2]
'a'
'?'
true
false
cos(90)
2-0.4
## Vector
• A vector can be a row vector or a column vector.
• A row vector is an array in which every dimension has length 1 except possibly the second.
• The following are examples of row vectors:
'abcd1234'
[true false true false]
zeros(1,3,1,1,1)
[1 2 3]
linspace(1,2)
2:2:20
• A column vector is an array in which every dimension has length 1 except possibly the first.
• The following are examples of column vectors:
['abcd1234']'
[true false true false]'
zeros(3,1,1,1)
[1; 2; 3]
linspace(1,2)'
(2:2:20)'
## Matrix
• A matrix is an array in which every dimension except possibly the first two has length 1.
• The following are examples of matrices:
ones(4,5,1,1)
diag(1:10)
['abc'; 'def']
[1 2;3 4]
## Multi-dimensional array
An array is said to be multi-dimensional if its k-th dimension, where k > 2, has length greater than 1. The following are examples of multi-dimensional arrays:
ones(1,1,4,5)
true(3,3,3,3,1,1,1)
zeros(3,4,3,4,1,1)
## Size
• The size of an array can be obtained by the function size().
• For a scalar, its size is the vector [1 1].
• For a row vector, its size is [1 s], where s is the vector's length.
• For a column vector, its size is [s 1], where s is the vector's length.
• For a matrix, its size is [s1 s2], where s1 and s2 are the lengths of its 1st and 2nd dimensions, respectively.
• For a multi-dimensional array, its size is the row vector of elements s1, s2, ..., sn, representing the lengths of the array's dimensions. Here n is greater than 2, and is the highest dimension with length greater than 1.
## Number of Dimensions
• The number of dimensions of an array is the length of its size vector.
## Singleton Dimension
• If a dimension of an array has length 1, it is called a singleton dimension.
• A non-singleton dimension is a dimension of length > 1.
• The first non-singleton dimension is a non-singleton dimension with the smallest dimension number.
• For example, the first dimension of ones(1,1,3,4) is singleton, while the first non-singleton dimension is the third, of length 3.
## Argument Size
• Some built-in functions and operators perform elementwise operations on their input arguments, for example, +, &, expcdf(), and times().
• For these functions and operators, the input arguments are expected to have the same size. For example, in times(A, B), A and B are expected to have the same size.
• However, if an argument is a scalar whereas the other argument is an array, the scalar would be expanded to match the size of the array.
• For example, each of the following statements performs an elementwise operation on a scalar constant and a randomly generated 3-by-3 array:
1 + rand(3)
0.8 & rand(3)
poissinv(0.2, randi(3,3))
• In the first statement above, 1 would be expanded into a 3-by-3 array of all 1s before added to rand(3).
• For some functions, if two arguments are vectors of the same lengths, they need not be both row vectors or both column vectors. In fact, one of them can be a row vector and the other a column vector. | 2018-12-14 01:56:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7293351888656616, "perplexity": 957.6030283673757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825123.5/warc/CC-MAIN-20181214001053-20181214022553-00082.warc.gz"} |
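The size-matching and scalar-expansion rules above can be sketched in plain Python (an illustration of the semantics only; the document's own examples use MATLAB-style syntax, and real interpreters do not implement expansion this way):

```python
from operator import add, mul

def elementwise(op, a, b):
    """Apply op element-by-element to 2-D arrays (lists of lists),
    expanding a scalar operand to the size of the other argument."""
    if not isinstance(a, list):               # scalar on the left: expand it
        a = [[a] * len(row) for row in b]
    if not isinstance(b, list):               # scalar on the right: expand it
        b = [[b] * len(row) for row in a]
    # Non-scalar operands are expected to have the same size.
    if len(a) != len(b) or any(len(r) != len(s) for r, s in zip(a, b)):
        raise ValueError("operands must have the same size")
    return [[op(x, y) for x, y in zip(r, s)] for r, s in zip(a, b)]

# As in "1 + rand(3)": the scalar 1 is expanded to match the array.
print(elementwise(add, 1, [[1, 2], [3, 4]]))   # [[2, 3], [4, 5]]
print(elementwise(mul, [[1, 2]], [[10, 20]]))  # [[10, 40]]
```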
http://math.stackexchange.com/questions/273077/why-limit-eulers-partition-function-p-to-k-leq-sqrt-n-instead-of-k-leq-n | # Why limit Euler's Partition function P to $k\leq\sqrt n$ instead of $k\leq n$?
I solved a Project Euler problem (I won't say which one) involving the Partition Function P.
I used equation #11 from the above link:
$$P(n) = \sum_{k=1}^n (-1)^{k+1}\bigg(P\Big(n-{1\over 2}k(3k-1)\Big)+P\Big(n-{1\over 2}k(3k+1)\Big)\bigg)$$
As I was looking through the answer thread for the problem, I saw one person's comment that said:
[My algorithm] looks almost identical to [another user]'s, except my loop runs while $k \leq \sqrt n$ instead of $k \leq n$.
When $k > \sqrt n$, the values of $n_1$ and $n_2$ will both always be less than $0$.
-> $P(n_1)$ and $P(n_2)$ will always result in $0$
-> the value of $Pn$ will not change any more
It was established in an earlier post that:
• $n_1 = n-{1\over 2}(3k-1)$
• $n_2 = n-{1\over 2}(3k+1)$.
• $Pn$ is the running sum of all the calls to $P(n_1) + P(n_2)$.
My question is, why will the values of $n_1$ and $n_2$ be less than $0$ when $k \gt \sqrt n$?
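For concreteness, here is a memoized Python sketch of the recurrence (not any specific poster's code). Note that equation 11 carries a factor of $k$ inside the offsets, $n_1 = n-{1\over 2}k(3k-1)$ and $n_2 = n-{1\over 2}k(3k+1)$; since ${1\over 2}k(3k-1)\ge k^2$ for $k\ge 1$, both offsets are negative as soon as $k > \sqrt n$, which is exactly why the loop can stop there:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n):
    """Partition function via the pentagonal-number recurrence.

    n1 = n - k(3k-1)/2 and n2 = n - k(3k+1)/2; since k(3k-1)/2 >= k^2
    for k >= 1, both go negative once k > sqrt(n), so the loop is O(sqrt n).
    """
    if n < 0:
        return 0
    if n == 0:
        return 1
    total, k = 0, 1
    while True:
        n1 = n - k * (3 * k - 1) // 2
        n2 = n - k * (3 * k + 1) // 2
        if n1 < 0 and n2 < 0:          # k > sqrt(n): no further terms contribute
            break
        total += (-1) ** (k + 1) * (P(n1) + P(n2))
        k += 1
    return total

print([P(n) for n in range(1, 11)])  # [1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```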
-
Question: if $n < 0$, is $P(n) < 0$ by definition? If so then I think you can prove by induction. If not, then perhaps they mean that $n_1 = n_2 = 0$ when $k > \sqrt{n}$, which based on my proof sketch I believe you can also prove by induction. This is all assuming that $n_1$ and $n_2$ are indeed defined as you wrote. – august Jan 8 '13 at 21:12
@august Shoot... My mistake. I did define $n_1$ and $n_2$ incorrectly. Even so, if the summation goes from $k=1$ to $n$, and if $n$ is less than $1$, then, as he said, the summation will not change any more... Right? – Matthew D Jan 8 '13 at 21:37
I'm not exactly sure what you're asking here. In what scenario would $n$ be less than 1? Then there is no summation, and $P(n) = 0$ by definition. In any case, in your updated definition of $n_1$ and $n_2$, they will not always be less than 0 when $k > \sqrt{n}$, and I think it is easy to see that just from plugging in a few numbers for $k$ and $n$ as counterexamples. – august Jan 9 '13 at 20:34 | 2014-07-22 16:10:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9237893223762512, "perplexity": 232.40877743371766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997859240.8/warc/CC-MAIN-20140722025739-00199-ip-10-33-131-23.ec2.internal.warc.gz"} |
https://www.futurelearn.com/info/courses/how-to-start-run-your-own-seo-business-from-home-sc/0/steps/165588 | 0.3
35.6
And the quality is typically defined by what we call buyer intent. Which is a measure of how interested someone is in purchasing from you. So if someone sends you a message and they say, “Hey, I’m looking to hire someone to help improve my SEO.” Then this is going to be high buyer intent, as a person knows what they’re looking for, which is SEO and they also have some money to play with, hence when they said, “I’m looking to hire someone.” so in this example, this would be high buyer intent. In the second example, this would be medium to high buyer intent. If someone says to you, “How can you help me?”
69.8
So, you can see the buyer intent behind this isn’t as high as the first sentence, as the person isn’t really sure exactly how you can help them. However, from the sentence it’s clear that they are open to ideas, so this could potentially lead to a sale. So this one we would call medium to high buyer intent. So for the third example of “How much is it?” I would classify this one as medium buyer intent. The reason why I would classify this one as medium and not high, is because the person is solely focusing on the price and not the value you can deliver.
98.6
So as a result, you’re not really sure if they’re going to be a confident buyer and if they’re really that serious. The deciding factor for them could simply be the price and not actually growing their business. So in the fourth example, we have someone saying, “What is it you do?” So this is what I would classify as a low buyer intent, as the person isn’t even sure what we do in the first place. Which means they don’t even have a good understanding about SEO, how it all works and so on. So in this case, this would be classified as low buyer intent.
126.2
And then lastly, we have zero buyer intent, where someone literally replies to you and says, "Not interested, please don't contact me again." And so on and so on. So these people are a complete waste of your time. They are never going to be clients, so don't even bother trying to reply to them and try and convince them. Just save your energy and focus your energy on where it matters, which is the high and medium buyer intents. So when it comes to leads, there should only be one main goal. Our main objective is to get the lead on the phone with us.
155.6
So many people get this wrong and they try and sell to the lead straight away, which is just asking for problems. The best thing you can do to land the client, is to try and get them on the phone and figure out exactly what they’re struggling with. However, what we need to do first is to get the lead on the phone. So let’s look at an overview of the whole sales process. So first, we have the lead. And then the next goal is to get them on the phone, which is what we actually call a discovery call.
180
The discovery call is basically where we find out all the information about the client, their goals, how many enquiries they like to generate and so on and so on, but don’t worry about that bit for now, as in the next video I’m going to walk you through all the questions you should be asking to find out all the information you need. But for now, what we need to know is that getting them on the phone is super important. Now, not every lead you message will happily jump on the phone with you. Some of them won’t even reply to your messages. So as a result, you may need to send follow up messages.
208.4
Once you do eventually speak to that lead and you get them on the phone for a discovery call, the next step and final step is to present your proposal, which is basically where you pitch your services. So now we understand the sales process, it’s time to discuss how we can get the lead on the phone for the actual discovery call. So to get the lead on the phone, you can either schedule a time with them through email by suggesting a couple of times and dates, schedule it in and seeing what’s a good fit for them.
233.8
It’s a bit more manual and it may take two or three messages or emails to get a schedule, but that’s fine and it works really well and a lot of people like this method. Or we can speed things up and use a service like Calendly, which automatically checks your calendar, sees when you’re available and lets them schedule directly into your calendar. This is a huge time saver for both of you. Now not everyone understands how this works, so if you’re working with younger people who are really tech savvy, then it shouldn’t be an issue at all. However, do just bear that in mind, it does involve using a service.
264.1
So, if you are talking to a client who operates in an old-fashioned industry, then you might want to play it safe and just schedule the call manually over email. The discovery call should last for roughly 20 minutes. 20 minutes is sufficient time to run through everything, listen to the client’s needs and get all the answers you need later on for the proposal. So here is the exact template that I use to send to clients when I need to schedule in calls. So my email template is very straightforward. It says, “Hi Sam, do you have 15 minutes “in the next few days to jump on the phone “and discuss exactly what you’re looking to achieve?
296.8
“Once we know this we’ll be able to let you know “if we’re a good fit to work together. “How’s Friday at 10:30AM? “Otherwise you can schedule anytime directly here. “Kind regards, Joshua.” So as you can see my email template is providing tons of value. I’m just trying to get them on the phone. I’m asking if they’re free and the reason why we need to jump on the phone, is because I need to figure out exactly what they’re looking to achieve. If I don’t know what they’re looking to achieve, then I can’t actually help them.
321.4
So most businesses will jump to get on the phone to you, just so they can tell you about their business, how amazing it is, what they’re struggling with, it’s all about them. From a selfish point of view, people love talking about themselves. This is why the whole email I send is focused on them and not me. And this is exactly how it should be. And another good thing about this email, is it doesn’t come across as I’m desperate at all. As my second sentence says, “Once we know this we’ll be able to let you know “if we’re a good fit to work together.” So as you can see, I’m not really applying pressure at all.
351.8
And I’m kind of letting the client know that, hey, just because we’re jumping on the phone together, it doesn’t mean we’re gonna work together. We need to see if we’re a good fit first. Which is the right angle you want to go about this, as you don’t want to seem desperate. And lastly, the last line says, “Otherwise you can schedule any time directly here.” And then it has a link to my calendar. So if someone clicks that link it’s going to open up a calendar page, which looks like this. And essentially they can book any time into my calendar. For example, this is my calendar for the next few days. You can click any dates.
381
For example, Friday the first, you can go for Monday the fourth, Tuesday the fifth, so on. Give that a click and it will tell you all the times I’m free based on what’s booked out in my calendar. And clients can then go through and pick a time which suits them. So as you can see, it makes the whole booking process so much easier. However, you don’t have to use this. And when I first started my SEO business, I actually did everything manually. But essentially, the main goal is to get the person on the phone. I thought it would be useful to include a section on this. So how to respond to leads that aren’t high buyer intent?
412.6
And knowing how to reply to these leads is going to help you a lot. I remember when I first started my SEO business, I literally had no idea what to say back to these people. And as a result, I was throwing tons of leads down the drain, which actually resulted in my agency growing slower than it actually should’ve been. So if someone sends you a message and they say, “How much is it?” I would respond saying this. “Honestly, I couldn’t give you a price “without specifically knowing what you’ll need. “Some companies don’t blink at investing 5K per month “as they know they’ll make 10 times that back “every single month.
“It entirely depends on your objectives, but either way, if we work with you it's not a cost, because we won't work with you if we're not confident we'll get you a good return on your investment. With that said, just so we don't waste each other's time, we have a minimum budget of $500 per month. If you would like a specific price and to see if we can help you, then we can schedule you in for a call this week?” So as you can see, I'm not really giving them an answer. I'm just trying to explain to them that the price varies a lot. However, we need to get on the phone just to figure out exactly what it is you need, and then we can quote you accordingly. However, I do qualify them in that email as well, just to say that our prices start from $500 per month, as if the client only has $100 or $50 per month, then I don't want to waste my time getting on the phone. So focus your efforts on the clients that have budgets, as those are the ones who are going to pay you. Here's another example of a lead that isn't high buyer intent. So someone can reach out to you and say, “How can you help me?”
Get them on the phone, as selling over the phone is way more effective than selling over email. In the next video, I'm going to go through all the questions and things you should be covering with the client once you're on the phone with them. I'll see you there.
Converting your leads into phone calls is important, as it's easier to sell over the phone than it is to sell over email.
Leads can come from any channel, whether that's inbound, cold email, Upwork, etc. | 2023-03-21 12:09:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28783488273620605, "perplexity": 574.8220431002139}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943695.23/warc/CC-MAIN-20230321095704-20230321125704-00216.warc.gz"} |
https://socratic.org/questions/how-do-you-solve-x-2x-3-3x-50 | # How do you solve x(2x + 3) = 3x + 50?
Apr 9, 2016
$x = \pm 5$
#### Explanation:
$x \left(2 x + 3\right) = 3 x + 50$
$\rightarrow 2 {x}^{2} + \cancel{3 x} = \cancel{3 x} + 50$
$\rightarrow {x}^{2} = 25$
$\rightarrow x = \pm 5$ | 2019-12-12 11:00:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9913651347160339, "perplexity": 7144.801595492508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540543252.46/warc/CC-MAIN-20191212102302-20191212130302-00341.warc.gz"} |
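The cancellation argument above can be checked by direct substitution (a quick sketch of my own, not part of the Socratic answer):

```python
# Check both candidate roots of x(2x + 3) = 3x + 50 by direct substitution.
def lhs(x):
    return x * (2 * x + 3)

def rhs(x):
    return 3 * x + 50

for root in (-5, 5):
    assert lhs(root) == rhs(root)  # both sides equal 35 at x = -5 and 65 at x = 5
print("x = ±5 verified")
```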
http://daczyszyn.laszczow.pl/334ph/37655b-pbo-molar-mass | Element Symbol Atomic Mass Number of Atoms Mass Percent; Plumbum: Pb: 207.2: 1: 86.623%: Oxygenium: O: 15.9994: 2: 13.378%: Notes on using the Molar Mass Calculator. ... + 1/2 O2(g) → PbO(s). PDF | Glasses based on SiO2-PbO-CdO-Ga2O3 system have been studied for the first time for fabrication of mid-infrared optical elements. 500. 2 + 7 PbO 2 + HNO 3 ⇒2 HMnO 4 + 2 Cl 2 + 7 Pb ... im Nenner das Vielfache der molaren Masse der Verbindunglaut Reak-tionsgleichung enthält. Similarly, the number of moles of oxygen can be calculated by substituting the respective values in equation (I). However, results show slightly decrease from 24.5709 to 24.3634 m 3 mol … Convert grams PbO to moles or moles PbO to grams, Molecular weight calculation: Gamma-ray attenuation coefficients of the Calculate the mass in grams of 8.0 mol lead oxide (PbO) 1785.6 g/mol. Peso computação molecular (massa molecular) 2. The mass of a mole of NaCl is the ____. Example Reactions: • 2NaNO3 + PbO = Pb(NO3)2 + Na2O. The influence of additions of excess PbO to Pb(Mg1/3Nb2/3)O3–35 mol% PbTiO3 (PMN–35PT) on {111} single-crystal growth by seeded polycrystal conversion was studied in the range of 0–5 vol% PbO. Molecular mass (molecular weight) is the mass of one molecule of a substance and is expressed in the unified atomic mass units (u). What is molar mass. ››More information on molar mass and molecular weight. PbO volatilization and hence weight loss during annealing Browse the list of Mass PbO = 8.00 mol x 223.20 g/mol =1785.6 g molar mass and molecular weight. Give the reaction: 2pbs(s)+3o2(g)→2pbo(s)+2so2(g) If the formula used in calculating molar mass is the molecular formula, the formula weight computed is the molecular weight. The converter uses simple … Molare Masse of PBO(l) is 57.7842 g/mol Berechnen Sie das Gewicht von PBO(l) oder Mol. 
This is how to calculate molar mass (average molecular weight), which is based on isotropically weighted averages. How many ... What is PbO. A common request on this site is to convert grams to moles. TutorsOnSpot.com. (1 u is equal to 1/12 the mass of one atom of carbon-12) Molar mass (molar weight) is the mass of one mole of a substance and is expressed in g/mol. Example Reactions: • PbS + 4 H2O2 = PbSO4 ↓ + 4 H2O. Do the same for mols NaCl to mols phosgenite. Four kinds of synthesized P (S- co -BCB- co -MMA) with different number-average molar mass (Mn) were well controlled and possessed narrow dispersity. The atomic weights used on this site come from NIST, the National Institute of Standards and Technology. from 23.9757 cm 3 mol-1 to 24.5709 cm 3 mol-1 with the gradual i ncrease of the PbO content in the borate glasses. Solution: A To determine the energy released by the combustion of palmitic acid, we need to calculate its $$ΔH^ο_f$$. Browse the list of 2 PbO 2 (s) → 2 PbO(s) + O 2 (g) (PbO 2 = 239.2, O 2 = 32.00) 49 Density at Standard Conditions • Density is the ratio of mass to volume • Density of a gas is generally given in g/L • The mass of 1 mole = molar mass • The volume of 1 mole at STP = 22.4 L 51 Calculate the mass in grams of each of the following: a. In all cases, the radioactivity was rapidly excreted with 87-99% being found in the 0-48-hr excreta and the majority of the dose (64.1-85.0%) being eliminated in feces. Mass PbO … PBO Using the coefficients in the balanced equation, convert mols PbO to mols phosgenite. Die so gebildeten Brüche sind gleichzusetzen und die jeweils unbekannten Größen zu berechnen. Molecular weight: 223.19: Form: solid: Appearance: yellow to orange powder: Sensitivity: heat: Melting point: 600C: Boiling point: 600C: Molecular formula: PbO: Linear formula: PbO: Download Specification PB7368. Mol Pb in 451.4g = 451.4/207.19 = 2.179mol . The reason is that the molar mass of the substance affects the conversion. 
The molar mass of PbO is 223.2 g/mol. This will produce 2.179 mol PbO . Compare this value with the value calculated in Equation $$\ref{7.8.8}$$ for the combustion of glucose to determine which is the better fuel. When calculating molecular weight of a chemical compound, it tells us how many grams are in one mole of that substance. The density (ρ), molar volume (V m) and the OPD for the present glass system are collected in Table 1.Density of the glasses studied increased from 4930 to 6231 (kg/m 3), while molar volume decreased from 32.37 to 28.67 (cm 3 /mol) as shown in Fig. Unbalanced equation = Pb(NO₃)₂ → PbO + NO₂ +O₂ ... A sample with a molar mass of 34.00 g is found to consist of 0.44g H and 6.92g O. Give the name and molar mass of the foilowing ionic compounds: Name 1) NazCOg sodium carbonate 2) NaOH sodium hydroxide 3) MgBr2 magnesium bromide 4) KCI potassium chloride 5) FeClz iron {il} chtoride 6) FeCls iron {lll} chtoride 7) Zn(OH)2 xinc hydroxide B) BezSO+ beryllium sulfate 9) CrFz chromium (lli fluoride 10) Al2S3 aluminum sulfide 11) PbO lead ill) oxide molar mass: PbO 2 HNO 3 ammonium phosphate percentage composition: the mass % of each element in a compound x 100 molar mass of compound g element % of element Find % composition. Lead(II) Oxide. Divide this value by the molar mass of palmitic acid to find the energy released from the combustion of 1 g of palmitic acid. Molar Mass of Frequently Calculated Chemicals: (C2H5)2O Ether (NH4)2C2O4 Ammonium Oxalate (NH4)2CO3 Ammonium Carbonate (NH4)2CrO4 Ammonium Chromate (NH4)2HPO4 Di-Ammonium Phosphate (NH4)2S Ammonium Sulfide (NH4)2SO4 Ammonium Sulfate (NH4)3PO3 Ammonium Phosphite (NH4)3PO4 Ammonium Phosphate Ag2O Silver(I) Oxide Ag2S Silver Sulfide Ag2SO4 Silver … TutorsOnSpot.com. molar mass and molecular weight. common chemical compounds. 
Molar mass calculator also displays common compound name, Hill formula, elemental composition, mass percent composition, atomic percent compositions and allows to convert from weight to number of moles and vice versa. 2 Finding an Empirical Formula from Experimental Data 1. The exact mass and the monoisotopic mass of Lead dioxide is 239.966 g/mol. Composition Zylon® PBO is a rigid-rod isotropic crystal polymer that is spun by a dry-jet wet spinning process. For bulk stoichiometric calculations, we are usually determining molar mass, which may also be called standard atomic weight or average atomic mass. Convert grams PBO2 to moles or moles PBO2 to grams, Molecular weight calculation: This is not the same as molecular mass, which is the mass of a single molecule of well-defined isotopes. PbO2. Molar Mass, Molecular Weight and Elemental Composition Calculator Enter a chemical formula to calculate its molar mass and elemental composition: Molar mass of PbO2 is 239.1988 g/mol Density of the glasses studied increased from 4930 to 6231 (kg/m 3), while molar volume decreased from 32.37 to 28.67 (cm 3 /mol) as shown in Fig. mols = grams/molar mass Do the same for NaCl. The glass density increase may be due to the high PbO molecular weight (223.1994) which is more than that of TeO 2 (159.6) and hence, the Molar mass PbO = 223.20g/mol . Search results for PbO at Sigma-Aldrich. 4. Did you mean to find the molecular weight of one of these similar formulas? Schmelzpunkt: 500 °C. Molecular weight calculation: 207.2 + 15.9994 ›› Percent composition by element Formula: PbO Molar Mass: 223.199 g/mol 1g=4.48030681141044E-03 mol Percent composition (by mass): Element Count Atom Mass %(by mass) Pb 1 207.2 92.83% O 1 15.999 7.17% Give the reaction: 2pbs(s)+3o2(g)→2pbo(s)+2so2(g) check_circle Expert Answer. Identify the limiting reagent and determine the maximum mass of lead (molar mass 207.2 g/mol) that can be obtained by the reaction. 8.0 mol lead oxide (PbO) b. 
Ereztech manufactures and sells this product in small and bulk volumes. Calculate the molecular weight 1. Die in den Aufgaben benutzte Angabe »Prozent« entspricht nach den Aus-führungen DIN 1310.6 dem »Massenanteil in Prozent«. A sample of a compound analyzed in a chemistry laboratory consists of 5.34 g of carbon, 0.42 g of hydrogen, and 47.08 g of chlorine. Asked By adminstaff @ 11/08/2019 … 2PbO(s) + PbS(s) → 3Pb(s) + SO 2 (g) A) PbS limiting; 30.65 g Pb obtained. PBO2 Convert grams PbO to mols. Balance the reaction of Pb(NO3)2 = PbO + NO2 + O2 using this chemical equation balancer! cm-3: Schmelzpunkt: 500 °C: Siedepunkt: Nicht anwendbar, da Zersetzung Dampfdruck: Nicht anwendbar Löslichkeit: praktisch unlöslich in Wasser, löslich in heißer, konz. Cu: 1 What is the molar mass for: Cu: 63.55 (rounded to 2 decimal places) Molar mass for Cu = 63.55g/mol Copper’s molar mass is 63.546 g/mol. • Pb (NO3)2 + Na2S = PbS + 2 NaNO3. Molar mass: 223.2 17.02 207.2 28.0 18.02 3 PbO (s) + 2 NH 3 (g) → 3 Pb(s) + N 2 (g) + 3 H 2 O (l) a) How many grams of NH 3 are consumed in a reaction with 75.0 g PbO? Given the reaction 2PbS(s) + 3O2(g) ®… 3.4 Thermal stability of 6FPBO fibers TGA curves of 6FPBO and PBO fibers in nitrogen at-mospheres were shown in Figure 3. This is how to calculate molar mass (average molecular weight), which is based on isotropically weighted averages. The formula weight is simply the weight in atomic mass units of all the atoms in a given formula. Modern applications for PbO are mostly in lead-based industrial glass and industrial ceramics, including computer components. Formula weights are especially useful in determining the relative weights of reagents and products in a chemical reaction. Calculate the mass in grams of 1.50 x 10^-2 mol molecular oxygen (O2) .48 g/mol. Lead(II) oxide, also called lead monoxide, is the inorganic compound with the molecular formula PbO. Calculate the molecular weight Want to see this answer and more? 
*Please select more than one item to compare 1 mol Pb produces 1 mol PbO . Order Your Homework Today! The percentage by weight of any atom or group of atoms in a compound can be computed by dividing the total weight of the atom (or group of atoms) in the formula by the formula weight and multiplying by 100. PbO occurs in two polymorphs: litharge having a tetragonal crystal structure, and massicot having an orthorhombic crystal structure. The atomic weights used on this site come from NIST, the National Institute of Standards and Technology. Give the name and molar mass of the foilowing ionic compounds: Name 1) NazCOg sodium carbonate 2) NaOH sodium hydroxide 3) MgBr2 magnesium bromide 4) KCI potassium chloride 5) FeClz iron {il} chtoride 6) FeCls iron {lll} chtoride 7) Zn(OH)2 xinc hydroxide B) BezSO+ beryllium sulfate 9) CrFz chromium (lli fluoride 10) Al2S3 aluminum sulfide 11) PbO lead ill) oxide Chemical formulas are case-sensitive. If the molecular mass of the compound is 60.0 g/mol, what is the molecular formula? I T Giftig. A compound is contains 46.7% nitrogen and 53.3% oxygen. Molar Mass: 239.265. Related Questions in Chemistry. 
| 2021-04-16 08:22:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5923146605491638, "perplexity": 9210.869896940616}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088731.42/warc/CC-MAIN-20210416065116-20210416095116-00594.warc.gz"} |
http://www.mathnet.ru/php/contents.phtml?wshow=issue&jrnid=im&year=2010&volume=74&issue=2&series=0&option_lang=eng | Izv. RAN. Ser. Mat., 2010, Volume 74, Issue 2
Valerii Vasil'evich Kozlov (congratulation)
Kolmogorov inequalities for functions in classes $W^rH^\omega$ with bounded $\mathbb L_p$-norm (S. K. Bagdasarov), p. 5
One-dimensional Fibonacci tilings and induced two-colour rotations of the circle (V. G. Zhuravlev), p. 65
Stabilization of solutions of pseudo-differential parabolic equations in unbounded domains (L. M. Kozhevnikova), p. 109
On the topological stability of continuous functions in certain spaces related to Fourier series (V. V. Lebedev), p. 131
Homogenization of a mixed boundary-value problem in a domain with anisotropic fractal perforation (S. A. Nazarov, A. S. Slutskii), p. 165
Rationality of the Poincaré series in Arnold's local problems of analysis (R. A. Sarkisyan), p. 195 | 2019-04-20 07:59:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1872309297323227, "perplexity": 12340.291852201497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578528702.42/warc/CC-MAIN-20190420060931-20190420082843-00030.warc.gz"} |
https://mathoverflow.net/questions/271702/about-gpy-conjecture-generalization | # About GPY conjecture generalization?
This post, Density of prime pairs whose gap is less than the average gap, says that it is conjectured (https://arxiv.org/abs/1103.5886) that (where $\#$ denotes the cardinality of a set) $$\#\{p_n<x \mid \alpha < \frac{p_{n+1} - p_n}{\log(p_n)}<\beta \} \sim \pi(x) \int_{\alpha}^{\beta} \frac{1}{e^t}dt$$ for $0 \leq \alpha < \beta$. Let $p_n$ denote the $n$-th prime number. Define the multiset $$A_1(p_N) = \{g_n = p_{n+1} - p_n \mid p_{n+1} < p_N\}$$ We use the following notation: $|A_1(p_N)|$ is the number of elements in $A_1(p_N)$ counted with multiplicity and $\sum A_1(p_N)$ is the sum thereof. It is known from the prime number theorem that $|A_1(p_N)| \sim \frac{p_N}{\log(p_N)}$ and obviously $\sum A_1(p_N) \sim p_N$. Define $$A_2(p_N) = \left\{g_n \in A_1(p_N) \mid g_n < \frac{\sum A_1(p_n)}{|A_1(p_n)|}\right\}$$ The quantity on the right of the inequality satisfies $\frac{\sum A_1(p_n)}{|A_1(p_n)|} \sim \log(p_n)$. It is conjectured (I think GPY applies here too) that $|A_2(p_N)| \sim N \int_{0}^1 \frac{1}{e^t}dt = c_2\cdot |A_1(p_N)|$ where $c_2$ is a strictly positive constant. Furthermore, define in a similar fashion: $$A_3(p_N) = \left\{g_n \in A_{2}(p_N) \mid g_n < \frac{\sum A_{2}(p_n)}{|A_{2}(p_n)|}\right\}$$ and the process can of course be continued.
Can someone estimate $\frac{\sum A_2(p_N)}{|A_2(p_N)|}$? Is it $\sim \log(p_N)$?
Also one can see that $$|A_1(p_N)| \sim \int_1^N \left( \frac{1}{\log(x)} - \frac{1}{\log^2(x)}\right)dx \hspace{0.5cm} |A_2(p_N)| \sim \int_{\frac{N}{e}}^N \left( \frac{1}{\log(x)} - \frac{1}{\log^2(x)}\right)dx$$ Then, intuitively, what should the lower limit be in $|A_k(p_N)| \sim \int_{?}^{N} \left( \frac{1}{\log(x)} - \frac{1}{\log^2(x)}\right)dx$? And intuitively, where does the $e$ in that integral come from?
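A quick empirical probe of the conjectured density is easy to run (this numerical experiment is my own addition, not part of the question): sieve the primes up to some bound and compare the fraction of gaps with $g_n < \log(p_n)$ to the conjectured limit $\int_0^1 e^{-t}\,dt = 1 - 1/e \approx 0.632$.

```python
# Empirically estimate the fraction of prime gaps below the average gap log(p_n).
import math

def primes_up_to(limit):
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_up_to(10**6)
below = sum(1 for p, q in zip(primes, primes[1:]) if q - p < math.log(p))
frac = below / (len(primes) - 1)
print(f"fraction of gaps < log(p_n): {frac:.3f}  (conjectured limit 1 - 1/e = {1 - 1/math.e:.3f})")
```

At such a small bound the empirical fraction only roughly approaches the conjectured constant; convergence in $x$ is slow.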
• Can somebody explain the down votes? Why no explanation? – C Marius Jun 8 '17 at 14:04
• Do you consider $A_{1}(p_{N})$ as a set or as a multiset? – Sylvain JULIEN Jun 8 '17 at 16:04
• $A_1(p_N)$ is a set $A_1(p_M)$ another set – C Marius Jun 8 '17 at 16:06
• So if $g$ is a given prime gap that appears several times, you only count it once? – Sylvain JULIEN Jun 8 '17 at 16:13
• No, it is counted every time it appears ... please edit the post to make a better exposition... – C Marius Jun 8 '17 at 16:17 | 2019-03-21 16:35:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.815141499042511, "perplexity": 438.3704039959029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202526.24/warc/CC-MAIN-20190321152638-20190321174638-00152.warc.gz"} |
https://researchseminars.org/talk/agstanford/24/ | # Square root Euler classes and counting sheaves on Calabi-Yau 4-folds
### Richard Thomas (Imperial College London)
Fri Sep 25, 19:00-20:00 (3 days from now) Login required for livestream access
Abstract: I will explain a nice characteristic class of $SO(2n,\mathbf{C})$ bundles in both Chow cohomology and K-theory, and how to localise it to the zeros of an isotropic section. This builds on work of Edidin-Graham, Polishchuk-Vaintrob, Anderson and many others.
This can be used to construct an algebraic virtual cycle (and virtual structure sheaf) on moduli spaces of stable sheaves on Calabi-Yau 4-folds. It recovers the real derived differential geometry virtual cycle of Borisov-Joyce but has nicer properties, like a torus localisation formula. Joint work with Jeongseok Oh (KIAS).
algebraic geometry
Audience: researchers in the topic
Comments: The discussion for Richard Thomas’s talk is taking place not in zoom-chat, but at tinyurl.com/2020-09-25-rt (and will be deleted after 3-7 days).
Series comments: This seminar requires both advance registration, and a password. If you have registered once, you are always registered. Register at stanford.zoom.us/meeting/register/tJEvcOuprz8vHtbL2_TTgZzr-_UhGvnr1EGv Password: 362880
More seminar information (including slides and videos, when available): agstanford.com
Organizer: Ravi Vakil* *contact for this listing
| 2020-09-22 20:04:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5201232433319092, "perplexity": 4986.10964358562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206763.24/warc/CC-MAIN-20200922192512-20200922222512-00388.warc.gz"} |
http://www.chegg.com/homework-help/definitions/infinite-limits-29 | # Definition of Infinite Limits
Infinite limits are those that have a value of ±∞, where the function grows without bound as it approaches some value a. For f(x), as x approaches a, the infinite limit is shown as $\lim_{x \to a} f(x) = \infty$. If a function has an infinite limit at $x = a$, it has a vertical asymptote there. For two functions f(x) and g(x), where $\lim_{x \to a} f(x) = \infty$ and $\lim_{x \to a} g(x) = L$, and c and L are real numbers: the limit of the sum or difference of the functions is ∞: $\lim_{x \to a} [f(x) \pm g(x)] = \infty$; if $L > 0$, then the product of their limits is ∞: $\lim_{x \to a} [f(x)g(x)] = \infty$; if $L < 0$, then the product of their limits is −∞: $\lim_{x \to a} [f(x)g(x)] = -\infty$; and the limit of the quotient of g and f is 0: $\lim_{x \to a} \frac{g(x)}{f(x)} = 0$.
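As a numeric illustration of these rules (my own example, not from the original page): take $f(x) = 1/(x-2)^2$, which has an infinite limit at $a = 2$, and $g(x) = x + 1$ with $\lim_{x\to 2} g(x) = L = 3 > 0$.

```python
# Numerically probe the infinite-limit rules near a = 2 with
# f(x) = 1/(x-2)^2  (f -> infinity) and g(x) = x + 1  (g -> L = 3 > 0).
def f(x):
    return 1.0 / (x - 2.0) ** 2

def g(x):
    return x + 1.0

for h in (1e-2, 1e-4, 1e-6):
    x = 2.0 + h
    print(f"h={h:.0e}  f+g={f(x) + g(x):.3e}  f*g={f(x) * g(x):.3e}  g/f={g(x) / f(x):.3e}")
# f+g and f*g blow up (sum and product rules with L > 0), while g/f -> 0 (quotient rule).
```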
| 2015-12-01 02:10:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9360764622688293, "perplexity": 382.159557684838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398464396.48/warc/CC-MAIN-20151124205424-00092-ip-10-71-132-137.ec2.internal.warc.gz"} |
http://mathhelpforum.com/advanced-statistics/168028-probability-balls-falling-bins-print.html | # probability of balls falling to bins
• January 10th 2011, 10:36 PM
asura
probability of balls falling to bins
Hi everyone, here is my question:
Suppose there are "b" balls and n bins, we like to put all balls into "n" bins and the probability that every ball falls into any bin is same(1/n).
Is there a simple equation/series of equations to calculate the probability that there are x bins (0<x<=n) with at least one ball in it?
I know this can be done by counting or permuting all possible outcomes but this is not feasible when b and n are large.
Please advise.
• January 11th 2011, 08:59 AM
snowtea
Number of ways to split b balls into n bins is:
$\binom{b+n-1}{b}$
Why? Think about this diagram ***|**||*** for dividing 8 balls into 4 bins as (3,2,0,3)
Number of ways to split b balls into n bins with at least one ball in each bin:
$\binom{b - 1}{b - n}$
Why? Think about this diagram o*|o**|o*|o** for dividing 10 balls into 4 bins as (2,3,2,3)
If you want to calculate exactly x bins filled, then this is easy
$\binom{n}{x}\binom{b - 1}{b - x}$
Why? Pick x bins and fill at least 1 balls in each bin.
If you want at least x bins filled, then it is probably easiest to do a summation:
$\sum_{k=x}^n \binom{n}{k}\binom{b-1}{b-k}$
• January 11th 2011, 05:50 PM
awkward
Snowtea,
The difficulty with counting the number of ways to distribute the balls in the bins and using this to compute the probability is that not all distributions are equally likely. For example, suppose there are only 2 balls and 2 bins. The distributions 2-0 and 0-2 each occur with probability 1/4, but the probability of 1-1 is 1/2.
This problem is the Coupon Collector's Problem, slightly disguised. See, for example,
Coupon collector's problem - Wikipedia, the free encyclopedia
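To make awkward's point concrete (this code is my own sketch, not from the thread): the correct probability that exactly $x$ bins are occupied weights each of the $n^b$ equally likely outcomes by $1/n^b$, counting surjections from the balls onto the chosen bins, rather than treating the stars-and-bars distributions as equally likely.

```python
# P(exactly x occupied bins) when b balls land uniformly in n bins:
# choose the x occupied bins, count surjections from balls onto them,
# and divide by the n^b equally likely outcomes.
from math import comb

def surjections(b, x):
    # Inclusion-exclusion count of onto maps from b balls to x bins.
    return sum((-1) ** j * comb(x, j) * (x - j) ** b for j in range(x + 1))

def p_exactly(n, b, x):
    return comb(n, x) * surjections(b, x) / n ** b

# awkward's 2-ball, 2-bin example: 2-0 and 0-2 have probability 1/4 each,
# while 1-1 has probability 1/2.
print(p_exactly(2, 2, 1), p_exactly(2, 2, 2))  # 0.5 0.5
```

Summing `p_exactly(n, b, x)` over all feasible `x` returns 1, since the surjection counts partition the $n^b$ outcomes.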
• January 20th 2011, 06:14 AM
asura
Quote:
Originally Posted by awkward
Snowtea,
The difficulty with counting the number of ways to distribute the balls in the bins and using this to compute the probability is that not all distributions are equally likely. For example, suppose there are only 2 balls and 2 bins. The distributions 2-0 and 0-2 each occur with probability 1/4, but the probability of 1-1 is 1/2.
This problem is the Coupon Collector's Problem, slightly disguised. See, for example,
Coupon collector's problem - Wikipedia, the free encyclopedia
Hi awkward,
I have difficulty mapping the ball-bin problem to the coupon collector's problem. Could you please elaborate?
• January 20th 2011, 02:02 PM
awkward
Suppose there are n different types of coupons, all equally likely, and you have b coupons. The problem you have posed is the same as determining the probability that you have x different types. Usually the question asked is if you have a complete set (x=n), so maybe this isn't exactly the classic Coupon Collector problem, but it's very close. | 2014-09-16 08:25:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7824087738990784, "perplexity": 466.384168297514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657114105.77/warc/CC-MAIN-20140914011154-00238-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
https://www.physicsforums.com/threads/what-is-the-potential-energy-of-this-group-of-electron-charges.217758/ | # Homework Help: What is the potential energy of this group of electron charges?
1. Feb 25, 2008
### yo_man
1. The problem statement, all variables and given/known data
Three electrons form an equilateral triangle 1.10 nm on each side. A proton is at the center of the triangle.
What is the potential energy of this group of charges?
2. Relevant equations
potential energy = U = kq1q2/r
3. The attempt at a solution
So, I was assuming that the three electrons, since they form an equilateral triangle, their charges will cancel out and I should just find the potential energy based on the positive charge.
so then I substituted q1 for the proton, but then since there is a q2, i also tried to use the charge of an electron.
also, I found the radius from the center proton of the triangle to the corner electron of the triangle to be 0.635 m
I'm just not sure what to do anymore I only have a few attempts left
2. Feb 25, 2008
?? meter ??
I will take it as 0.635 nm.
potential energy of the system = $$k\left[\frac{3e^2}{1.10\times 10^{-9}}-\frac{3e^2}{0.635\times 10^{-9}}\right]$$
We can make three electron-electron pairs and three proton-electron pairs.
Last edited: Feb 25, 2008
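Plugging numbers into that expression gives a quick numerical check (my own addition, not from the thread; the constants are standard values, and the 0.635 nm center-to-vertex distance is side/√3 for an equilateral triangle):

```python
# Potential energy of three electrons on an equilateral triangle (side 1.10 nm)
# with a proton at the center: 3 e-e pairs at 1.10 nm, 3 p-e pairs at side/sqrt(3).
import math

k = 8.9875e9                    # Coulomb constant, N m^2 / C^2
e = 1.602e-19                   # elementary charge, C
side = 1.10e-9                  # m
r_center = side / math.sqrt(3)  # ≈ 0.635 nm, proton-to-electron distance

U = k * (3 * e**2 / side - 3 * e**2 / r_center)
print(f"r_center = {r_center*1e9:.3f} nm, U = {U:.2e} J")  # U ≈ -4.6e-19 J
```

The attractive proton-electron pairs dominate because they are closer, so the total is negative.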
3. Sep 20, 2011
### Rusty Shackle
So what am I supposed to plug in for e? +1, -1 ??????? | 2018-06-23 08:38:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6074271202087402, "perplexity": 727.6265933245764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864953.36/warc/CC-MAIN-20180623074142-20180623094142-00626.warc.gz"} |
http://gmatclub.com/forum/the-price-of-lunch-for-15-people-was-207-00-including-a-131794.html |
# The price of lunch for 15 people was $207.00, including a 15 percent gratuity

Posted by enigma123, 02 May 2012, 14:55
The price of lunch for 15 people was $207.00, including a 15 percent gratuity for service. What was the average price per person, EXCLUDING the gratuity?

(A) $11.73
(B) $12.00
(C) $13.80
(D) $14.00
(E) $15.87
How come the answer is B?
This is how I solved it and got A, which is incorrect.
$207 includes 15% gratuity, so 0.15 * 207 = 31.05
Price EXCLUDING gratuity = 207 - 31.05 = 175.95
Average price per person = 175.95/15 = 11.73

Re: Price of lunch [#permalink] 02 May 2012, 20:23

GyanOne replied:
The mistake you have made is that you have calculated the gratuity on the final amount, instead of finding the amount on which gratuity was charged to get the final amount.
If the amount spent excluding the gratuity was x, then x(1.15) = 207
=> x = 207/1.15 = 180
Then the average price per person = 180/15 = 12
Option (B)

Re: Price of lunch [#permalink] 03 May 2012, 01:48

kashishh replied:
Gratuity is charged on the final amount, then is it not obvious to calculate it as such?
I just wish to have clarification on the two:
1.) 207 - 15% of 207 = 175.95, and 175.95/15 = 11.73
2.) if 115 corresponds to 207, then 100 corresponds to 180, and 180/15 = 12
Is it not the same thing?

Re: The price of lunch for 15 people was $207 including a 15% [#permalink] 03 May 2012, 06:51
enigma123 wrote:
(quote of the original question and solution attempt snipped)
I'll show you perhaps the easiest way to solve such a problem.
Since this is a percentage problem, take the price before the gratuity to be 100. The gratuity is calculated on that price, so it is 15% of 100 = 15, and the total comes to 115. The given amount, 207, corresponds to 115, and we need the value corresponding to 100:
for 115: 207
for 100: x
Cross-multiplying gives 115x = 100*207 => x = 100*207/115 = 180, which is the price of lunch before the gratuity (so the gratuity itself is 27).
Since the question asks for the average price per person excluding the gratuity: 180/15 = 12, so the answer is (B).
I hope this is concise and clear.
Math Expert
Re: Price of lunch [#permalink] 03 May 2012, 11:56
kashishh wrote:
(inner quote of GyanOne's solution snipped)
Gratuity is charged on the final amount, then is it not obvious to calculate it as such?
I just wish to have clarification on the two:
1.) 207 - 15% of 207 = 175.95, and 175.95/15 = 11.73
2.) if 115 corresponds to 207, then 100 corresponds to 180, and 180/15 = 12
Is it not the same thing?
Tip is charged on the bill. Consider this: the bill is $100 and tip is 15% means that total amount paid is 100+15=$115.
Bill*1.15 = $207 --> Bill = $180 --> per person = $180/15 = $12.

Answer: B.

The price of lunch for 15 people was $207 including a 15% [#permalink] 10 May 2012, 02:22
The solution is:
207 * 0.15 = 31.05 ==> 207 - 31.05 = 175.95 ==> 175.95 / 15 = 11.73
Re: The price of lunch for 15 people was $207 including a 15% [#permalink] 14 May 2012, 05:10

(quote of the original question snipped)

Hello, here is my approach. Let x be the total price of the 15 lunches WITHOUT the 15%:
1.15x = 207
x = 207/1.15 = 180
So 180 is for 15 people, and the average price per person is 180/15 = 12. Hence B. Hope this helps. Best regards.

The price of lunch for 15 people was $207.00, including a [#permalink] 25 Aug 2012, 11:33
Check the option:
12
12 * 15 = 180
180 + [18 + 9] (15% of 180) = 207
Re: The price of lunch for 15 people was $207 including a 15% [#permalink] 20 Feb 2013, 09:50

(quotes of the original question and an earlier explanation snipped)

Typical way to solve this. Nice.

Re: The price of lunch for 15 people was $207.00, including a 15 [#permalink] 20 Feb 2013, 13:14
let the price before gratuity = x
that means that (115/100)x = 207
Solving for x we get 180. 180/15 = 12. Answer B.
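The algebra in these replies is easy to check numerically; the snippet below is an illustrative check, not part of any post:

```python
# If x is the bill excluding the gratuity, then x * 1.15 = 207.
total = 207.00
x = total / 1.15          # bill excluding the 15% gratuity
per_person = x / 15
print(round(per_person, 2))   # about 12.0
```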
Re: The price of lunch for 15 people was $207.00, including a 15 [#permalink] 27 Feb 2013, 20:31

Can someone please explain how you compute 207/1.15 quickly, in under two minutes? I think my division needs help.

Math Expert
Re: The price of lunch for 15 people was $207.00, including a 15 [#permalink] 28 Feb 2013, 00:50
DelSingh wrote:
Can someone please explain how you compute 207/1.15 quickly, in under two minutes? I think my division needs help.
Transform the decimal into a fraction: 1.15 = 1 15/100 = 1 3/20 = 23/20.
Bill*1.15 = 207 --> Bill*(23/20) = 207 --> Bill*(23/20) = 9*23 --> Bill/20 = 9 --> Bill = 180.
Hope it helps.
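The fraction trick above can also be verified with exact rational arithmetic; this snippet is illustrative, not from the thread:

```python
from fractions import Fraction

# 1.15 = 23/20 exactly, so the bill is 207 divided by 23/20
bill = Fraction(207) / Fraction(23, 20)
per_person = bill / 15
print(bill, per_person)   # 180 12
```

Using `Fraction` avoids any floating-point rounding, which is why the results come out as exact integers.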
http://www.zora.uzh.ch/121423/ | # Tevatron constraints on models of the Higgs boson with exotic spin and parity using decays to bottom-antibottom quark pairs
Aaltonen, T A; Amerio, S; Amidei, D E; et al; Canelli, F; Kilminster, B; CDF Collaboration; D0 Collaboration (2015). Tevatron constraints on models of the Higgs boson with exotic spin and parity using decays to bottom-antibottom quark pairs. Physical Review Letters, 114:151802.
## Abstract
Combined constraints from the CDF and D0 Collaborations on models of the Higgs boson with exotic spin $J$ and parity $P$ are presented and compared with results obtained assuming the standard model value $J^P=0^+$. Both collaborations analyzed approximately 10 fb$^{-1}$ of proton-antiproton collisions with a center-of-mass energy of 1.96 TeV collected at the Fermilab Tevatron. Two models predicting exotic Higgs bosons with $J^P=0^-$ and $J^P=2^+$ are tested. The kinematic properties of exotic Higgs boson production in association with a vector boson differ from those predicted for the standard model Higgs boson. Upper limits at the 95% credibility level on the production rates of the exotic Higgs bosons, expressed as fractions of the standard model Higgs boson production rate, are set at 0.36 for both the $J^P=0^-$ hypothesis and the $J^P=2^+$ hypothesis. If the production rate times the branching ratio to a bottom-antibottom pair is the same as that predicted for the standard model Higgs boson, then the exotic bosons are excluded with significances of 5.0 standard deviations and 4.9 standard deviations for the $J^P=0^-$ and $J^P=2^+$ hypotheses, respectively.
## Additional indexing
Item Type: Journal Article, refereed, original work
07 Faculty of Science > Physics Institute
530 Physics
English
3 February 2015
Deposited: 12 Feb 2016 16:46
Last modified: 05 Apr 2016 20:03
American Physical Society
ISSN 0031-9007
https://doi.org/10.1103/PhysRevLett.114.151802
arXiv:1502.00967v2
Permanent URL: https://doi.org/10.5167/uzh-121423
https://www.gregschool.org/cosmology/2017/5/14/cosmic-microwave-background-radiation | The cooler regions (bluer tint in Figure 3) correspond to collected (by our detectors) photons with less energy and longer wavelengths. The nearby region surrounding the electron that a “cold” photon last scattered off of has more matter present than the nearby regions surrounding the electron that a “hot” photon last scattered off of. The greater abundance of matter “robbed” the photon of more energy. Thus we can relate the non-uniformities in the CMBR to slight non-uniformities in matter density. The gravity exerted by regions with higher matter density on nearby particles “overpowered” the gravity exerted by regions with lower matter density. Over very long periods of time this slight imbalance led to the formation of galaxy superclusters and clusters. According to Newtonian gravity, if the matter density were completely uniform, galaxy clusters would never have formed. Three centuries ago, Isaac Newton explained (using his law of gravity) that if the distribution of matter in the Universe were completely uniform, all of the matter would condense into a “great spherical mass”:
[include quote]
Therefore, according to Newtonian gravity, if the matter distribution were completely uniform, structure (i.e. galaxy clusters, galaxies, stars, planets, etc.) would never have arisen.
The origin of the slight non-uniformity in matter density can be explained by quantum fluctuations in the beginning of the Universe. According to the time-energy uncertainty principle, there will always be particles randomly popping in and out of existence. Since the particles randomly pop in and out of existence, at any instant of time there will always be slight non-uniformities in the distribution of matter and energy. Cosmologists speculate that during the time interval when the Universe was $$t=10^{-37}$$ seconds old until it was $$10^{-35}$$ seconds old (called the inflationary era), the fabric of space and time stretched apart faster than the speed of light. From the time-energy uncertainty principle we know that at the instant when the age of the Universe was $$t=10^{-37}s$$ the distribution of matter and energy was slightly non-uniform. Then, during the inflationary era, the space between every particle expanded faster than the speed of light and thus every particle was causally disconnected during this short period of time. You could imagine that inflation “blew up” and enlarged these non-uniformities while keeping the proportions of their separation distances the same. In the words of the cosmologist Max Tegmark, “When inflation stretched a subatomic region [of space] into what became our entire observable Universe, the density fluctuations that quantum mechanics [and the Uncertainty Principle in particular] had imprinted were stretched as well, to sizes of galaxies and beyond. (p. 107, Our Mathematical Universe)”
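The time-energy uncertainty principle invoked above can be written out explicitly; the following is a standard statement (added here for reference, not taken from the article), with $$\hbar$$ the reduced Planck constant:

```latex
\Delta E \, \Delta t \gtrsim \frac{\hbar}{2}
```

A vacuum fluctuation of energy $$\Delta E$$ can therefore persist only for a time of order $$\Delta t \sim \hbar/(2\,\Delta E)$$, which is why particle pairs "pop in and out of existence" rather than lasting indefinitely.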
The minuscule non-uniformities in temperature (and therefore energy) of the photons coming from the CMBR tell us about the minuscule non-uniformities in matter density when the Universe was only about 300,000 years old. For about the first 300,000 years of the Universe’s life, the Cosmos was too hot for electrons to be captured by hydrogen and helium nuclei. This “soup” of electrons and atomic nuclei acted like an electrical conductor, and electrical conductors are opaque to light. Over the next (roughly) 100,000 years, the Universe cooled enough for atomic nuclei to capture electrons, allowing photons, for the first time, to travel across long distances without being scattered. Somewhere around this time period photons scattered off of electrons for the last time—they would not interact with matter again for another roughly 13.5 billion years. The last electrons that each photon scattered off of can be related to the strength of the gravitational field, and thus the distribution of matter, in nearby regions around each of those electrons. When our detectors collect photons from the time of last scattering, our detectors are “seeing” the CMBR. From the CMBR you’ll notice that some of the regions are cooler than others.
References
1. Singh, Simon. Big Bang: The Origin of the Universe. New York: Harper Perennial, 2004. Print.
2. Wikipedia contributors. "Cosmic Microwave Background." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 12 May. 2017. Web. 18 May. 2017.
3. Tegmark, Max. Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. New York: Knopf, 2014. Print.
http://sci4um.com/post-321208---Fri-Jul-14--2006-2-02-pm.html |
MWimmer
Posted: Tue Jul 11, 2006 11:11 am Post subject: Numerical diagonalization by not using zgeev?
Dear group,
I have a problem when doing numerical diagonalization of an unsymmetric
complex NxN matrix H. N is typically of order 100-1000, H is sparse.
Let me briefly outline my problem here, below I will give some more
background: I need not only the eigenvalues, but also the
transformation matrix, that is, do the decomposition
H = U * diag(\lambda_1, ..., \lambda_N) * U^{-1}
where \lambda_i are the complex eigenvalues and the columns of U
contain the eigenvectors. After that I apply a transformation to the
eigenvalue matrix that sets all the eigenvalues
with |\lambda_i| <=1 to 1 and all the |\lambda_i| >1 to 0. (Actually,
there's N/2 of each). After that I do the backtransformation using U
and U^{-1}.
From this backtransformed matrix, I only need some entries and it turns out in the end that I would only need those eigenvectors with
eigenvalue |\lambda_i| <=1.
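The transformation described above, diagonalize, set eigenvalues with |lambda| <= 1 to 1 and the rest to 0, then transform back, is exactly the spectral projector onto the eigenspace with |lambda| <= 1. A minimal NumPy sketch of that idea (an editorial illustration, not the poster's Fortran code):

```python
import numpy as np

def spectral_projector(H):
    """U * diag(1 if |lambda| <= 1 else 0) * U^{-1} for a diagonalizable H."""
    lam, U = np.linalg.eig(H)
    D = np.where(np.abs(lam) <= 1.0, 1.0, 0.0)
    return U @ np.diag(D) @ np.linalg.inv(U)

# tiny sanity check on an easy diagonal example
H = np.diag([0.5, 2.0]).astype(complex)
P = spectral_projector(H)
```

The explicit `inv(U)` here is precisely the step that fails in the thread when U is numerically singular, which is what the rest of the discussion is about.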
I intended to solve this problem by using the LAPACK routine zgeev to
calculate the eigenvalues and the transformation matrix U, whose
columns are the eigenvectors. However, i have found out that for matrix
sizes from N=100 and certain parameter regions (see below), U is
(numerically) singular and inverting it fails.
I was rather surprised, because the eigenvalues do not span several
orders of magnitude (in the most extreme cases 0.1<|\lambda|<10, but
usually more like 0.5<|\lambda|<5), and they are clearly pairwise
different.
I then used zgeevx to see the errors on the eigenvalues and
eigenvectors and found that I started to get problems in inverting the
matrix when the relative errors were around 10^{-4}. That kind of
accuracy would be enough for my purposes, but obviously not good enough
for the eigenvectors to be linearly independent.
So I have several questions:
* I'm not an expert in numerical mathematics. Is it clear from the
matrix given below that this problem is ill-conditioned?
* Are there ways to circumvent the problem? I need U^{-1}, well, at
least I need the duals to the eigenvectors with |\lambda| <=1 and
inverting seemed to be the easiest solution.
Is there a way to directly calculate U and U^{-1}?
* I guess going to higher precision would help, but is there an easy
way to do this? (I don't want to rewrite LAPACK in terms of GMP or
similar)
* Would using some iterative routine help (ARPACK)? In the end, I need
N/2 eigenvectors with eigenvalue |\lambda|<=1 ...
Any help is greatly appreciated.
To give you more background, let me explain the origin of my problem in
more detail:
The problem comes from physics (electronic transport), where I have to solve the quadratic eigenvalue problem
( (E-H_0) z - H_1 z^2 - H_{-1} ) u = 0
where E is a number (usually real, but could be complex), H_0 a
Hermitian matrix, H_{-1} is the Hermitian conjugate of H_{1}, z the
desired eigenvalue, u the eigenvector. all the matrices
are square nxn
I then linearize this problem to give an ordinary, 2nx2n eigenvalue
problem:
( 0                   1                  ) ( u  )     ( u  )
( -H_1^{-1} * H_{-1}  H_1^{-1} * (E-H_0) ) ( zu ) = z ( zu )
and this I then tried to solve with zgeev.
My problems arise for the following matrices (for those interested: it's
electrons in a magnetic field)
H_1 = diag( exp(i B 1), exp(i B 2), ..., exp(i B n) )
          ( E-4   1                  )
          (  1   E-4   1             )
E - H_0 = (       1   ...    1       )
          (            1    E-4   1  )
          (                  1   E-4 )
The problem arises for a wide range of parameters, but only once n is
larger than around 50, for example for B=3e-3 and E=0.5. Unfortunately,
for typical problems one would like to have n ~100 - 1000 ... and the
parameter range is of importance too (corresponds to a rather medium
strength magnetic field)
Any ideas?
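To make the setup reproducible, here is an editorial Python/NumPy transcription of the matrices and the 2nx2n linearization described above (the original is Fortran + LAPACK; `zgeev` underlies `np.linalg.eig` for complex input). Parameter values are the thread's example B=3e-3, E=0.5, with a smaller n to keep the check fast:

```python
import numpy as np

n, B, E = 50, 3e-3, 0.5
H1 = np.diag(np.exp(1j * B * np.arange(1, n + 1)))
EH0 = (np.diag(np.full(n, E - 4.0))
       + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)).astype(complex)

H1inv = np.diag(1.0 / np.diag(H1))   # H_1 is diagonal, so inversion is trivial
Hm1 = H1.conj().T                    # H_{-1} is the Hermitian conjugate of H_1

# companion-form linearization of the quadratic eigenvalue problem
M = np.block([[np.zeros((n, n), dtype=complex), np.eye(n, dtype=complex)],
              [-H1inv @ Hm1, H1inv @ EH0]])

lam, U = np.linalg.eig(M)
condU = np.linalg.cond(U)
print(condU)
```

Printing `condU` for growing n is a quick way to watch the eigenvector matrix drift toward numerical singularity in the parameter regime the poster describes.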
Peter Spellucci
Posted: Tue Jul 11, 2006 6:25 pm Post subject: Re: Numerical diagonalization by not using zgeev?
"MWimmer" <michael.wimmer1@gmx.de> writes:
Quote: (original question snipped) * Are there ways to circumvent the problem? I need U^{-1}, well, at
zgeev also delivers the left eigenvectors, and the matrix of left
eigenvectors is, up to scaling, the inverse of the matrix of
right eigenvectors:
Y' * A = diag(lambda) * Y'
A * X = X * diag(lambda)
=>
Y' * A * inv(Y') = inv(X) * A * X = diag(lambda)
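As an illustration of this identity in practice (editorial sketch, not from the thread): SciPy's `scipy.linalg.eig` wraps zgeev and can return both sets of vectors in one call, after which a rescaling makes the left vectors exact duals of the right ones, so no explicit matrix inversion is needed:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))

# columns of Y are left eigenvectors (Y[:, i]^H A = lam[i] Y[:, i]^H),
# columns of X are right eigenvectors (A X[:, i] = lam[i] X[:, i])
lam, Y, X = eig(A, left=True, right=True)

# Y^H X is diagonal for simple eigenvalues; rescale so that Yh @ X = I
S = np.diag(Y.conj().T @ X)
Yh = np.diag(1.0 / S) @ Y.conj().T   # Yh plays the role of inv(X)
```

The catch, which the rest of the thread discusses, is that the diagonal entries of S can be tiny for ill-conditioned eigenvalues, so the rescaling amplifies roundoff.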
?? did you check error indicators from zerred.f ?
Quote: least I need the duals to the eigenvectors with |\lambda| <=1 and inverting seemed to be the easiest solution. Is there a way to directly calculate U and U^{-1}? * I guess going to higher precision would help, but is there an easy way to do this? (I don't want to rewrite LAPACK in terms of GMP or similar) * Would using some iterative routine help (ARPACK)? In the end, I need N/2 eigenvectors with eigenvalue |\lambda|<=1 ...
surely not
Quote: (full problem statement quoted again; snipped) ... Any ideas?
I tested Matlab's polyeig (which does what you did, using LAPACK)
with exactly the data you gave and n=100, without any problem:
cond(U) is about 700, and all eigenvalues are distinct and in a reasonable range.
Did you check your coding? Is there an intermediate
single-precision calculation anywhere, or is everything double complex?
hth
peter
MWimmer
Posted: Wed Jul 12, 2006 4:19 pm Post subject: Re: Numerical diagonalization by not using zgeev?
Peter Spellucci wrote:
Quote: In article <1152616276.829831.224480@35g2000cwc.googlegroups.com>, "MWimmer" writes: Dear group, I have a problem when doing numerical diagonalization of an unsymmetric complex NxN matrix H. N is typically of order 100-1000, H is sparse. Let me briefly outline my problem here, below I will give some more background: I need not only the eigenvalues, but also the transformation matrix, that is, do the decomposition H = U * diag(\lambda_1, ..., _lambda_N) U^{-1} where \lambda_i are the complex eigenvalues and the columns of U contain the eigenvectors. After that I apply a transformation to the eigenvalue matrix that sets all the eigenvalues with |\lambda_i| <=1 to 1 and all the |\lambda_i| >1 to 0. (Actually, there's N/2 of each) .After that I do the backtransformation using U and U^{-1}. From this backtransformed matrix, I only need some entries and it turns out in the end that I would only need those eigenvectors with eigenvalue |\lambda_i| <=1. I intended to solve this problem by using the LAPACK routine zgeev to calculate the eigenvalues and the transformation matrix U, whose columns are the eigenvectors. However, i have found out that for matrix sizes from N=100 and certain parameter regions (see below), U is (numerically) singular and inverting it fails. I was rather surprised, because the eigenvalues do not span several orders of magnitude (in the most extreme cases 0.1<|\lambda|<10, but usually more like 0.5<|\lambda|<5), and they are clearly pairwise different. I then used zgeevx to see the errors on the eigenvalues and eigenvectors and found that I started to get problems in inverting the matrix when the relative errors were around 10^{-4}. That kind of accuracy would be enough for my purposes, but obviously not good enough for the eigenvectors to linearly independent. So I have several questions: * I'm not an expert in numerical mathematics. Is it clear from the matrix given below that this problem is ill-conditioned? 
* Are there ways to circumvent the problem? I need U^{-1}. Well, zgeev also delivers the left eigenvectors, and the matrix of the left eigenvectors is, up to scaling, the inverse of the matrix of right eigenvectors: Y'*A = diag(lambda)*Y' and A*X = X*diag(lambda), hence Y'*A*inv(Y') = inv(X)*A*X = diag(lambda).
Yes, I also tried this. It seems to make my problem a bit better, but
the problem is that
the overlap of the rows of Y' and the columns of X for the same
eigenvalue are very small
(order 1e-15-1e-17) for many eigenvalues. If I then rescale, I
introduce a lot of noise from roundoff errors ... but I should probably
investigate further in this direction, this is only a preliminary
finding.
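For reference, the biorthogonality relation being discussed can be sketched in a few lines; numpy/scipy are used here as a stand-in for the LAPACK calls, and the random matrix and tolerances are illustrative assumptions, not the poster's actual data:

```python
import numpy as np
from scipy.linalg import eig

# Illustrative random nonsymmetric complex matrix (stand-in for H)
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))

# Left eigenvectors Y satisfy Y[:, i]^H A = w[i] Y[:, i]^H,
# right eigenvectors X satisfy A X[:, i] = w[i] X[:, i]
w, Y, X = eig(A, left=True, right=True)

# Biorthogonality: Y^H X is diagonal for simple eigenvalues.
# Its diagonal holds exactly the "overlaps" discussed above;
# dividing row i of Y^H by overlap i yields inv(X) without any solve.
overlaps = np.diag(Y.conj().T @ X)
X_inv = Y.conj().T / overlaps[:, None]
```

When the overlaps are tiny (the 1e-15 .. 1e-17 case reported above), this final division is exactly where roundoff noise gets amplified, which matches the preliminary finding.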
Quote: ?? did you check error indicators from zerred.f ?
zerred.f just checks whether the LAPACK routines do input parameter
checking correctly, right? If I link against my lapack installation,
the program aborts with an error message telling you about an illegal
parameter in zgeev, as zerred tests that. To run zerred.f correctly, I
guess I have to change that behaviour of LAPACK somehow. Do you know
how?
-----snip-----------
Quote: Any help is greatly appreciated. To give you more background, let me explain the origin of my problem in more detail: The problem comes from physics (electronic transport), where I have to solve the quadratic eigenvalue problem
( (E-H_0) z - H_{1} z^2 - H_{-1} ) u = 0
where E is a number (usually real, but could be complex), H_0 is a Hermitian matrix, H_{-1} is the Hermitian conjugate of H_{1}, z the desired eigenvalue, u the eigenvector. All the matrices are square nxn. I then linearize this problem to give an ordinary, 2nx2n eigenvalue problem:
( 0                  1                  ) (  u )      (  u )
( -H_1^{-1} * H_{-1} H_1^{-1} * (E-H_0) ) ( z u )  = z( z u )
and this is what I tried to solve with zgeev. My problems occur for the following matrices (for those interested: it's electrons in a magnetic field):
H_1 = diag( exp(i B 1), exp(i B 2), ..., exp(i B n) )
        ( E-4  1              )
        ( 1    E-4  1         )
E-H_0 = (      1    .....     )
        (                1    )
        (           1    E-4  )
The problem arises for a wide range of parameters, but only once n is larger than around 50, for example for B=3e-3 and E=0.5. Unfortunately, for typical problems one would like to have n ~ 100-1000 ... and the parameter range is of importance too (it corresponds to a rather medium-strength magnetic field). Any ideas?
i tested matlab polyeig (which does what you did, using lapack) exactly with the data you gave and n=100 without any problem. cond(U) about 700, all eigenvalues different, in a reasonable range. did you check anything in your coding? nowhere an intermediate single precision calculation, everything double complex??
Now that's interesting. Could you perhaps post your matlab program?
Because that's what I tried today in matlab:
E=0.5;
n=100;
B=3e-3;
H0=diag(ones(n,1)*(E-4))+diag(ones(n-1,1),1)+diag(ones(n-1,1),-1);
H1=diag(arrayfun(@(x) -exp(-i*B*x), 1:n));
Hm1=H1';
[U1,e]=polyeig(-Hm1,H0,-H1);
U2=U1*diag(e);
U(1:n,:)=U1;
U(n+1:2*n,:)=U2;
cond(U)
ans =
1.8921e+11
B=3e-2;
H0=diag(ones(n,1)*(E-4))+diag(ones(n-1,1),1)+diag(ones(n-1,1),-1);
H1=diag(arrayfun(@(x) -exp(-i*B*x), 1:n));
Hm1=H1';
[U1,e]=polyeig(-Hm1,H0,-H1);
U2=U1*diag(e);
U(1:n,:)=U1;
U(n+1:2*n,:)=U2;
cond(U)
ans =
7.9785e+15
First of all, notice that my H1 above is a bit different than the one
from the previous post, as I did some minus signs wrong. But this
doesn't change the outcome significantly, I checked.
(The right H1 should have been H1=diag(- exp(-i B 1), ...) )
All the condition numbers I get are way above the 700 you get.
I also tried to diagonalize the linearization directly with matlab:
H(1:n,1:n)=0;
H(1:n,n+1:2*n)=diag(ones(n,1));
H(n+1:2*n,1:n)=-inv(H1)*Hm1;
H(n+1:2*n,n+1:2*n)=inv(H1)*H0;
[V,e2]=eig(H);
cond(V)
ans =
5.5936e+15
So essentially the same result, which was to be expected, as you said that
matlab uses a linearization for solving the polynomial
eigenproblem.
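For readers following along, the linearization being discussed can be reproduced outside matlab; this is a numpy sketch (a small n and the sign-corrected H1 are assumptions for illustration) that checks each linearized eigenpair against the original quadratic problem:

```python
import numpy as np

n, E, B = 8, 0.5, 3e-3
# E - H_0: tridiagonal with E-4 on the diagonal, 1 on the off-diagonals
EmH0 = (np.diag(np.full(n, E - 4.0))
        + np.diag(np.ones(n - 1), 1)
        + np.diag(np.ones(n - 1), -1)).astype(complex)
H1 = np.diag(-np.exp(-1j * B * np.arange(1, n + 1)))
Hm1 = H1.conj().T
H1inv = np.linalg.inv(H1)

# 2n x 2n companion linearization; eigenpairs are (z, [u; z*u])
H = np.block([[np.zeros((n, n)), np.eye(n)],
              [-H1inv @ Hm1, H1inv @ EmH0]])
z, V = np.linalg.eig(H)

# Residual of the quadratic problem  (E-H0) z u - H1 z^2 u - Hm1 u = 0
U = V[:n, :]
res = max(np.linalg.norm(EmH0 @ U[:, j] * z[j]
                         - H1 @ U[:, j] * z[j] ** 2
                         - Hm1 @ U[:, j])
          for j in range(2 * n))
```

As in the matlab session, the per-eigenpair residuals are tiny even when the eigenvector matrix itself is badly conditioned — small residuals do not imply a well-conditioned eigenbasis.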
I'm very confused now. The lapack installation on my machine seems to
be OK: for my own code I also compiled the LAPACK from netlib and
linked against that, bypassing the system's LAPACK and BLAS. I got the
very same results.
Do you have any ideas what might cause this behaviour?
Peter Spellucci
science forum Guru
Joined: 29 Apr 2005
Posts: 702
Posted: Wed Jul 12, 2006 9:11 pm Post subject: Re: Numerical diagonalization by not using zgeev?
"MWimmer" <michael.wimmer1@gmx.de> writes:
Quote: First of all: thank you very much Peter for your answer. -----snip (full quote of the earlier posts and the matlab session above)-----
I think the confusion is here: I tested cond(U1) doing essentially the same
as you (with matlab6.0, the department has no money to upgrade)
hence I tested the wrong thing.
if you look at e, then you will see a large block of nearby eigenvalues
(plot(i,real(e(i))); hold on ; plot(i,imag(e(i))); )
I checked the residuals:
Quote: for j=1:2*n lambda=e(j);
x=U1(:,j);
r(:,j)=(-Hm1+lambda*H0-H1*lambda^2)*x;
end
Quote: norm(r)
ans =
5.2397e-13
hence this seems o.k. but surprise:
Quote: cond(U1(1:n,1:n))
ans =
3.5870e+10
hence these almost identical eigenvalues create a block of almost singular
eigenvectors and your construction then kills it totally.
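The effect is easy to reproduce in miniature. Here is a hedged 2x2 illustration (not the poster's matrix) of how nearby eigenvalues of a nonnormal matrix force nearly parallel eigenvectors:

```python
import numpy as np

# Two eigenvalues separated by eps on a nonnormal (here: triangular) matrix
eps = 1e-8
A = np.array([[1.0, 1.0],
              [0.0, 1.0 + eps]])

# Exact eigenvectors: [1, 0] and roughly [1, eps] -- nearly parallel,
# so the eigenvector matrix has condition number on the order of 1/eps
w, V = np.linalg.eig(A)
cond_V = np.linalg.cond(V)
```

A normal (e.g. Hermitian) matrix with the same eigenvalue gap would give cond(V) = 1; it is the combination of nonnormality and clustering that is fatal.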
Quote: -----snip (quote of the rest of the previous post)-----
unfortunately nearby eigenvalues of a nonhermitian matrix are
a problem which always gives such trouble (would cause even more
trouble with an iterative solver like arpack)
to resort to higher precision: netlib/toms
has an automatic translator of f77 programs to one which uses
multiple precision arithmetic plus a package for this arithmetic,
which you could use (works for 32 bit architecture only ?)
or, if you are happy enough to have access to an HP RISC machine:
they have an option -autodblpad which automatically performs
128 bit arithmetic on a given double precision fortran program
(so you have nothing to do other than set this compiler switch)
i see no trick how to separate the eigenvalues better by a transformation
of the problem
sorry for my error
hth
peter
Helmut Jarausch
science forum beginner
Joined: 08 Jul 2005
Posts: 49
Posted: Thu Jul 13, 2006 9:10 am Post subject: Re: Numerical diagonalization by not using zgeev?
MWimmer wrote:
Quote: -----snip (MWimmer's original post, quoted in full above)-----
Have you checked if the Schur decomposition can help you?
There you have a unitary matrix U and an upper triangular
matrix T such that A = U*T*U'.
The eigenvalues of A appear on the diagonal of T.
You could manipulate T to get the eigenvalues you want
and multiply by U again.
The benefit: the Schur decomposition is very stable.
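One way to act on this suggestion without ever forming the eigenvector matrix is to order the Schur form so the |lambda| <= 1 eigenvalues come first, and then read the spectral projector off a small triangular Sylvester solve. A sketch, with scipy standing in for the LAPACK routines (zgees/ztrsyl); the 4x4 test matrix in the usage below is an illustrative assumption:

```python
import numpy as np
from scipy.linalg import schur, solve_sylvester

def unit_disc_projector(A):
    """Projector onto the invariant subspace for eigenvalues inside
    the unit circle, computed from an ordered Schur decomposition."""
    n = A.shape[0]
    # sort='iuc' orders the eigenvalues inside the unit circle first;
    # k is how many of them there are
    T, Q, k = schur(A.astype(complex), output='complex', sort='iuc')
    if k == 0:
        return np.zeros((n, n), dtype=complex)
    if k == n:
        return np.eye(n, dtype=complex)
    T11, T12, T22 = T[:k, :k], T[:k, k:], T[k:, k:]
    # Coupling block X solves T11 X - X T22 = T12 (solvable because
    # T11 and T22 have disjoint spectra after the ordering)
    X = solve_sylvester(T11, -T22, T12)
    P = np.zeros((n, n), dtype=complex)
    P[:k, :k] = np.eye(k)
    P[:k, k:] = X
    return Q @ P @ Q.conj().T

# Illustrative 4x4 test matrix (an assumption, not the poster's data):
# two eigenvalues well inside the unit circle, two well outside
A = np.diag([0.5, 0.3, 2.0, 3.0]).astype(complex) + 0.05 * np.ones((4, 4))
P = unit_disc_projector(A)
```

This is exactly the U f(D) U^{-1} the original post asks for (f = 1 inside, 0 outside the unit circle), but its accuracy is governed by the backward-stable Schur factorization and one Sylvester equation rather than by cond(U).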
--
Helmut Jarausch
Lehrstuhl fuer Numerische Mathematik
RWTH - Aachen University
D 52056 Aachen, Germany
MWimmer
science forum beginner
Joined: 11 Jul 2006
Posts: 3
Posted: Thu Jul 13, 2006 12:17 pm Post subject: Re: Numerical diagonalization by not using zgeev?
Helmut Jarausch wrote:
Quote: -----snip (MWimmer's original post, quoted above)----- Have you checked if the Schur decomposition can help you? There you have a unitary matrix U and an upper triangular matrix T such that A = U*T*U'. The eigenvalues of A appear on the diagonal of T. You could manipulate T to get the eigenvalues you want and multiply by U again. The benefit: the Schur decomposition is very stable.
Thanks for your answer. I take it that you mean I should try to apply my transformation (that
sets some eigenvalues to zero and some to 1) directly to the triangular
matrix T, right? As far as I can see, I would not only need to change
the diagonal but also the off-diagonal entries, and the only way I see
how to do this is by using the eigenvectors of T.
But the eigenvectors of T are the problem anyway, as LAPACK zgeev does
indeed calculate the Schur decomposition and the eigenvectors from
that.
Still, I need to think about that a bit more, to see if I can circumvent using
the eigenvectors. My problem would be calculating
\tilde{f}(T) := V * f(D) * V^{-1}
without explicitly calculating V (V are the eigenvectors of T, and I
know the action of f on the diagonal matrix D). As far as I can see, V
and V^{-1} should also be triangular, right? Still, I don't see a
solution for that problem ...
Michael Wimmer
Peter Spellucci
Posted: Fri Jul 14, 2006 2:02 pm Post subject: Re: Numerical diagonalization by not using zgeev?
"MWimmer" <michael.wimmer1@gmx.de> writes:
Quote: -----snip (full quote of the preceding posts)----- Michael Wimmer
what you really need is
sum_{|lambda_i| <= 1} u_i*v_i'
where u_i are the right and v_i the left eigenvectors of the triangular
matrix T, and then to further backtransform this using the unitary transformation
which produced the triangular form.
This back transformation can cause no harm.
there is a further trick:
Instead of getting the eigenvectors of T just by backsubstitution, which
yields those enormous roundoff errors due to the many nearby eigenvalues,
you could use first a block transformation which gets T into the form
T = [ T1 O ; O T2 ] (matlab notation)
T1 has eigenvalues >1, T2 eigenvalues <=1
this is a further step to the JNF and described e.g. in Golub & van Loan.
(Indeed, in the course of getting the JNF, one applies this for every
eigenvalue cluster and here I propose you cluster the eigenvalues into two)
then, next, you could apply inverse iteration in block form, simultaneously for
the left and the right eigenvectors with a biorthogonalization step;
that means
"simultaneous inverse iteration for nonsymmetric matrices"
which has been described by G.W. Stewart, Num. Math. 25, 1976, 123-136.
this should finally do the job.
hth
peter
# HELP with finding Length of String in BATCH file
Discussion in 'Windows - Software discussion' started by aweathe, May 28, 2007.
1. ### aweathe (Member)
Joined: May 9, 2007
Messages: 55
Likes Received: 0
Trophy Points: 16
Hello,
I realize there are a lot of solutions out there on Google for finding the length of a string using a batch file. I have tried to understand 3 different ones listed, and was unsuccessful. I have placed a copy of one of the scripts I found through Google below. Could someone explain/help me understand it? Or give me a new solution that they can explain better?
here is one solution: (and listed just below is what I don't understand)
@echo off
set test_=Testing the length of a string
echo:..%test_%>tmp$$$.txt
for %%i in (1 2 3 4 5 6 7 8) do echo 1234567890>>tmp$$$.txt
dir tmp$$$|find "TMP$$$ TXT">tmp$$2.bat
echo set lenght_=%%2>tmp$$$.bat
call tmp$$2
echo set length_=%%%lenght_%>tmp$$$.bat
call tmp$$$
for %%f in (tmp$$$*.*) do if exist %%f del %%f
echo %test_%
echo length_=%length_%
set length_=
What I don't understand:
1. Why "$$$" is used, and what does it mean?
2. Why "%%%" is used, and what does it mean?
3. The line starting with "echo set length" (i.e. what is this: "%%%lenght_%"?)
4. The last line (it seems like it's not finished).
Thank you for any help!
3. ### Indochine (Regular member)
Joined: Dec 21, 2006
Messages: 1,485
Likes Received: 0
Trophy Points: 46
Hi, aweathe,
I read this a couple of days ago, but I balked at trying to explain that awful batch file to you!!! It must have been written before about 1990, because it relies on certain features of early MS-DOS. The DIR command, for example, was completely different. The $$$ is just an old convention for naming temp files. The file seems to work by creating another batch file and calling it. The triple percent signs are there because, to echo one percent sign, which is a special character in DOS, you need to put two in the echo statement: echo %% will actually result in % appearing on screen or in a file. To get two in a file, you need three in the batch which makes the file.
Finally, set variable= just deletes "variable" from the environment, thus saving the space it occupied. Very important in early DOS where you only had 256 bytes to play with.
That file is an awful example of how NOT to write a batch file. No REMs to explain things. Horrid.
I modestly feel that this following is more like it!
The question kept nagging at me. I have seen all kinds of complex bits of jiggery-pokery to achieve this aim. Today I had a burst of lateral thinking and came up with this
It works by echoing the string whose length is to be counted to a temp text file. Each byte in the file represents one character. The echo process adds two bytes, one for a carriage return, one for a line feed. So you find the file size, then subtract 2, which gives you the number of characters in the string
@echo off
setlocal enabledelayedexpansion
REM for testing purposes
set /p string="string ? "
REM write string to temp text file
REM put redirection symbol right after
REM variable to avoid a trailing space
echo %string%> %temp%\string.txt
REM get the file size in bytes
for %%a in (%temp%\string.txt) do set /a length=%%~za
REM do some batch arithmetic
REM subtract 2 bytes, 1 for CR 1 for LF
set /a length -=2
echo string "%string%" has %length% characters
REM clean up temp file
del %temp%\string.txt
[pre]
REM These two lines are all you need
REM Before, the variable "str" holds the string
REM After, the variable "len" holds its length
echo %str%> "%temp%\st.txt"
for %%a in (%temp%\st.txt) do set /a len=%%~za & set /a len -=2 & del "%temp%\st.txt"
[/pre]
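The byte accounting behind these snippets can be sanity-checked in any language. A tiny Python sketch of the same idea, assuming the Windows-style CR+LF line ending that batch `echo` redirection produces:

```python
# Mimic:  echo %string%> string.txt   -- echo appends CR+LF (2 bytes)
string = "Testing the length of a string"
file_bytes = (string + "\r\n").encode("ascii")

file_size = len(file_bytes)   # what the batch reads as the file size
length = file_size - 2        # subtract 1 for CR, 1 for LF
```

The subtraction of exactly 2 is why the redirection symbol must come right after the variable: a trailing space before `>` would end up in the file and throw the count off by one.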
4. ### aweathe (Member)
Hey it's Indochine!
Thanks once again for another SUPER-CLEAR response!
I'm just trying to fit the code into what I want to do. Just one question. In the line you wrote:
for %%a in (%temp%\string.txt) do set /a length=%%~za
is the ~z part at the end standard to find the size of the file?
thanks.
Last edited: May 31, 2007
5. ### Indochine (Regular member)
Yes it is. If you get a file's name into a FOR variable,
the following will yield information about the file
Assume the variable is %%I
Use single % sign on command line, double (%%) in batch file
NB the modifiers are CASE SENSITIVE so %%~z is OK, but %%~Z will fail
(My remarks in brackets)
%~I - expands %I removing any surrounding quotes
%~fI - expands %I to a fully qualified path name
%~dI - expands %I to a drive letter only (actually, a letter and a colon, e.g. D:)
%~pI - expands %I to a path only
%~nI - expands %I to a file name only
%~xI - expands %I to a file extension only (a dot and an extension eg .txt)
%~sI - expanded path contains short names only
%~aI - expands %I to file attributes of file
%~tI - expands %I to date/time of file
%~zI - expands %I to size of file (in bytes)
The modifiers can be combined to get compound results:
%~dpI - expands %I to a drive letter and path only
%~nxI - expands %I to a file name and extension only
%~fsI - expands %I to a full path name with short names only
%~ftzaI - expands %I to a DIR like output line
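For readers mapping these modifiers to another language, here is a rough Python analogue (the temp file is a hypothetical stand-in for `%%I`, and the drive part is empty on non-Windows systems):

```python
import os
import tempfile

# Create a throwaway file to inspect, standing in for the FOR variable %%I
with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as f:
    f.write(b"hello")
    path = f.name

full = os.path.abspath(path)                        # ~ %%~fI
drive = os.path.splitdrive(full)[0]                 # ~ %%~dI
name = os.path.splitext(os.path.basename(full))[0]  # ~ %%~nI
ext = os.path.splitext(full)[1]                     # ~ %%~xI
size = os.path.getsize(full)                        # ~ %%~zI

os.remove(full)
```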
By the way you don't actually need this line although I put it in
[pre]setlocal enabledelayedexpansion[/pre]
Last edited: May 31, 2007
6. ### aweathe (Member)
Hmm, I'm running into a bit of a block due to my lack of understanding. Your script is working fine for me, and is not the problem. I'm trying to use your script in my batch file: I am placing your script within a couple of nested for loops, and I'm having trouble. In particular, the line
for %%a in (%temp%\string.txt) do set /a length=%%~za
is producing an error "Missing operand". I've been trying to play around with it using !'s etc., but without any luck. Can you think of anything for this?
Another thing I don't understand, and may be what I need to know to solve the problem, is why the line you wrote:
echo %string%> %temp%\string.txt
works. Whereas the lines:
topfolder =%CD%
echo %string%> %topfolder%\string.txt
do not. In other words, the line you wrote using the %temp% file creates a string.txt there, but the 2 lines I wrote don't create the string.txt file. Can you tell me why? I wanted to be able to follow the code more easily to debug/understand what's going on.
Many thanks!!
Last edited: May 31, 2007
7. ### aweathe (Member)
I don't want you to have to look at it, but it may make it easier for you to understand my problem if I post the full script here. If it doesn't make it easier for you, then I'm more than happy to ask a few more 'short and specific' questions (like the last post) until I get it working.
If you like, here is the code:
@echo off
SETLOCAL ENABLEDELAYEDEXPANSION
set topfolder=%CD%
rem if exist delete text1.txt and text2.txt
echo deleting files text files if exist...
pause
if exist "%topfolder%\text1.txt" del "%topfolder%\text1.txt" > nul
if exist "%topfolder%\text2.txt" del "%topfolder%\text2.txt" > nul
pause
pause
echo entering into FOR loop...
pause
for /F "delims==" %%G in ('type "%topfolder%\text1.txt"') do (
cd %topfolder%
echo %%G>> text2.txt
pause
cd %topfolder%\%%G
for /f "delims==" %%H in ('type "%topfolder%\temp.txt"') do (
set string=test
rem set string=%%G
set string
echo %topfolder%
if exist "%topfolder%\string.txt" del "%topfolder%\string.txt" > nul
pause
REM write string to temp text file
REM put redirection symbol right after
REM variable to avoid a trailing space
echo !string!> %topfolder%\string.txt
echo check sting.txt contents here
pause
REM get the file size in bytes
for %%a in ('type "%topfolder%\string.txt"') do (set /a length=%%~za)
set length
pause
REM do some batch arithmetic
REM subtract 2 bytes, 1 for CR 1 for LF
echo !length!
echo !string!
pause
set /a length -=2
echo string "!string!" has !length! characters
pause
Rem if !length! GTR 6 (cd %topfolder% & echo %%H>> text2.txt)
REM clean up temp file
rem del %temp%\string.txt
Rem cd %topfolder%
Rem echo %%H>> text2.txt
)
)
del "%topfolder%\temp.txt" > nul
echo end of program
pause
8. ### Indochine (Regular member)
aweathe, I just saw this before going to bed, I'll have a look at it tomorrow - looks interesting... what is it meant to do? I'm sure between us we can wrestle it into submission!
9. ### Indochine (Regular member)
It seems you want to examine some subdirectories and find filenames longer than 6 characters and write those names to a file?
I have hacked your code about so that it works, but you seem to have lost interest, so maybe you have moved on.
Surely the full path name of each file would be more useful?
10. ### aweathe (Member)
Indochine, I have certainly not lost interest! Sorry for the lack of replies for the last few days... I was meeting a friend in another city, and we did more sightseeing than anything else.
Yes your interpretation is correct: the code is meant to write out a list of folder names to a text file. (not the full path names)
There are a set of folders in the directory I'm running the file from, and in each of these folders there is a different set of folders. I would like to write out the names of the higher-level folders followed by a list of their sub-level folder names that are longer than 6 characters, i.e. a long continuous list of file names that I can just copy into Excel and then format.
you said you've hacked the code to make it work?? I'd love to see it! :O)
Last edited: Jun 5, 2007
11. ### Indochine (Regular member)
Did you mean six characters including the dot and extension? (That is how it appears; not sure if that is your intention. If the extension is always the same, that would simplify things.)
If you clarify re. the above, I'll see if I can make the code do what's desired and post it here for you.
Hope you had a nice trip...
12. ### aweathe (Member)
No that's not what I meant... my explanation was not very clear...
There is no need for any extensions because I'm only after the folder names, not any file names.
The folders are set up like this:
Folder1
subfoldera
subfolderb
subfolderc
Folder2
subfoldera
subfolderb
subfolderc
Folder3
subfoldera
subfolderb
subfolderc
And I would like to write the folder names to a text file, just as is shown above in the list. I would like each of the 'toplevel' folder names written (i.e. "Folder1", "Folder2", etc.) but only the 'subfolder' names written that are longer than 6 characters.
did this make it any clearer?!
13. ### Indochine (Regular member)
OK... are the folders just like that - two levels of nesting?
14. ### aweathe (Member)
yes
15. ### Indochine (Regular member)
[pre]
@echo off
SETLOCAL ENABLEDELAYEDEXPANSION
set topfolder=%CD%
if exist "%topfolder%\report.txt" del "%topfolder%\report.txt" > nul
for /F "delims==" %%G in ('dir /b /ad') do (
echo %%G>> %topfolder%\report.txt
cd %topfolder%\%%G
for /f "delims==" %%H in ('dir /b /ad') do (
set string=%%H
set /a length=0
echo !string!> %topfolder%\string.txt
for %%a in (%topfolder%\string.txt) do (set /a length=%%~za)
set /a length -=2
if !length! GTR 6 echo %%H>> %topfolder%\report.txt
)
)
cd %topfolder%
type report.txt
echo end of program
[/pre]
I have called it qqq5.bat.
here is the folder layout, topfolder is f:\test\qqq
[pre]
F:\test\qqq\folder1
F:\test\qqq\folder2
F:\test\qqq\folder3
F:\test\qqq\folder1\subfolderA
F:\test\qqq\folder1\subfolderB
F:\test\qqq\folder2\subfolderC
F:\test\qqq\folder2\subfolderD
F:\test\qqq\folder3\subfolderE
F:\test\qqq\folder3\subfolderF
[/pre]
and this is what I see when I run it...
[pre]
F:\test\qqq>qqq5.bat
folder1
subfolderA
subfolderB
folder2
subfolderC
subfolderD
folder3
subfolderE
subfolderF
end of program[/pre]
Last edited: Jun 5, 2007
16. ### aweathe (Member)
I get the same error, "missing operand" from the line:
for %%a in (%topfolder%\string.txt) do (set /a length=%%~za)
I also don't see the text files 'report.txt' and 'string.txt' created when I refresh the topfolder, while in a 'pause' in the program in the command prompt. I should see these files even though they're in a for loop, right?
17. ### Indochine (Regular member)
That means that it is processing a blank. What happens if you include this line after the @echo off at the top?
setlocal enableextensions
I am running the batch from folder qqq in this folder arrangement under Windows XP Professional Service Pack 2 with Command Extensions enabled.
diagnostic version
output...
Last edited: Jun 5, 2007
18. ### aweathe (Member)
Still the same problems: 1) the results.txt file and string.txt files are not created at any point.
2) I'm getting the Missing operand error a whole lot. Here is the output from using a copy of your latest code:
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder>excelwritertext2.bat
subfolders under this folder...
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder1
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder2
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder3
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder1\foldera
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder1\folderb
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder1\folderc
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder2\foldera
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder2\folderb
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder2\folderc
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder3\foldera
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder3\folderb
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder3\folderc
found folder folder1
found folder foldera
testing folder name "foldera " for length
Missing operand.
Missing operand.
Missing operand.
Missing operand.
found folder folderb
testing folder name "folderb " for length
Missing operand.
Missing operand.
Missing operand.
Missing operand.
found folder folderc
testing folder name "folderc " for length
Missing operand.
Missing operand.
Missing operand.
Missing operand.
found folder folder2
found folder foldera
testing folder name "foldera " for length
Missing operand.
Missing operand.
Missing operand.
Missing operand.
found folder folderb
testing folder name "folderb " for length
Missing operand.
Missing operand.
Missing operand.
Missing operand.
found folder folderc
testing folder name "folderc " for length
Missing operand.
Missing operand.
Missing operand.
Missing operand.
found folder folder3
found folder foldera
testing folder name "foldera " for length
Missing operand.
Missing operand.
Missing operand.
Missing operand.
found folder folderb
testing folder name "folderb " for length
Missing operand.
Missing operand.
Missing operand.
Missing operand.
found folder folderc
testing folder name "folderc " for length
Missing operand.
Missing operand.
Missing operand.
Missing operand.
finished processing
results...
The system cannot find the file specified.
end of program
Last edited: Jun 5, 2007
19. ### Indochine (Regular member)
try this version
20. ### aweathe (Member)
well it didn't work unfortunately. I'm looking at it right now and will post again soon. Here's the output anyways, before I post:
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder>excelwritertest3.bat
subfolders under this folder...
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder1
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder2
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder3
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder1\foldera
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder1\folderb
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder1\folderc
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder2\foldera
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder2\folderb
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder2\folderc
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder3\foldera
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder3\folderb
C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder\folder3\folderc
The system cannot find the path specified.
found folder and Settings\ANDREW\Desktop\EventData2\string.txt
The system cannot find the path specified.
The system cannot find the path specified.
The system cannot find the path specified.
found folder and Settings\ANDREW\Desktop\EventData2\string.txt
testing folder name " and Settings\ANDREW\Desktop\EventData2\string.txt " for length
echoing and Settings\ANDREW\Desktop\EventData2\string.txt to file C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder \string.txt
file length is 117 bytes
file length is bytes
Missing operand.
file length is bytes
Missing operand.
file length is bytes
Missing operand.
file length is bytes
Missing operand.
subtracting 2
result is 115
it is more than 6 so echo and Settings\ANDREW\Desktop\EventData2\string.txt to "C:\Documents and Settings\ANDREW\Desktop\EventData2\test folder \report.txt"
The system cannot find the path specified.
The system cannot find the file and.
The system cannot find the file and.
finished processing
results...
The system cannot find the file specified.
end of program
21. ### aweathe (Member)
I think I have it working (!), just doing double checks. The problem was the directory path was too long... (why, I don't know).
When using the version of the code you sent me first (below), it all seems to work.
@echo off
setlocal enableextensions
SETLOCAL ENABLEDELAYEDEXPANSION
set topfolder=%CD%
if exist "%topfolder%\report.txt" del "%topfolder%\report.txt" > nul
for /F "delims==" %%G in ('dir /b /ad') do (
echo %%G>> %topfolder%\report.txt
cd %topfolder%\%%G
for /f "delims==" %%H in ('dir /b /ad') do (
set string=%%H
set /a length=0
echo !string!> %topfolder%\string.txt
for %%a in (%topfolder%\string.txt) do (
set /a length=%%~za)
set /a length -=2
if !length! GTR 3 echo %%H>> %topfolder%\report.txt
)
)
cd %topfolder%
type report.txt
echo end of program | 2014-11-24 02:45:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6235790252685547, "perplexity": 12554.533744188233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400380355.69/warc/CC-MAIN-20141119123300-00232-ip-10-235-23-156.ec2.internal.warc.gz"} |
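A side note on the thread above: the repeated "Missing operand" and "cannot find the path" errors are consistent with the unquoted %topfolder% references breaking once the working directory contains a space ("Documents and Settings", "test folder"); wrapping each %topfolder%\... reference in double quotes is the usual batch fix. For comparison, here is the same task (print each top-level folder name plus any subfolder names longer than 6 characters) as a short Python sketch, with a function name of my own choosing, that sidesteps the quoting and delayed-expansion pitfalls entirely:

```python
import os

def long_subfolders(top, min_len=7):
    """Top-level folder names, each followed by its subfolder names
    of at least min_len characters (i.e. longer than 6)."""
    lines = []
    for folder in sorted(os.listdir(top)):
        path = os.path.join(top, folder)
        if not os.path.isdir(path):
            continue
        lines.append(folder)                      # always list the parent
        for sub in sorted(os.listdir(path)):
            if os.path.isdir(os.path.join(path, sub)) and len(sub) >= min_len:
                lines.append(sub)                 # only long subfolder names
    return lines
```

Writing `"\n".join(long_subfolders(os.getcwd()))` to report.txt reproduces the batch script's output without a temporary string.txt, since len() replaces the file-size trick used above.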
https://dev.to/nickymarino/lets-use-latex | # Let's use LaTeX!
Nicky Marino
This post was originally posted on my blog here.
LaTeX is a beautiful documentation system. It's similar to Markdown, but has many more features and is commonly used for academic papers and other publications that require a lot of equations. In this quick how to, we cover how to install LaTeX and use Visual Studio Code as an editor/previewer.
# Setting up our editor
If you haven't already, install Visual Studio Code and go through a tutorial. Then, we need to install our extension for LaTeX itself. Head over to LaTeX Workshop and click install.
# Using LaTeX
Now that we have our editor setup, we can write our first project. All LaTeX documents have a (non-blank) file that ends with .tex, which is the "main" file that has all of the text of the document. Since LaTeX usually generates more files (such as .log, etc.) while building the document, it's recommended that every document you want to write has its own folder.
For starters, create a file named example.tex:
\documentclass{article}
% General document formatting
\usepackage[margin=0.7in]{geometry}
\usepackage[parfill]{parskip}
\usepackage[utf8]{inputenc}
\begin{document}
This is starter text.
\end{document}
Press Ctrl-Alt-B to build your project (or use the Command Palette), then Ctrl-Alt-T to view the PDF in a new tab.
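Equations are where LaTeX really pays off, so a natural next step (my addition, not part of the original post) is to put some math between \begin{document} and \end{document} in example.tex:

```latex
This is starter text, now with inline math $e^{i\pi} + 1 = 0$
and a numbered display equation:
\begin{equation}
    \int_0^\infty e^{-x^2} \, dx = \frac{\sqrt{\pi}}{2}
\end{equation}
```

Rebuild with Ctrl-Alt-B as before; the equation environment numbers the formula automatically.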
# Conclusion
LaTeX and VSCode are a great combination that you can use to write beautiful reports and papers. Check out a tutorial or two to realize the full experience LaTeX has to offer.
Edit: Fanny recommends another great tutorial as well.
Edit 2: Fixed a tutorial link.
### Discussion
Lucas Reeh
If you do not use LaTeX often and just need to write one paper, there are online tools available. I set it up on my machine because I write a lot, but I was trying out overleaf.com/ and it was a great experience because they have a lot of templates and git support. And no, I do not get money to recommend them :D I just think it is a good place for beginners who have problems with the installation ;)
Nicky Marino
+1 Overleaf is great! I've recommended it to a few people as well.
Andrés
I strongly suggest using a specialized IDE such as TeXstudio, for me it was the best option (there are a lot!) and simply a delight to use during my college years.
Dr Janet Bastiman
I've loved LaTeX since I first saw it. I've been advocating Pweave for literate programming in my team and trying to get them to use it for automated diagrams too. I'm now delving into the depths of style files to create a PDF that appeases the marketing department for platform-to-customer automation. I've got a couple of posts already written on diagrams in LaTeX and using other TrueType fonts which I'll cross-post to here.
Michael Kohl
I treat LaTeX more as a compilation target. For example Emacs' org-mode can export wonderful LaTeX documents, including Beamer presentations etc. Best of both worlds IMHO, you get to write an extremely simple markup language (inline LaTeX is supported though) and Emacs takes care of converting that to LaTeX first and your desired output format later from that.
orgmode.org/manual/LaTeX-and-PDF-e...
Heiko Dudzus
I second that. Org-mode is great. I rarely use LaTeX directly these days, but I use org-mode for many things, often exporting via LaTeX to PDF.
Jan Wedel
It’s probably very powerful, but it won’t replace markdown. I use markdown very often in Sublime with highlighting to just structure some thoughts without ever generating a document from it. Markdown is just some formalized way to structure a text file like humans would do. Latex clearly is not something a human would write.
Now look at your basic example; it's completely unreadable as source.
For more complex technical documentation I would still probably use Asciidoc.
Then, there are scientific papers. Actually when I did my masters thesis, I wanted to use Latex but I knew I had to use a lot of formulas, pictures and tables. The way to add those to a latex document was overly complex at that time and unreadable in source. So I read a couple of articles about how to handle large documents in Word (yes, I said it) without crashing it. Then I spent some time to create a Layout, styles and add fonts etc to resemble a document that looks like something that Latex would produce. At the end, I was much faster with the formula editor, inline Tables and graphics.
So I don't think I would ever write LaTeX as a primary source. Some WYSIWYG editor that generates LaTeX that I could change if I want to would suit me better.
Espoir Murhabazi
Thanks a lot, I've used LaTeX for 2 years now for my research papers and all my university homework; it is a good and awesome tool. My favorite editor is TeXstudio. The other thing I like about LaTeX is its community: tex.stackexchange already has answers to any problems you can have with that editor.
Shreyas Minocha
I'm a fan of lyx, a WYSIWYM(What you see is what you mean) editor for latex.
michie1
In my experience this is an application you want to use with Docker.
michie1
This is based on my experience a few years ago. I installed LaTeX from the default Ubuntu repo and it missed a package. Installing the missing package didn't work because it depended on a more recent version of LaTeX. I installed LaTeX with the official ISO, but there were some conflicts with the previous installation. I removed the previous installation but forgot to remove some configuration settings. Possibly it's just me, but it was not plug and play.
A few months ago I needed LaTeX again and I tried one of the popular publicly available LaTeX Docker containers. Besides waiting for the container to download, it only took me a minute to figure out what I wanted to do, do it, and remove the container again.
Alex Escalante
That's just how I use it nowadays. Look for the image in the Docker Hub!
Peter Hoffmann
I finally managed to write my reply to your article, thank you for the motivation
Alex Escalante
You can use LaTeX from a Docker container, so you don't have to install anything! It works like a charm…
Alex Escalante
I've produced novels, poetry books and whatnot using LaTeX. It's quirky and strange, but it gets beautiful results. More people should try it!
Pedro Domingo
I also suggest online platforms like Sharelatex or Overleaf. They work really well and are also really easy to use.
Deyan Ginev
You can also try Authorea if you're interested in web-friendly LaTeX with code snippets, data, etc. | 2020-11-27 10:07:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6040767431259155, "perplexity": 1877.2247750494848}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141191511.46/warc/CC-MAIN-20201127073750-20201127103750-00703.warc.gz"} |
https://socratic.org/questions/how-to-plot-the-graph-of-f-x-sin-7pi-2 | # How to plot the graph of f(x)=sin ((7pi) / 2)?
Sep 7, 2015
This is a constant function as it has no x term in it, so its graph is a straight line of gradient zero with y-intercept sin((7pi)/2) = -1.
#### Explanation:
This is a constant function as it has no x term in it, so its graph is a straight line of gradient zero with y-intercept sin((7pi)/2) = -1.
Here is the graph :
graph{0x-1 [-18.46, 21.54, -3.16, 16.84]} | 2019-07-18 00:44:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4484989047050476, "perplexity": 1256.1159990353506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525483.62/warc/CC-MAIN-20190718001934-20190718023934-00419.warc.gz"}
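For the record, 7π/2 = 3π + π/2 sits at a trough of the sine curve, so sin(7π/2) = -1; the constant being plotted should therefore be -1 rather than the angle 7π/2 ≈ 10.996. A quick check in Python:

```python
import math

# f(x) = sin(7*pi/2) has no x-dependence; evaluate the constant once.
value = math.sin(7 * math.pi / 2)
print(value)  # approximately -1.0
```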
https://www.physicsforums.com/threads/taylor-series-of-1-sqrt-cosx.93082/ | Taylor series of 1/sqrt(cosx)
1. Oct 8, 2005
ascky
Is there a way to get the Taylor series of 1/sqrt(cosx), without using the direct f(x)=f(0)+xf'(0)+(x^2/2!)f''(0)+(x^3/3!)f'''(0)... form, just by manipulating it if you already know the series for cosx?
2. Oct 8, 2005
ascky
nvm.. I got it (yay) | 2017-08-18 21:18:50 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8915229439735413, "perplexity": 1942.6069452644535}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105108.31/warc/CC-MAIN-20170818194744-20170818214744-00565.warc.gz"} |
https://porespy.readthedocs.io/en/master/modules/visualization.html | visualization¶
Create Basic Views
This module contains functions for quickly visualizing 3D images in 2D views.
porespy.visualization.sem(im[, direction]): Simulates an SEM photograph looking into the porous material in the specified direction.
porespy.visualization.xray(im[, direction]): Simulates an X-ray radiograph looking through the porous material in the specified direction.
porespy.visualization.show_3D(im): Rotates a 3D image and creates an angled view for rough 2D visualization.
porespy.visualization.set_mpl_style()
porespy.visualization.show_mesh(mesh): Visualizes the mesh of a region as obtained by the get_mesh function in the metrics submodule.
porespy.visualization.sem(im, direction='X')[source]
Simulates an SEM photograph looking into the porous material in the specified direction. Features are colored according to their depth into the image, so darker features are further away.
Parameters: im (array_like) – ND-image of the porous material with the solid phase marked as 1 or True. direction (string) – Specify the axis along which the camera will point; options are 'X', 'Y', and 'Z'. Returns: image (2D-array) – A 2D greyscale image suitable for use in matplotlib's imshow function.
porespy.visualization.xray(im, direction='X')[source]
Simulates an X-ray radiograph looking through the porous material in the specified direction. The resulting image is colored according to the amount of attenuation an X-ray would experience, so regions with more solid will appear darker.
Parameters: im (array_like) – ND-image of the porous material with the solid phase marked as 1 or True. direction (string) – Specify the axis along which the camera will point; options are 'X', 'Y', and 'Z'. Returns: image (2D-array) – A 2D greyscale image suitable for use in matplotlib's imshow function.
porespy.visualization.show_3D(im)[source]
Rotates a 3D image and creates an angled view for rough 2D visualization.
Because it rotates the image, it can be slow for large images, so it is mostly meant for rough checking of small prototype images.
Parameters: im (3D-array) – The 3D array to be viewed from an angle. Returns: image (2D-array) – A 2D view of the given 3D image.
Notes
This function assumes that the image contains True for void space and so inverts the image to show the solid material.
porespy.visualization.set_mpl_style()[source]
porespy.visualization.show_mesh(mesh)[source]
Visualizes the mesh of a region as obtained by get_mesh function in the metrics submodule.
Parameters: mesh (tuple) – A mesh returned by skimage.measure.marching_cubes. Returns: fig (Matplotlib figure) – A handle to a matplotlib 3D axis. | 2019-12-11 05:29:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2234858125448227, "perplexity": 3655.760684407725}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540529955.67/warc/CC-MAIN-20191211045724-20191211073724-00193.warc.gz"}
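As a rough illustration of what sem does conceptually: march along the view axis at every (row, column) position and record the depth at which the first solid voxel appears, then shade by that depth. The sketch below is stdlib-only with a made-up function name; the real PoreSpy implementation works on numpy arrays and returns a greyscale image rather than raw depths.

```python
def sem_depth_view(im):
    """im: 3D nested list of 0/1 voxels indexed im[x][y][z], solid = 1.
    Returns a 2D list of the depth (x index) of the first solid voxel
    seen when looking along the x-axis; None where the column is all void."""
    nx, ny, nz = len(im), len(im[0]), len(im[0][0])
    view = [[None] * nz for _ in range(ny)]
    for y in range(ny):
        for z in range(nz):
            for x in range(nx):        # march into the material
                if im[x][y][z] == 1:   # first solid voxel along this ray
                    view[y][z] = x
                    break
    return view
```

Mapping larger depths to darker greys gives the "darker features are further away" effect described above.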
https://leonardtang.me/posts/Stonks/ | Memeing r/wallstreetbets.
# To the Moon
Enjoy: http://www.wsbstonks.us/.
# Getting the Data
Much like the Twitter API, the Reddit API is garbage. It limits you to the 1000 most recent listings by time – truly unfortunate.
Thankfully, the Pushshift.io API is an effective surrogate. You have to make paginated requests and wait an annoying amount of time between requests to not overload the API, but at least it works. I also had to throw in some try-excepts with continuous re-requesting to prevent my script from dying in the event of an ephemeral server error. I ran this all from a tmux shell in case my computer decided to go haywire in the middle of the run. There were also periodic checkpoint saves for the data – smart choice, as the total script took ~10 hours to run, and halted multiple times during execution.
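The "continuous re-requesting on server error" logic described above boils down to a retry loop with a sleep between attempts; a minimal sketch (names are illustrative, not from the actual script):

```python
import time

def fetch_with_retries(fetch, url, tries=5, delay=1.0):
    """Call fetch(url), retrying on any exception so an ephemeral
    server error does not kill a 10-hour scrape."""
    last = None
    for _ in range(tries):
        try:
            return fetch(url)
        except Exception as exc:
            last = exc
            time.sleep(delay)  # be polite to the API between attempts
    raise RuntimeError("gave up after %d attempts" % tries) from last
```

In the real script this would wrap the paginated Pushshift request, with periodic checkpoint saves between pages.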
Surprisingly, WSB gets a lot of submissions per day. In the past year alone WSB received something like 1,200,000+ submissions. Evidently, retail investors go hard.
To actually count up stock mentions, I performed some not-so-fancy exact-matching for all NYSE stock tickers and names, with various common (mis)-spellings within the latter.
Using the yahoo-finance-2 API, I grabbed historical pricing data for the top ~20 stocks by WSB mention. (Note the official Yahoo Finance API is deprecated; shoutout to the hero maintaining this third-party API).
But why stop at stocks? I decided to rinse and repeat the above process for cRyPtoCuRrEncIes.
Finally, I also summed up the total WSB stock mentions over time to get a proxy measure for total retail investor activity. I ended up comparing this to the VIX over time to get a sense of whether or not retail investors move the needle on market volatility (spoiler, they don’t).
# Aesthetics
The frontend was built with Dash, HTML, and CSS, which got me very far. I am by no means a frontend guru, but the site still turned out reasonably sexy.
# MaCh33n L3aRn
Ah, but what’s a quant project without some ~fancy~ MaCh33n l3Arn.
To do some sentiment analysis on WSB submissions, I decided to go overboard and use some SOTA Transformer models. Really, an SVM or even VADER probably could have sufficed. But I was in the mood to be extra, so I ended up playing with an off-the-shelf DistilBERT model and a finetuned XLNet model.
I grabbed the DistilBERT model from Hugging Face – it was pretrained on the SST-2 sentiment analysis classification task, so it was a perfect fit for my needs. Unfortunately, the max window length for the model is 512 tokens. The max token length in the WSB data was something like 4000+. As such, I needed to slice up each submissions into 512-windows, classify each window, and take a max-vote across windows for the ultimate submission classification. This is an OK approach, but not very scientific at all.
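That slice-classify-vote procedure is simple enough to sketch in a few lines (the classify argument stands in for the real DistilBERT pipeline call):

```python
def classify_long_text(tokens, classify, window=512):
    """Split a token sequence into fixed-size windows, classify each
    window, and return the majority-vote label for the whole text."""
    chunks = [tokens[i:i + window] for i in range(0, len(tokens), window)]
    votes = [classify(chunk) for chunk in chunks]
    return max(set(votes), key=votes.count)  # most common label wins
```

As the post says, this is only an OK approach: a decisive sentence in one window counts no more than a neutral window, which is part of the motivation for XLNet below.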
In comes XLNet, which builds on Transformer-XL. XLNet is basically Transformers on steroids. Actually, it's also RNNs on steroids: it adds recurrent connections between segments of tokens, rather than between individual tokens, and each segment itself is modeled with attention. So it's essentially an RNN wrapped on top of a Transformer. It's better at dealing with long inputs than either vanilla Transformers or RNNs themselves; this is reflected in the fact that there is no explicit max token length, unlike our original DistilBERT model.
I went ahead and grabbed a pretrained XLNet model from Hugging Face, and fine-tuned it on the IMDB Movie Reviews dataset. It achieved ~96% accuracy on the test set. To be more rigorous, I really should have labelled some WSB comments (via MTurk or the like) to ensure that XLNet was really doing what it’s supposed to. In any case, it should be pretty robust and effective at classifying WSB submissions.
# Expert.ai Hackathon
I also submitted this project on a whim over the summer and ended up being a category winner of the Expert.ai NLP hackathon. See the press release here. | 2023-03-29 19:11:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20834870636463165, "perplexity": 3848.082571635818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949025.18/warc/CC-MAIN-20230329182643-20230329212643-00081.warc.gz"}
http://xvet.pixj.pw/membrane-stress-in-ansys.html | ANSYS Workbench Finite Element Analysis (FEA) tips and tricks article on Creation of a Path between Nodes in ANSYS 14. solid elements. Does anyone have knowledge on decomposition of stresses in shell elements - membrane and bending stresses? Hello everyone, I am using the Ansys SHELL181 element for analysis of a dome structure in MAPDL. Assume that the membrane is made of OFHC (Oxygen-free High Conductivity) copper. Linearized membrane plus bending stress is divided by the nominal stress (in a straight pipe) to determine the C. ANSYS AUTODYN has been used in a number of applications. Stress decomposition: the total stress is the elastic stress resulting from external loads; the membrane stress is its average value through the thickness; the bending stress is its linearly varying part through the thickness (2007 ANSYS, Inc.). References 40 and 41 discuss the effect of notches on low-cycle fatigue. Understanding and Using Shell Element Results - Part II, November 28, 2014, by Peter Barrett: this post is the second in a two-part series that discusses the use of shell elements for finite element analysis. The radial stress for a thick-walled pipe is equal and opposite to the gauge pressure on the inside surface, and zero on the outside surface. Stresses: Beams in Bending - longitudinal axis. In the next ANSYS analysis, holes of size 4, 8, 10, 12, 14, 16, 20 mm located at the center of the height in all three cylinders were used for size optimization. Explanation of stress linearization in ANSYS Workbench. IJSER15159. How and why the proposed design was chosen: mechanical system CAD drawings, electrical system, LabView code, electrical connections, feasibility, ANSYS, part decisions, bill of materials. P09045: Membrane Characterization Test Stand. Purpose: Create a test stand for measuring physical properties (stress.
Questions or comments can be sent to Kent L. Verification model via ANSYS Shell model via ANSYS- This customer need is not necessarily something the senior design team must do but something the lead engineer can be able to do after for his thesis work. mass) is performed using ANSYS with Shell 41 elements. How can I extract Bending moment and shear force in ansys workbench? My model is not a beam. The way most people do that is with *VWRITE. VM3 - Thermally Loaded Support Structure. I would like to know if anyone has any conservative rules of thumb, as a quick check, that would ensure I'm in compliance with the membrane and bending stress criteria. The maximum stress for the case with applied pressure was 467 MPa. You may recall that a circular hole in a plate has a stress concentration factor of about 3. The safe design, installation, opera-tion, and maintenance of pressure vessels are in accor dance with codes such as American Society of Mechanical. I do ANSYS technical support, training, tutorial and best practice writing, consulting and webinars across Europe and globally. In order to qualify the support a limit load analysis is performed. Thermo-Elastic Stress Analysis of the GHARR-1 Vessel during Reactor Operation Using ANSYS 13. The element has variable thickness, stress stiffening, large deflection, and a cloth option. Reliable FE-Modeling with ANSYS Thomas Nelson, Erke Wang CADFEM GmbH, Munich, Germany Abstract ANSYS is one of the leading commercial finite element programs in the world and can be applied to a large number of applications in engineering. Figure 7: ANSYS Elements with Local Coordinate System The APDL Commands Object macro presented in this document is used to linearize bending stress on a cross-section. See the complete profile on LinkedIn and discover Dapeng’s connections and jobs at similar companies. typical stress–strain curves shown in Figures 1 and 2. 
His solution very logically assumed that a thick cylinder to consist of series of thin cylinders such that each exerts pressure on the other. The values of these stress are use to compare with criteria in mechanical engineering design codes. shown in Figure 1. The ACT extension can be found here: Linearized Stress V6 (registration required). Matthew Rudow Account Manager at ANSYS, Inc. If redirected to ANSYS App Store. However, according to the single response analysis, there are some conflicts for these two objectives. 41 of the ANSYS Theory Reference for more details about this element. We have extensive experience with all facets of structural FEA including: Static Stress Analysis; Vibration Analysis Modal; Random Vibration / Power Spectral Density (PSD). It represents purely normal stress. Nodal stresses are averaged at the boundaries between elements. 5k x Sm* section, includes discontinuities but not Stress concentrations). The Forces and Moments acting on a certain cross section are divided by the section properties to yield the nominal membrane and bending stress. Membrane Stress. I am currently a master student in Mechanical and Aerospace Engineering at the University of California Irvine. the commercial computational software ANSYS Work-bench (ANSYS Inc. The element has six degrees of freedom at each node: translations in the nodal x, y, and z directions and rotations about the nodal x, y, and z-axes. Then I used the Mechanical APDL component system in the toolbox to open this same model in Classic. help software ANSYS has been used for the analysis. I am triying to write bending and membrane stresses for each nodes on a predefined path in a text file using Ansys APDL. You may recall that a circular hole in a plate has a stress concentration factor of about 3. 0, you can use the PRSECT command to linearize the total stress along a path. 
Using this plugin, you can quickly determine both membrane and bending stresses at multiple points across a pressure vessel and easily determine if your design meets ASME requirements. Hi everybody, how can I define soil parameters such as cohesion and plasticity stress and strain of soil in "ansys workbench"? I knew there is "drucker prager" model option in "ansys classic" to define these parameters but I couldn't find any option in "ansys workbench. 41-1 SHELL41 Membrane Shell. The term (A/α) is called the effective shear area. This page discusses how the primary membrane and primary bending stresses (Pm and Pb) are calculated. Understanding the performance and durability of welds is a key aspect of many engineering design processes. * Application 의 규격에 따라 (압력용기 - ASME 코드에 따라 Membrane, M+Bending, M+B+Peak Stress 를 비교, 기타 Military, SAE 규격 등) 연성(von Mises-Hencky theory)재료의 경우 Maximum Equivalent Stress Safety Tool , Maximum Shear Stress Safety Too l 기준에 따라 안전계수를 판단하며,. Questions will be welcome at the end of the talk, and at the conference after the luncheon address. membrane and bending stresses are derived across the thickness of a plate. 3 Membrane Structural Analysis in ANSYS To implement the nonlinear textile model in the analysis of tent structures is very chandelling, due to both nonlinear material behavior and geometrical nonlinear behavior of the membrane structure. two operational modes for OMDPs [7,8] where in FO mode the active layer of membrane. 2 which is for design by analysis for critical junctions in pressure vessels and in which detailed stress classification under various loadings is carried out unlike div. how to apply symmetric boundary condition 2. Another element having "membrane only" capability as an option is SHELL63. The equation of motion for this membrane with clamped edges was derived and a closed-form solution for the. Damped • Training Manual DYNAMICS 8. 
Note that the diagonal of a stress element toward which the shearing stresses act is called the shear diagonal. For each type of fuel cell add-on module, you will find background information pertaining to the models, a theoretical discussion of the models. Skilled and result-oriented Mechanical Engineer with 15 years of experience within various engineering disciplines including 3D/2D design, stress analysis and simulations (FEA), computational fluid dynamics (CFD), R&D projects and specific software programming. This causes a change in the geometry between the right and left nasal passage,. The safe design, installation, opera-tion, and maintenance of pressure vessels are in accor dance with codes such as American Society of Mechanical. A typical example is a. com Drag the structural modual into the work space. There are a number of different codes of practice EN, DNV, BS and ASME all of which give their own recommendations and methodologies for how welds should be modelled and assessed with the Finite Element Method, the choice of which is largely industry drive. ANSYS Results for Axial stress of steel I- Beam. stresses are never classified as primary stresses. One of the major barriers for polymer electrolyte membrane (PEM) fuel cells to be commercially viable for stationary and transportation applications is the durability of membranes undergoing chemical and mechanical degradation over the period of operation. Permeate flux and pressure drop from different sinusoidal membrane channels and the mesh spacer-filled membrane channel were compared to evaluate the performance of the spacers. The capability for calculating linearized stresses is available in the Visualization module of ABAQUS/CAE; it is most commonly used for two-dimensional axisymmetric models. Strength, yield stress, and elonga-tion are not measurably affected after 60 hr exposure in 85 psig steam 163°C (325°F). 
Our curriculum lays great stress on practical, hands-on learning over and above the rigorous academic requirement of the IIT system. Further options are also provided to account for stress gradients and surface finishes. Renewable Energy. Shell Elements in ANSYS. This improves the efficiency and life time of the riveted joints. 1 Introduction Here the concepts of stress analysis will be stated in a finite element context. Linearizing decomposes the total stress into the membrane, bending, and peak stress components. At time t=3, the load should be. Residual stress is well defined as the stress in the mechanically unloaded structure (e. SHELL 209 = like 208, but with midside node (3-node element) SHELL 28 = shear twist panel – 3 DOF/node (3 translation or 3 rotation). ANSYS simulation tools are also very effective for simulating problems that require solid FEA elements, as opposed to zero thickness membrane elements as is typical for sheet metal problems. 0 The purpose of this tutorial is to outline the steps required to view cross sectional results (Deformation, Stress, etc. Membrane desalination is a pressure driven process which is being employed on a large scale in areas which do not have an easy access to fresh water resources. A membrane stress value is calculated at the interpolations points based on the 1/t formula from the document. Let us take the example of the cantilever beam. The stresses could be easily summated over the cross-section to give the Fx axial force and Mz bending moment at these two stations. Shell and Membrane Elements • Membrane elements in ABAQUS/Explicit –Membrane elements (M3D4R, M3D4, M3D3) are used to represent thin surfaces in space that offer strength in the plane of the element but have no bending stiffness. ANSYS has these elements available for stress, thermal, electro-magnetic, or multi. Modeling is done by CATIA V16. 48 Ecole Polytechnique F¶ed¶eral Lausanne¶. and the element type is shell 181. 
The stress linearization option (accessed using the PRSECT, PLSECT, or FSSECT commands) uses a path defined by two nodes (with the PPATH command). They can be used to model thin membrane like materials like fabric, thin metal shells, etc. Ł You can also quickly locate the maximum and minimum values of the item being queried. ) are parallel to the element coordinate system, as are the membrane strains and curvatures of the element. After a series of subsequent heat stress studies, bulls were castrated and testicular tissue samples processed for evidence of histopathology. Primary stresses – Due to mechanical loads – satisfies force and moment equilibrium – Primary stress that exceeds the yield stress by some margin will result in failure – Exclude stress concentrations Secondary stresses – arise from geometric discontinuities or stress concentrations Primary membrane stresses – Membrane component of. The configuration’s shape had. Electro-mechanical modeling and simulation of RF Stress contours in the membrane 54 finite element code ANSYS is used for simulations and modeling. Xavier Martinez, 2012 05. aspx?doi=10. Also provided is the Stress Intensity, or Equivalent Stress, if required. The configuration’s shape had. ANSYS ASAS is a software suite that is part of the comprehensive range of ANSYS applica-tions for collectively satisfying demanding engineering and design requirements of the offshore industry. Specialities: Membrane technology, polymeric hollow fibre membrane fabrication for ultrafiltration, membrane module design, adsorption of heavy metal, wastewater treatment, nano-composite, process design, expertise on momentum heat and mass transport, heat and material balance, aspen Plus, basic knowledge of MATLAB, Microsoft excel, and Origin basic knowledge of ANSYS CFD. The vertices are nodes and triangles are elements. Specialties: Finite Element Analysis, Stress lead, Patran/Nastran, Ansys, Samcef, Femap. 
The integrated reactor structural design is evaluated to demonstrate with applicable criteria and ANSYS/WORK- BENCH has better operability than ANSYS APDL on stress analysis of. Strength, yield stress, and elonga-tion are not measurably affected after 60 hr exposure in 85 psig steam 163°C (325°F). Normal stress analysis-2 stresses. Nodal stresses are averaged at the boundaries between elements. How it works (Clients). I will graduate in December 2019. ANSYS program had stress linearization tool to obtain the stress component as membrane stress P m and bending stress P m m + + analysis. This feature is not available right now. Fig 4: Simulated contour plots of the reference strain gauge by ANSYS with a tensile stress of 10 Pa applied along x-axis. Find Freelancers; Find Tasks; How it Works. A hemispherical tank with radius r is used to contain water and is supported by flanges as show. (To avoid confusion, I would add a comment: inhomogeneous stress profile means the distribution of the stress along the direction parallel to the lipid molecules. Read "Correlation between epitaxial growth conditions of 3C–SiC thin films on Si and mechanical behavior of 3C–SiC self-suspended membranes, Sensors and Actuators A: Physical" on DeepDyve, the largest online rental service for scholarly research with thousands of academic publications available at your fingertips. To learn how to utilize local mesh control for the solid elements it is useful to review some two-dimensional (2D) problems employing the triangular elements. Design of bossed silicon membranes for high sensitivity microphone applications The mechanical stress produced by the deformation of the membrane is converted to a resistance change by the piezoresistive effect. Figures showing the Stress Transformation of 2D x-y Stress Element in non-eccentric portion of the bar To start the comparing values view the Maximum Principal Stress results. 
The model consists of five components: a square passive silicon membrane, a silicon substrate, a PZT thin film, a square top electrode, and a silicon residue region. The way most people do that is with *VWRITE. Equivalent stress is a concept helpful in the design of components undergoing a stress field along multiple components of stress. These elements may only be applied if there is no bending outside the plane of the structure, like in walls, deep beams and the like. This project consists of the design of a polymer electrolyte membrane fuel cell, building a single cell fuel stack and experimental testing thereof. STRESS The intensity of internal forces in a body (force per unit area) acting on a plane within the material of the body is called the stress on that plane. Understanding the performance and durability of welds is a key aspect of many engineering design processes. We begin by recasting the six global stress tensors in the local coordinate system dictated by the stress classification line (SCL). In addition, explains, stress extraction and stress assessment though stress linearization, membrane stress & membrane plus bending stress evaluation. Then I used the Mechanical APDL component system in the toolbox to open this same model in Classic. From: Materials and the Environment (Second Edition), 2013. See the complete profile on LinkedIn and discover Afsaneh’s connections and jobs at similar companies. Load incrementation of initially weak structures I just worked on an FEA model which was essentially a flat, very thin plate of plastic with a pressure applied. ) for the coupling of the finite-element-based software, ANSYS, with the finite-volume-based soft-ware, ANSYS CFX. VM5 - Laterally Loaded Tapered Support Structure. how to apply symmetric boundary condition 2. membrane inflation experiment of human aortic tissue by the help of ANSYS Workbench. 5 kN/m 2 to 2. a finite element model using ANSYS. 
- "Cartesian" or generalized linearization in ANSYS - "Linearization for three-dimensional structures" in ABAQUS - Stress Linearization Procedure described in Section 5. Maximum linearized membrane plus bending stress through the wall are determined for each bend and weld region. Around its first natural resonance frequency, Z m can be modeled by a mass and a spring system. ANSYS: Circular membrane and pre-stress by the "INISTATE" command? Dear all I am about to compare the eigen frequencies of a prestressed membrane calculated analytically to one in Ansys. Membrane and bending stress keyword after analyzing the system lists the list of keywords related and the list of websites with related content, in addition you can see which keywords most interested customers on the this website. ANSYS Examples These pages have been prepared to assist in the use of ANSYS for the formulation and solution of various types of finite element problems. 0, 5 in the format needed by the. It is most common pump. The rest of this talk will be a non-technical history of ANSYS, the worlds premier engineering simulation software. Are the above listed SMISC quantities doing the same thing as the stress linearization option under 'Path operations'? Where can I find more information on how these quantities are being calculated by Ansys?. We'll ask to evaluate the expression seqv (von Mises) and set the integration option of using the elemental mean. mary membrane stress, secondary membrane stress and bending stress to be evaluated. Stress Membrane Peak= 84,6Mpa (low physical stress) Results : Stress Membrane region Weld longitudinal (6,4Mpa) Stress Equivalent Von Misses Peak= 77,7Mpa. The object is fixed along part of the boundary and does not move. A of ASME Section VIII, Division 2 x. View Alireza Pourimani’s profile on LinkedIn, the world's largest professional community. Problem Specification. 
I have a shell model in ABAQUS and I would like to extract the bending and membrane stress of certain elements through the thickness. 5, used for stress linearization. ANSYS has these elements available for stress, thermal, electro-magnetic, or multi. For the fabricated membrane. • 2) Radial stress which is stress similar to the pressure on free internal or external surface. The critical stress (σ cr) was determined using an 2. Mb Mb A B D C Mb Mb Now run around to the other side of the page and look at the section AB. It briefs about geometry creations as well as meshing of nozzle & shell. If redirected to ANSYS App Store. Dear collegues, I am trying to simulate a special kind of joints and figuring out how the stresses will be distributed around them. Are the above listed SMISC quantities doing the same thing as the stress linearization option under 'Path operations'? Where can I find more information on how these quantities are being calculated by Ansys?. Rattanasiri et al. 2) The stress in the direction "normal" to the beam (or shell) mid-surface is zero throughout the response history. To linearize stresses, the following procedure is used: You define a section through your model. 1 DYNAMICS 8. Analytical results of CPT are validated with ANSYS results. Is it possible to get this data using *VGET? I could not find these items (bending and membrane stresses ) in *VGET command. Some parts that might experience axial force are building joists, studs and various types of shafts. In the Excel file it appears that ANSYS has already categorised the stresses into Membrane, Bending, and Peak etc. To illustrate the ANSYS linearization Figures 8(a) to 8(f) show the stress linearization results œction AA (ee. These we conveniently resolve into parallel and meridional stresses, $\sigma$ p and $\sigma$ m as shown in the expansion. ANSYS macro is used to implement the offset direction and weld cap, mesh, analyze and post-process the results. 
the pressure acts in the +z direction). Figure 1 The sketch of the steam generator support. The stress function has noting on x in most of the cross section. Linearized membrane plus bending stress is divided by the nominal stress (in a straight pipe) to determine the C. Figure Figure1 1 shows a shell element accompanied with stress resultants required for membrane theory under symmetric loading. Under my supervision, we have initiated and successfully performed FEM-based analyses of several mechanical components using ANSYS 14. The temperature variation has been used as boundary condition to the problem. It is of interest in pressure vessel design when it exceeds the primary membrane stress outside the local region remote from discontinuities. The stress intensity for each of these components can be compared to the appropriate ASME Code limit. A more general case is when a component may be subjected to two normal stresses acting at right angles plus shear stresses. and the element type is shell 181. The mass flux of vapour across the membrane is regulated by three diffusion mechanisms [3]: 1) Knudsen diffusion through the membrane pores; 2) Poiseuille flow through the membrane. • membrane stiffness only option since “membrane stresses” are required. Firstly, the high stress. Printout includes the moments about the x face (MX), the moments about the y face (MY), and the twisting moment (MXY). In this paper, stress analysis is carried out for an I-section beam with semi-elliptical cracks using ANSYS software, where four case studies are done by varying the location of crack in the beam. This can be three-dimensional membrane elements in a non-flat geometry or e. Residual stress is well defined as the stress in the mechanically unloaded structure (e. The Membrane is fixed through the sides with a residual/intrinsic/internal stress of about 1Mpa. The element formulation is based on logarithmic strain and true stress measures. 
If at some point principal stress is said to have acted it does not have any shear stress component. A del codice, gli unici valori di ansys sono: membrane, bending, m+b, peak. EXAMPLE The general primary membrane stress in a pipe loaded in pure tension is the tension divided by the cross-sectional area. This is nearly so in the plot above. Everything At One Click Sunday, December 5, 2010. It's important to note that peak stress is generally only important for fatigue, and really it doesn't matter for structural integrity. Is it possible to get this data using *VGET? I could not find these items (bending and membrane stresses ) in *VGET command. After a series of subsequent heat stress studies, bulls were castrated and testicular tissue samples processed for evidence of histopathology. I have chosen the pre-listed neoprene rubber in ANSYS Workbench. A typical evaluation section usually is the structural discontinuous section where stress intensity is high due to mechanical loads. The code rules for determining the acceptability of stresses refer to membrane + bending stress… Step 5b – Calculate the Bending + Membrane Stress: Pm + Pb. For example, the element mean stress of a first order hexa element is the average stress of its 8 nodes which attached to the element. We design tailor-made heat transfer systems. Applying the ACT linearized stress extension. Autonomous time and work managing. Maximum linearized membrane plus bending stress through the wall are determined for each bend and weld region. 0 The purpose of this tutorial is to outline the steps required to view cross sectional results (Deformation, Stress, etc. Find the first five modal frequencies for a load of 10,000 lbf. (refer Clause 5. In this study, a new three-dimensional CFD model was developed using ANSYS FLUENT to study the shear stress on industrial scale hollow fibers in non-Newtonian flow and compared with. 
Solving linear problems with solid elements is mostly straight forward in ANSYS but once the non-linear region is encountered, a different type of solver is. The second is to further examine the accuracy of a new 3-D Cosserat eight noded brick element (Nadler and Rubin in Int J Solids Struct 40: 4585–4614, 2003) which was developed within the context of the. POST1 - Stress Linearization' talks about separation of stresses through a section into constant (membrane) and linear (bending) stresses by commands like PRSECT, PLSECT, or FSSECT. The design of openings and nozzles is based on two considerations 1. frequency, or postprocesses results for a random acoustics analysis with diffuse sound field. Hi there! I have a very short question. element employing an in-plane, constant-stress assumption. Design of bossed silicon membranes for high sensitivity microphone applications The mechanical stress produced by the deformation of the membrane is converted to a resistance change by the piezoresistive effect. Stress intensity is defined as twice the maximum shear stress (which is equivalent to the difference between the largest principal stress and the smallest principal stress as a given point). Another element having "membrane only" capability as an option is SHELL63. where gyroscopic damping effects are important. Stresses: Beams in Bending 237 gitudinal axis. help software ANSYS has been used for the analysis. The 2D plane stress FE model has a thickness of 1 mm and consists of 4724 PLANE2 elements and 9672 nodes. membrane stress intensity or membrane plus bending stress intensity (K bk, K m) and stress limit factor applicable to the Design allowable shear stress (K v) are 1. So where else is the problem? Regards _____. The theoretical values and ANSYS values are compared for both solid wall and Head of pressure vessels. Researchgate. 09 MPa were assigned to the. 
Using Ansys-fluent predicted the final temperature and SoC reached after compressed hydrogen gas filled at 700 bar pressure in Type IV tank. "Control parameters,"he explained, "are those factors. How it works (Clients). Equivalent stress is a concept helpful in the design of components undergoing a stress field along multiple components of stress. org/proceeding. This project consists of the design of a polymer electrolyte membrane fuel cell, building a single cell fuel stack and experimental testing thereof. Informazioni. Manassas, Virginia. ANSYS 应力分析报告 Stress Analysis Report 学生姓名 学号 任课教师 导师 计算机辅助工程分析报告 目录 一. stresses are never classified as primary stresses. membrane inflation experiment of human aortic tissue by the help of ANSYS Workbench. Team Members Joseph Ferrara Team lead Mechanical Engineering Joseph Church Integration Mechanical Engineering Joelle Kirsch Design Mechanical Engineering Jamie Jackson Sensors and Integration Electrical Engineering Joshua Van Hook Labview and Integration Electrical Engineering Project Description Project is done in correspondence with team lead’s thesis research on structural aspects of the alveolar sac within the lung Main Focal Point Design and Develop a test fixture for Membrane. The Influence of Residual Stress Induced by Anodic Wafer Bonding on MEMS Membrane Properties Cezary Maj, Piotr Zając, Michał Szermer, Andrzej Napieralski Dept. 1 software to analyse the dynamics involved in membrane removal off the retinal surface. There exist a couple of particular angles where the stresses take on special values. General Primary Membrane Stress (P m) -. Damped • Training Manual DYNAMICS 8. VM2 - Beam Stresses and Deflections. Applying the ACT linearized stress extension. 
Team Members Joseph Ferrara Team lead Mechanical Engineering Joseph Church Integration Mechanical Engineering Joelle Kirsch Design Mechanical Engineering Jamie Jackson Sensors and Integration Electrical Engineering Joshua Van Hook Labview and Integration Electrical Engineering Project Description Project is done in correspondence with team lead’s thesis research on structural aspects of the alveolar sac within the lung Main Focal Point Design and Develop a test fixture for Membrane. Nodes and Elements. In this case, the peak stress is simply the difference between the linearised membrane plus bending stress and the actual membrane plus bending distribution. 들은 bending stress, membrane stress, peak stress로 이루 어진 값이다. ANSYS Results for Axial stress of steel I- Beam. or triangles. Figures showing the Stress Transformation of 2D x-y Stress Element in non-eccentric portion of the bar To start the comparing values view the Maximum Principal Stress results. The work included coding a specific FEM model by using Matlab. Plane stress elements are characterized by the fact that the stress components perpendicular to the face are zero: = 0. Please try again later. Development of a numerical model for simulating the behaviour of Cell membranes. The direction angle of major principal is the included angle between the horizontal and the major principal directions. In order to compute the load-deformation behavior of reinforced concrete membranes, the cracked membrane model is combined with a linear elastic material law and a biaxial compression model for the concrete. 0 is the Young’s modulus, T is the residual stress, and σ is the Poisson’s ratio of the membrane material. investigated the viscous interaction between autonomous underwater vehicles (AUVs) and studied the influence of their shape using ANSYS CFX 12. I am currently working with V12. 
Code rules separate areas of the model that are acceptable from those that are overstressed, and if required, the solid model is modified, and the process repeated until successful. The yield stress in compression will be approximately the same as (the negative of) the yield stress in tension. Other factors, such as humidity, film thickness, and tensile elongation rates, were found to. A of ASME Section VIII, Division 2 x. Upon etching, the residual stress decreases slightly in the membrane region and increases slightly outside the membrane. For each type of fuel cell add-on module, you will find background information pertaining to the models, a theoretical discussion of the models. Strength, yield stress, and elonga-tion are not measurably affected after 60 hr exposure in 85 psig steam 163°C (325°F). and the analysis is done using ANSYS 15. ANSYS Mechanical is unique, however, offering this capability with Allman rotational DOF, and enhancement of membrane behavior when used as a quadrilateral. The molecular interaction between probe molecules and target molecules generate a surface stress on the thin gold layer. - Participated in selection of top automotive fuel cells membrane candidates by. The beam is a simple cylinder with a thickness of 0. investigated the viscous interaction between autonomous underwater vehicles (AUVs) and studied the influence of their shape using ANSYS CFX 12. You must make a shaft and make its one end as the fixed inside the ANSYS workbench workspace. There is an option of stress linearization in Path operations which returns membrane, membrane+bending and total stresses. The first analysis option, Plane Stress, is the ANSYS default and provides an analysis for a part with unit thickness. Stress limits for local membrane and bending stresses from “load-controlled” or “strain-controlled” loads are provided in paragraph 5. 
The simulated stress profile, shown in Figure 2, is the result of analysis of the device membrane using ANSYS software. Note that here the stress along the material fiber that is initially normal to the mid-surface is considered; because of shear deformations, this material fiber does not remain exactly normal to the mid-surface. Stress linearization of a shell model. Renewable energy encompasses an incredibly diverse array of innovative technologies used in power generation. This page discusses how the primary membrane and primary bending stresses (Pm and Pb) are calculated. Stress linearization is the separation of stresses through a section into constant membrane, linear bending, and nonlinearly varying peak stresses. From: Materials and the Environment (Second Edition), 2013. typical stress–strain curves shown in Figures 1 and 2. 1115/SBC2012-80235 Franck J. The configuration’s shape had. 2) The stress in the direction "normal" to the beam (or shell) mid-surface is zero throughout the response history. Matthew Rudow Account Manager at ANSYS, Inc. Thin Wall Pressure Vessel Longitudinal Stress. org/proceeding. The element formulation is based on logarithmic strain and true stress measures. Straight lines in the plate that were originally vertical remain straight but become inclined; therefore the intensity of either principal stress at points on any such line is proportional to the distance from the middle surface. In this case, the results are:. Find the first five modal frequencies for a load of 10,000 lbf. I need to set T=35MPa in order to achieve pre-stress in structural analysis and then i could go onto modal analysis. Solving linear problems with solid elements is mostly straight forward in ANSYS but once the non-linear region is encountered, a different type of solver. pl Abstract—The bonding is one of the common processes operational temperature. 
Stress linearization is the separation of stresses through a section into constant membrane, linear bending, and nonlinearly varying peak stresses. The load is constant pressure ranging from 20 uPa to 20kPa. or triangles. 2) The stress in the direction "normal" to the beam (or shell) mid-surface is zero throughout the response history. a sketch of a Direct Contact Membrane Distillation module is shown in Fig. a finite element model using ANSYS. A static analysis is first performed to apply a desired initial tension (pre-stress) to the membrane. membrane stress. As a review of shear stresses in beams, consider the shear stress in a rectan- gular section (with section d×b). Thermal Stresses in a Bar. 332) Stresses are actually computed at "integration points" inside the element. The key to finding near accurate stresses is the choice of the element and meshing to suitable degree of accuracy. Linearized membrane plus bending stress is divided by the nominal stress (in a straight pipe) to determine the C. 12, ANSYS, USA) to create a simplified finite element model of the hind wing of a male beetle instead of a real model. Vlahinos noted. 
| 2020-04-02 13:10:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3614822030067444, "perplexity": 2706.3125450328516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506959.34/warc/CC-MAIN-20200402111815-20200402141815-00429.warc.gz"} |
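Several of the scraped passages above describe stress linearization: separating a through-thickness stress distribution into a constant membrane component (the thickness average) and a linearly varying bending component evaluated at the surface. A minimal pure-Python sketch of those two components under the usual definitions (the function name and sampling convention are mine, not the ANSYS `PRSECT` implementation):

```python
def linearize(stress, t):
    """Split a through-thickness stress profile (equally spaced samples
    across thickness t) into its membrane and surface-bending parts."""
    n = len(stress)
    h = t / (n - 1)
    xs = [i * h for i in range(n)]

    def trapz(ys):
        # composite trapezoidal rule across the thickness
        return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

    membrane = trapz(stress) / t  # thickness average
    # linear (bending) part from the first moment, evaluated at the surface
    bending = 6.0 / t ** 2 * trapz([s * (t / 2 - x) for s, x in zip(stress, xs)])
    return membrane, bending

# A linear profile from 100 at one surface to 0 at the other:
profile = [100.0 * (1 - i / 200) for i in range(201)]
m, b = linearize(profile, t=1.0)  # m ~ 50 (average), b ~ 50 (half the range)
```

The peak component, where needed for fatigue, is then the difference between the actual profile and the membrane-plus-bending line.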
https://mathhelpboards.com/threads/lil-math-square.25355/ | # Li'l math square...
#### Wilmer
##### In Memoriam
Code:
A + B - C = D
+ / +
E + F - G = H
- / +
I * J * K = L
= = =
M N O
All numbers from 1 to 15 used.
Hint: J = 1
#### Monoxdifly
##### Well-known member
Thanks for the hint, Denis McField!
Code:
8 + 12 - 7 = 13
+ / +
11 + 4 - 6 = 9
- / +
5 * 1 * 2 = 10
= = =
14 3 15
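A quick script (not from the thread) that checks the posted grid against the row and column operations, reading the "+ / +" and "- / +" rows as the column operators:

```python
# Posted solution grid (rows: A B C D / E F G H / I J K L) and bottom totals M N O
grid = [[8, 12, 7, 13],
        [11, 4, 6, 9],
        [5, 1, 2, 10]]
totals = [14, 3, 15]

# Row equations: A + B - C = D,  E + F - G = H,  I * J * K = L
assert grid[0][0] + grid[0][1] - grid[0][2] == grid[0][3]
assert grid[1][0] + grid[1][1] - grid[1][2] == grid[1][3]
assert grid[2][0] * grid[2][1] * grid[2][2] == grid[2][3]

# Column equations: A + E - I = M,  B / F / J = N,  C + G + K = O
assert grid[0][0] + grid[1][0] - grid[2][0] == totals[0]
assert grid[0][1] / grid[1][1] / grid[2][1] == totals[1]
assert grid[0][2] + grid[1][2] + grid[2][2] == totals[2]

# Every number from 1 to 15 used exactly once
used = sorted([n for row in grid for n in row] + totals)
assert used == list(range(1, 16))
```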
#### Wilmer
##### In Memoriam
Yagotitt My.Fly !! | 2020-10-30 13:47:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5884546637535095, "perplexity": 6893.747758529448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107910815.89/warc/CC-MAIN-20201030122851-20201030152851-00133.warc.gz"} |
http://weisu.blogspot.com/2009/10/short-time-energy-ste-in-audio.html | ## Wednesday, October 28, 2009
### Short-Time Energy (STE) in Audio
STE is one of the most widely used audio features and the easiest to compute. It is also called volume. STE is a reliable indicator for silence detection. Normally STE is approximated by the RMS (root mean square) of the signal magnitude within each frame. The MATLAB code:
% assume the window size is 2 seconds
% there are overlaps in windowing
% assume the step of shift is 1 second
[wav fs] = wavread('DEMO.wav');
wav = wav / max(max(wav));
window_length = 2 * fs;
step = 1 * fs; % has overlap
frame_num = floor((length(wav)-window_length)/step) + 1;
energy = zeros(frame_num, 1);
pos = 1;
for i=1:frame_num
    wav_window = wav(pos:pos + window_length-1);
    energy(i) = 1/window_length * sum(wav_window.^2);
    pos = pos + step;
end
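For readers without MATLAB, the same frame-wise computation can be sketched in Python with NumPy (an equivalent reimplementation, not from the original post):

```python
import numpy as np

def short_time_energy(wav, fs, window_sec=2.0, step_sec=1.0):
    """Frame-wise short-time energy with overlapping windows."""
    wav = np.asarray(wav, dtype=float)
    wav = wav / np.max(np.abs(wav))              # peak-normalize, as in the post
    win = int(window_sec * fs)
    step = int(step_sec * fs)
    n_frames = (len(wav) - win) // step + 1
    energy = np.empty(n_frames)
    for i in range(n_frames):
        frame = wav[i * step : i * step + win]
        energy[i] = np.mean(frame ** 2)          # 1/N * sum(x.^2)
    return energy
```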
The short time energy of audio signal depends on the gain value of the recording devices. Usually we normalize the value for each frame | 2017-10-24 00:08:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5415917038917542, "perplexity": 5619.20926262777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187827662.87/warc/CC-MAIN-20171023235958-20171024015958-00550.warc.gz"} |
https://artofproblemsolving.com/wiki/index.php/2004_AMC_10A_Problems/Problem_20 | # 2004 AMC 10A Problems/Problem 20
## Problem
Points $E$ and $F$ are located on square $ABCD$ so that $\triangle BEF$ is equilateral. What is the ratio of the area of $\triangle DEF$ to that of $\triangle ABE$?
$\mathrm{(A) \ } \frac{4}{3} \qquad \mathrm{(B) \ } \frac{3}{2} \qquad \mathrm{(C) \ } \sqrt{3} \qquad \mathrm{(D) \ } 2 \qquad \mathrm{(E) \ } 1+\sqrt{3}$
## Solution 1
Since triangle $BEF$ is equilateral, $BE=BF$; together with $AB=CB$ and the right angles at $A$ and $C$, triangles $EAB$ and $FCB$ are congruent (HL), so $EA=FC$. Thus, triangle $DEF$ is an isosceles right triangle. So we let $DE=x$. Thus $EF=EB=FB=x\sqrt{2}$. If we go angle chasing, we find out that $\angle AEB=75^{\circ}$, thus $\angle ABE=15^{\circ}$. $\frac{AE}{EB}=\sin{15^{\circ}}=\frac{\sqrt{6}-\sqrt{2}}{4}$. Thus $\frac{AE}{x\sqrt{2}}=\frac{\sqrt{6}-\sqrt{2}}{4}$, or $AE=\frac{x(\sqrt{3}-1)}{2}$. Thus $AB=\frac{x(\sqrt{3}+1)}{2}$, and $[ABE]=\frac{x^2}{4}$, and $[DEF]=\frac{x^2}{2}$. Thus the ratio of the areas is $\boxed{\mathrm{(D)}\ 2}$
## Solution 2 (Non-trig)
WLOG, let the side length of $ABCD$ be 1. Let $DE = x$. It suffices that $AE = 1 - x$. Then triangles $ABE$ and $CBF$ are congruent by HL, so $CF = AE$ and $DE = DF$. We find that $BE = EF = x \sqrt{2}$, and so, by the Pythagorean Theorem, we have $(1 - x)^2 + 1 = 2x^2.$ This yields $x^2 + 2x = 2$, so $x^2 = 2 - 2x$. Thus, the desired ratio of areas is $$\frac{\frac{x^2}{2}}{\frac{1-x}{2}} = \frac{x^2}{1 - x} = \boxed{\text{(D) }2}.$$
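As a quick numeric sanity check on Solution 2 (not part of the original solutions), the positive root of $x^2 + 2x - 2 = 0$ gives exactly the claimed ratio:

```python
import math

# Solution 2's setup: unit square, DE = DF = x, AE = 1 - x
x = math.sqrt(3) - 1                 # positive root of x^2 + 2x - 2 = 0
assert abs(x**2 + 2*x - 2) < 1e-12   # satisfies the Pythagorean relation

area_DEF = x**2 / 2                  # isosceles right triangle with legs x
area_ABE = (1 - x) / 2               # right triangle with legs (1 - x) and 1
ratio = area_DEF / area_ABE          # -> 2.0
```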
## Solution 3
$\bigtriangleup BEF$ is equilateral, so $\angle EBF = 60^{\circ}$, and $\angle EBA = \angle FBC$ so they must each be $15^{\circ}$. Then let $BE=EF=FB=1$, which gives $EA=\sin{15^{\circ}}$ and $AB=\cos{15^{\circ}}$. The area of $\bigtriangleup ABE$ is then $\frac{1}{2}\sin{15^{\circ}}\cos{15^{\circ}}=\frac{1}{4}\sin{30^{\circ}}=\frac{1}{8}$. $\bigtriangleup DEF$ is an isosceles right triangle with hypotenuse 1, so $DE=DF=\frac{1}{\sqrt{2}}$ and therefore its area is $\frac{1}{2}\left(\frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}}\right)=\frac{1}{4}$. The ratio of areas is then $\frac{\frac{1}{4}}{\frac{1}{8}}=\framebox{(D) 2}$ | 2019-11-16 00:36:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 49, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9716681241989136, "perplexity": 91.09419951782392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668716.22/warc/CC-MAIN-20191115222436-20191116010436-00212.warc.gz"} |
https://epg.modot.org/index.php/238.1_Aerial_Mapping_and_LiDAR_Surveys | 238.1 Aerial Mapping and LiDAR Surveys
Aerial Mapping and LiDAR surveys can achieve the same results as conventional topographical surveys by methods that require a minimum of fieldwork. Airborne LiDAR sensors, used by companies in the remote sensing field, can be used to create DTMs (Digital Terrain Models) and DEMs (Digital Elevation Models). This is a common practice for larger areas, since a plane can take in a swath 1 km wide in one flyover. Greater vertical accuracy can be achieved with a lower flyover and a narrower swath, even over a forest, where the height of the canopy as well as the ground elevation can be determined. Conventional surveys require numerous field measurements to obtain the same data and information.
238.1.1 Surveys Adaptable to Aerial Mapping and LiDAR
Projects with certain physical characteristics are adaptable to aerial mapping and LiDAR surveys. The districts must make the judgment, based on their knowledge of the requirements of each project, whether a conventional or aerial mapping and LiDAR survey best satisfies the need. Factors that influence this decision include:
• Scope of the project: The use of aerial mapping and LiDAR methods, in almost all cases, will result in the most economical survey for projects encompassing larger areas. Conversely, smaller projects are best suited for field surveys. It is difficult to provide a quantitative guideline to determine which projects fall into which category. Generally, it is better to field survey projects that are shorter than 1300 feet. Aerial mapping and LiDAR projects should be reviewed to determine the best application of LiDAR that will meet the project needs and budget.
[Figure: Rough terrain with heavily timbered areas]
• Type of project: The project's scope will usually be the best guide for the type of mapping needed. Urban enhancements of existing structures may lend themselves to a terrestrial or mobile type of collection for the best results, whereas a cross-country new alignment would be much better suited for an aerial platform like a fixed-wing aircraft or helicopter.
• Terrain: Projects with terrain that is difficult or impossible to field survey will be surveyed by aerial mapping and LiDAR methods. These include projects in highly developed areas, extremely rough terrain, heavily timbered areas as well as bridge surveys for large streams.
• Time consideration: Time sensitive projects will be field surveyed, because of the lead-time necessary for aerial mapping and LiDAR surveys. Table 238.1.1 shows guidelines to determine what type of survey best suites a project.
Table 238.1.1
Aerial Mapping and LiDAR Survey Versus Conventional Survey
Aerial Mapping and LiDAR       | Conventional
Wide Corridor Projects         | Resurfacing Projects
Relocation                     | Shoulder Widening Projects
Large Area Planimetric Surveys | Small Area Planimetric Surveys
Large Bridge Replacements      | Small Bridge Replacements
> 1300 ft.                     | < 1300 ft.
Interchanges                   | -
Rough Terrain                  | -
In addition, district personnel requesting mapping should discuss project specifics with the aerial mapping and LiDAR personnel when determining which method of survey better suits the project.
238.1.2 Stages of Aerial Mapping and LiDAR Services
Data acquisition for aerial mapping and LiDAR surveys are provided by professional consulting services. These professional consultant services are managed by Design Division. These services are provided via the “Annual Flight Program Contract” and the following sequential stages must be coordinated between the district and Design Division:
1. Recommendations for the flight program
2. Mapping limit plan
3. Accuracy planning
4. Consultant selection
5. Quality control on projects
6. Review and quality checks on aerial mapping and LiDAR survey data
7. Furnishing electronic data to the district.
238.1.2.1 Recommendations for Flight Program
The district recommends projects for the flight program each year at the request of Design by the last day of September. Projects recommended for mapping are those for which a location study, if necessary, has been approved, or for which this will be completed in time to obtain the data during the flying season (approximately December 15 to April 15). Normally, projects that are in the design year of the approved STIP are considered, but other projects may be considered if conditions warrant.
238.1.2.2 Flight Planning
Design performs flight planning, but information from the district is necessary to efficiently plan the aerial mapping and LiDAR mission. The district furnishes information including the location of the proposed improvement indicated in a CADD drawing file, and the desired accuracy. Mapping and photography limits are to be submitted electronically using CADD software and ProjectWise.
It is desirable to limit the aerial mapping and LiDAR corridor to only that area that is necessary for the design of the project. The district's recommendation regarding the type and extent of aerial mapping and LiDAR survey data coverage will consider the following:
• For planimetric coverage, corridors will include all features that may affect design considerations and right of way takings. Planimetric corridors do not have to be connected but must be within the area in which horizontal controls have been established.
• For terrain coverage, corridors will be limited to the area necessary for earthwork computations. Generally, this area is within the limits of proposed right of way. Terrain corridors do not have to be connected but must be within the area in which horizontal and vertical control have been established.
• Generally, corridor requests do not include areas for drainage computations. Keep in mind that aerial mapping and LiDAR data should be supplemented with conventional survey data and shall be verified by the district survey party.
238.1.2.3 Accuracy Planning
To best extract topographic and manmade features from the LiDAR data, three density and accuracy levels are used, as described below:
Type A, Roadway and Pavement Scans (Mobile, Helicopter Based or Terrestrial LiDAR)
1) Internal Horizontal / Vertical Accuracy of 0.3 ft. at 95% confidence.
2) Maximum point spacing of 0.3 ft. on the full classified LAS file.
Type B, Corridor and Earthwork Scans, Urban (Fixed Wing or Helicopter Based Aerial LiDAR)
1) Internal Horizontal / Vertical Accuracy of 0.5 ft. at 95% confidence.
2) Maximum point spacing of 1 ft. on the full classified LAS file.
Type C, Corridor and Earthwork Scans, Rural (Fixed Wing)
1) Internal Horizontal / Vertical Accuracy of 0.5 ft. at 95% confidence.
2) Maximum point spacing of 2 ft. on the full classified LAS file.
238.1.2.5 Quality Control on Projects
Quality control on all projects will be done by the Central Office Design survey staff to ensure the data provided meets MoDOT’s needs for engineering mapping and design.
To verify the accuracy of the surface of the delivered aerial mapping and LiDAR, the Central Office survey staff will take check shots along the main alignment of the project at least every 200 ft. and alternating shots every 400 ft. on each side of the main alignment.
To verify the accuracy of extracted features of the delivered aerial mapping and LiDAR, the Central Office survey staff shall take check shots along curb lines, bridge rails, retaining walls and other features with elevation differences that can easily be identified.
238.1.2.6 Review and Quality Checks on Aerial Mapping and LiDAR Survey Data
All quality control will be done using MoDOT's CADD software. Project control shots will be compared to the surface model and topographic and manmade features provided by the aerial mapping and LiDAR consultant and must meet the internal horizontal/vertical accuracy defined by the scope of services at 95% confidence.
A report will be run and kept with the files for the project certifying these accuracies.
Any acceptance of data below these standards will have written documentation explaining the presence of this data and what resulting developments may be encountered during the design process.
238.1.2.7 Furnishing Electronic Data to the District
All data will be delivered to the districts via ProjectWise. Once all the data has been delivered and has met the quality standards, Central Office will place the file into the districts' ProjectWise folders for access.
238.1.3 Standard Deliverables for LiDAR Data
The consultant shall provide to the Commission the following items:
1) Three ASCII coordinate files all containing the primary control, photo control and check points for the project survey. These files are:
Primary Control File. A file listing control positions by point number, X, Y, and Z values in project units referenced to the Missouri Coordinate System of 1983, the correct zone name (ex. East Zone), with X and Y values modified by the projection factor. This ASCII formatted file will be in the form of: J######.rec.
The Geodetic Control File. A file containing latitude and longitude information for all control points named J######.txt with file format listed on page 3 of Fig. 238.1.3, Aerial Photography and Control Survey. All OPUS solution sheets and/or data sheets from post processed static GPS sessions, calculations for grid and projection factor including the centroid point, mean elevation and the final grid and projection factor will also be listed in this file.
Check Shots File. A file listing quality control positions by point number, X, Y, and Z values in project units referenced to the Missouri Coordinate System of 1983, the correct zone name (ex. East Zone), with X and Y values modified by the projection factor. This ASCII formatted file will be in the form of: J######.txt.
2) MoDOT Survey Report. A MoDOT survey project report for each project. It shall include copies of all inter-visible control survey pair station descriptions along with all benchmark descriptions and field ties. A sketch of each point shall be provided showing the relative location of field ties to the point being referenced. The consultant shall provide a letter certifying that the below mentioned surveying specifications have been achieved for this project. The letter shall document the relative positional accuracies in parts per million, the confidence level in percent and the post adjustment residual values in centimeters that were achieved on this project. If any portion of the survey does not comply with these specifications, a written report substantiating the material variances from the specifications with the responsible surveyor’s signature is required. The Commission reserves the right to disallow variations.
The survey report documents proof of these specifications:
a. Fixed preprocess baseline solutions.
b. Control station relative positional accuracies of 10 ppm in relation to adjacent stations at the 95% confidence level.
c. Post adjustment residual values <3 cm in any dimension for control stations.
d. A dgn file with all survey control points plotted and labeled.
3) An Orthomosaic captured simultaneously with LiDAR or separate aerial sensor, meeting the following requirements:
a. Shall have a resolution of 0.5 ft. per pixel.
b. Shall be tiled, with tiles no larger than 3250 x 3250 pixels.
c. Shall encompass the area requested for mapping.
d. Shall be a geotiff and accompanied by a projection file (.prj).
e. Shall include a shape file indicating the locations of the orthomosaic tiles.
4) LiDAR projects, the following shall be delivered:
a. Data will be delivered in LAS version 1.2 format or newer with the following information.
i. Record return
ii. Intensity
iii. GPS time
iv. Swath line number designation
v. Classification values after trimming (without data voids between swath lines)
0 = raw, never classified
1 = unclassified
2 = ground (i.e. bare earth)
3 = low vegetation
4 = medium vegetation
5 = high vegetation
6 = building
7 = low point
9 = water
10 = bridge
12 = overlap
b. LiDAR Processing Report.
c. Vertical Accuracy Report.
d. A shape file containing numbered LAS tiles.
5) ASCII coordinate file for each project, containing the following items for each point:
a. X, Y, and Z coordinates using the Missouri Coordinate System of 1983, the correct zone name (ex. East Zone), modified by a factor developed by the consultant.
b. Feature code using MoDOT Standard Surveying Feature Codes.
6) Microstation and Geopak files to be provided:
a. Provide a Topo_ConsultantName_JOB#.dgn (3D MicroStation file) of all the topographic and manmade survey data collected.
i. All dgn files will be based on modified state plane coordinates, using the projection factor for the project as described in EPG 238.1.4 Datum and Horizontal Control.
ii. Working units: U.S. Survey Foot
iii. Features shall be plotted according to MoDOT CADD Standards. Features to be plotted at 1” = 100’ scale. Standards are available in the GEOPAK Survey Manager Database (.smd).
Topography Features. The mapping data shall include natural positions on the earth’s surface within the project limits that determine the configuration of the terrain. The positions will be in the form of points and strings that locate vertical and horizontal transitions.
Planimetry Features. The mapping data shall include the positions of all natural and all man-made features within the project limits. The positions will be in the form of points and strings that define the shape, size and position of the features.
b. Tin and GPK files will be based on modified state plane coordinates, using the projection factor for the project as described in EPG 238.1.4 Datum and Horizontal Control.
c. Geopak Digital Terrain Models (.tin) for the entire project. Tin models should not exceed 200 megabytes.
d. Geopak Coordinate Geometry Database (.gpk) containing the data imported for the project. gpks should not exceed 30 megabytes.
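To illustrate how the classification values listed under item 4 get used downstream (a sketch with synthetic points, not MoDOT tooling), a bare-earth surface keeps only the ground-classified returns:

```python
# Classification codes from the LAS deliverable spec above (ASPRS-style)
CLASS_NAMES = {0: "raw", 1: "unclassified", 2: "ground", 3: "low vegetation",
               4: "medium vegetation", 5: "high vegetation", 6: "building",
               7: "low point", 9: "water", 10: "bridge", 12: "overlap"}

# Synthetic (x, y, z, classification) records standing in for LAS points
points = [(0.0, 0.0, 101.2, 2), (0.5, 0.0, 130.8, 5),
          (1.0, 0.5, 101.4, 2), (1.5, 0.5, 115.0, 6)]

# A bare-earth DTM keeps only class-2 (ground) returns
ground = [p for p in points if CLASS_NAMES[p[3]] == "ground"]
```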
238.1.4 Datum and Horizontal Control
238.1.4.1 Linear measures
Linear measures will be made in the English System. The base unit will be the United States Survey Foot (and decimal parts thereof).
238.1.4.2 Coordinate System
All coordinates shall be based on the State Plane Coordinate System, North American Datum (NAD) of 1983 (1997) in the appropriate zone for the project.
238.1.4.3 Vertical Datum
The elevations shall be based on the North American Vertical Datum (NAVD) of 1988. The elevations shall be based upon ellipsoidal heights that have been modified by the most current NGS Geoid model.
238.1.4.4 Projection Factor
The consultant is responsible for developing a project projection factor based on the Missouri Coordinate System of 1983 Manual for Land Surveyors.
Scale Factor. Use the most easterly and westerly control points within the project to develop a centroid point for the project. Use the converted English easting of the centroid point in the correct zone formula, below.
${\displaystyle East\,Zone={\frac {(easting\,-\,820,208.3333)}{393,700}}\,*\,0.00000000045\,*\,(easting\,-\,820,208.3333)\,+\,0.9999333}$
${\displaystyle Central\,Zone={\frac {(easting\,-\,1,640,416.6665)}{393,700}}\,*\,0.00000000045\,*\,(easting\,-\,1,640,416.6665)\,+\,0.9999333}$
${\displaystyle West\,Zone={\frac {(easting\,-\,2,788,708.3331)}{393,700}}\,*\,0.00000000045\,*\,(easting\,-\,2,788,708.3331)\,+\,0.9999412}$
Elevation Factor is determined by dividing the ellipsoid radius by the ellipsoid radius plus the mean elevation for the project.
${\displaystyle Elevation\,Factor={\frac {20,909,689.00}{{\Big (}20,909,689.00\,+\,[elevation\,in\,feet\,-\,100.065]{\Big )}}}}$
Grid Factor is the result of multiplying the Elevation Factor by the Scale Factor of the centroid point of the project.
Grid Factor = Elevation factor * Scale factor
Projection Factor is the reciprocal of the grid factor.
Projection Factor = 1 / Grid factor
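The factor chain above can be combined into a short script (East Zone shown; the function names are illustrative and not part of the EPG):

```python
# Constants taken from the formulas above
ELLIPSOID_RADIUS_FT = 20_909_689.00

def east_zone_scale_factor(easting_ft):
    """East Zone scale factor at a given (converted English) easting."""
    d = easting_ft - 820_208.3333
    return (d / 393_700) * 0.000_000_000_45 * d + 0.9999333

def elevation_factor(mean_elevation_ft):
    """Ellipsoid radius divided by radius plus mean project elevation."""
    return ELLIPSOID_RADIUS_FT / (ELLIPSOID_RADIUS_FT + (mean_elevation_ft - 100.065))

def projection_factor(centroid_easting_ft, mean_elevation_ft):
    """Reciprocal of the grid factor (elevation factor times scale factor)."""
    grid = elevation_factor(mean_elevation_ft) * east_zone_scale_factor(centroid_easting_ft)
    return 1.0 / grid
```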
238.1.5 Field Notebooks (Targets)
A separate notebook is used to record the target locations and descriptions. The notes include, for each target, the target numbers, shape of the target and the target ties. A sketch should be provided in the field notes so someone who is not familiar with the location can identify each target on the photographs and easily relocate the targets in the field. Offset distances are recorded for targets that are offset. The targets are listed in numerical sequence in the field book. Acceptable examples for recording target notes are available.
238.1.6 Aerial Mapping and LiDAR MOU
When mapping becomes necessary beyond the time frame of the annual flight program, the district may work with the Central Office Design survey staff to hire an on-call consultant to perform these services. Please refer to EPG 134.2.4 Consultant Solicitation and Selection Process – Standard Solicitation Method for On-Call Cost Plus Fixed Fee Contracts.
238.1.7 Orthomosaic
Orthomosaics are a standard delivery with aerial photography mapping. An orthomosaic is a spatially correct mosaic, which includes the photos for the entire project. The mosaics are generated at 0.5 ft. per pixel. The district Design personnel can use the mosaics as raster images behind the project corridor line work prepared using CADD tools for the purpose of public displays.
238.1.8 Requisitioning Historic Photographs
A request for contract prints and enlargements is made using Form D-102. | 2021-10-18 10:19:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2629297375679016, "perplexity": 4786.220742368794}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585201.94/warc/CC-MAIN-20211018093606-20211018123606-00060.warc.gz"} |
http://mathhelpforum.com/algebra/183321-basketball-game-problem.html | Math Help - Basketball game problem
Sorry for the lame title, couldn't think of a better one xD
Anyways, can anyone please tell me how to solve this problem:
At a basketball tournament involving 8 teams, each team played 4 games with each of the other teams. How many games were played at this tournament?
The mark scheme says: since each team played 4 games with other teams, 8*4 = 32 games are played by each team. Each game involved two teams, hence 32*(8/2)=128 games were played.
Hmmm, so, if each team plays 4 games with each other team, but not itself (which is ridiculous), shouldn't it be 4*7 = 28 games for each team? Also, why do we divide the number of teams by two? I know it says there are two teams in each match, but why does that mean we divide by two?
Please help me, I am just confused, I just need a good explanation to remove this confusion.....
Originally Posted by IBstudent
At a basketball tournament involving 8 teams, each team played 4 games with each of the other teams. How many games were played at this tournament?
I am a bit concerned about the wording used in this question. I am woefully ignorant of sports. A tournament in which each team plays each other team four times seems a bit much. But let’s say that is correct.
It can be modeled as a multigraph on eight vertices in which there are four edges between any two vertices.
$\binom{8}{2}=28$, so there are 28 pairings of teams.
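The pairing count is easy to check with a few lines (a sketch, not from the thread):

```python
from itertools import combinations

teams = range(8)
pairings = list(combinations(teams, 2))  # every unordered pair of teams
games = 4 * len(pairings)                # four games per pairing
# len(pairings) == 28, so games == 112
```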
Four edges, games, for each pair: $112$. | 2016-04-30 18:27:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.411472350358963, "perplexity": 482.6440224414034}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860112228.39/warc/CC-MAIN-20160428161512-00200-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://online.stat.psu.edu/stat500/lesson/6b/6b.3 | # 6b.3 - Further Considerations for Hypothesis Testing
In this section, we include a little more discussion about some of the issues with hypothesis tests and items to be conscious of.
## Committing an Error
Every time we make a decision and come to a conclusion, we must keep in mind that our decision is based on probability. Therefore, it is possible that we made a mistake.
Consider the example of the previous Lesson on whether the majority of Penn State students are from Pennsylvania. In that example, we took a random sample of 500 Penn State students and found that 278 are from Pennsylvania. We rejected the null hypothesis, at a significance level of 5% with a p-value of 0.006.
The significance level of 5% means that we have a 5% chance of committing a Type I error. That is, we have 5% chance that we rejected a true null hypothesis.
If we failed to reject a null hypothesis, then we could have committed a Type II error. This means that we could have failed to reject a false null hypothesis.
## How Important are the Conditions of a Test?
In our six steps in hypothesis testing, one of them is to verify the conditions. If the conditions are not satisfied, we can still calculate the test statistic and find the rejection region (or p-value). We cannot, however, make a decision or state a conclusion. The conclusion is based on probability theory.
If the conditions are not satisfied, there are other methods to help us make a conclusion. The conclusion, however, may be based on other parameters, such as the median. There are other tests (some are discussed in later lessons) that can be used.
## Statistical and Practical Significances
Our decision in the emergency room waiting times example was to reject the null hypothesis and conclude that the average wait time exceeds 10 minutes. However, our sample mean of 11 minutes wasn't too far off from 10. So what do you think of our conclusion? Yes, statistically there was a difference at the 5% level of significance, but are we "impressed" with the results? That is, do you think 11 minutes is really that much different from 10 minutes?
Since we are sampling data, we have to expect some error in our results; therefore, even if the true wait time was 10 minutes, it would be extremely unlikely for our sample data to have a mean of exactly 10 minutes. This is the difference between statistical significance and practical significance. The former is the result produced by the sample data, while the latter is the practical application of those results.
Statistical significance is concerned with whether an observed effect is due to chance and practical significance means that the observed effect is large enough to be useful in the real world.
Critics of hypothesis-testing procedures have observed that a population mean is rarely exactly equal to the value in the null hypothesis and hence, by obtaining a large enough sample, virtually any null hypothesis can be rejected. Thus, it is important to distinguish between statistical significance and practical significance.
## The Relationship Between Power, $$\beta$$, and $$\alpha$$
Recall that $$\alpha$$ is the probability of committing a Type I error. It is the value that is preset by the researcher. Therefore, the researcher has control over the probability of this type of error. But what about $$\beta$$, the probability of a Type II error? How much control do we have over the probability of committing this error? Similarly, we want power, the probability we correctly reject a false null hypothesis, to be high (close to 1). Is there anything we can do to have a high power?
The relationship between power and $$\beta$$ is an inverse relationship, namely...
Power $$=1-\beta$$
If we increase power, then we decrease $$\beta$$. But how do we increase power? One way to increase the power is to increase the sample size.
Relationship between $$\alpha$$ and $$\beta$$:
If the sample size is fixed, then decreasing $$\alpha$$ will increase $$\beta$$, and therefore decrease power. If one wants both $$\alpha$$ and $$\beta$$ to decrease, then one has to increase the sample size.
It is possible, using software, to find the sample size required for set values of $$\alpha$$ and power. Also using software, it is possible to determine the value of power. We do not go into details on how to do this but you are welcome to explore on your own.
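These relationships can be checked numerically. The sketch below is my own illustration (not part of the lesson), assuming a one-sided one-sample z-test with known σ; the function name and the example numbers are invented for the demonstration:

```python
from statistics import NormalDist
from math import sqrt

def z_test_power(mu0, mu_true, sigma, n, alpha=0.05):
    """Power of a one-sided z-test of H0: mu = mu0 vs Ha: mu > mu0,
    when the true mean is mu_true (> mu0) and sigma is known."""
    z_crit = NormalDist().inv_cdf(1 - alpha)        # rejection cutoff
    shift = (mu_true - mu0) / (sigma / sqrt(n))     # standardized effect
    return 1 - NormalDist().cdf(z_crit - shift)     # P(reject H0 | Ha true)

p_small = z_test_power(10, 10.5, 2, 25)              # n = 25
p_large = z_test_power(10, 10.5, 2, 100)             # larger n -> higher power
p_strict = z_test_power(10, 10.5, 2, 25, alpha=0.01) # smaller alpha -> lower power
```

Since power $$=1-\beta$$, any change that raises power in this sketch lowers $$\beta$$ by the same amount.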
Gathering data is like tasting fine wine—you need the right amount. With wine, too small a sip keeps you from accurately assessing a subtle bouquet, but too large a sip overwhelms the palate.
We can’t tell you how big a sip to take at a wine-tasting event, but when it comes to collecting data, software tools can tell you how much data you need to be sure about your results. | 2023-03-21 08:36:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5395922064781189, "perplexity": 300.0943439093756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943637.3/warc/CC-MAIN-20230321064400-20230321094400-00015.warc.gz"} |
https://paperswithcode.com/paper/an-improved-convergence-analysis-of | An Improved Convergence Analysis of Stochastic Variance-Reduced Policy Gradient
29 May 2019 · Pan Xu, Felicia Gao, Quanquan Gu
We revisit the stochastic variance-reduced policy gradient (SVRPG) method proposed by Papini et al. (2018) for reinforcement learning. We provide an improved convergence analysis of SVRPG and show that it can find an $\epsilon$-approximate stationary point of the performance function within $O(1/\epsilon^{5/3})$ trajectories... (read more)
PDF Abstract | 2020-07-08 04:41:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21500325202941895, "perplexity": 2246.4850554520644}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896374.33/warc/CC-MAIN-20200708031342-20200708061342-00302.warc.gz"} |
https://jeeneetqna.in/1334/enters-solid-glass-sphere-refractive-index-angle-incidence | # A light ray enters a solid glass sphere of refractive index m = µ = √3 at an angle of incidence 60°.
A light ray enters a solid glass sphere of refractive index µ = $\sqrt3$ at an angle of incidence 60°. The ray is both reflected and refracted at the farther surface of the sphere. The angle (in degrees) between the reflected and refracted rays at this surface is _____________.
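A hedged solution sketch (my own working, not part of the original question): by Snell's law at entry, $\sin 60^\circ = \sqrt{3}\,\sin r$, so $r = 30^\circ$. The chord and the two radii form an isosceles triangle, so the ray strikes the far surface at $30^\circ$ as well. There the reflected ray makes $30^\circ$ with the normal, and the refracted ray makes $60^\circ$ (from $\sqrt{3}\,\sin 30^\circ = \sin e$). Measuring along the surface,

$$180^\circ - 30^\circ - 60^\circ = 90^\circ,$$

so the angle between the reflected and refracted rays is $90^\circ$.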
Numerical Value Type | 2022-05-18 12:14:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6526843309402466, "perplexity": 434.73304814375734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522270.37/warc/CC-MAIN-20220518115411-20220518145411-00110.warc.gz"} |
https://physics.stackexchange.com/questions/451300/at-what-pressure-do-semiconductors-break-down | # At what pressure do semiconductors break down? [closed]
So let's say you were going to send some electronics to the bottom of the ocean, 3-5 km down. The pressure there would be about $$5\,\mathrm{km} \times 1000\,\mathrm{kg/m^3} = 5\times 10^{6}\,\mathrm{kgf/m^{2}}$$. So at what pressure do circuit boards, transistors, etc. stop working? Apparently there's a thing called the quantum critical point at which some semiconductors break down. Although in this example the semiconductor broke down at about 10x the pressure that I'd be dealing with, I'm wondering if there are other known issues at these pressures with electronics?
My hypothetical specifically involves some circuits/computer hardware in castor oil (so salt water can't short it and the oil remains relatively incompressible), where instead of making the container withstand the pressure difference and keep the internals at 1 atm, the container might flex just enough so that the pressure inside is the same as the outside. So I'm wondering what kind of semiconductors/etc. might break down under high pressures? Or are there other properties I might be missing?
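As a back-of-the-envelope check on the pressure figure (my own numbers, not from the post), hydrostatic pressure is $P = \rho g h$:

```python
# Hydrostatic pressure P = rho * g * h.  rho and depth are illustrative;
# seawater is closer to 1025 kg/m^3, which would push the result a bit higher.
rho = 1000.0      # kg/m^3, fresh-water approximation
g = 9.81          # m/s^2
depth = 5000.0    # m
p_pa = rho * g * depth       # pascals, roughly 4.9e7 Pa (~49 MPa)
p_atm = p_pa / 101325.0      # roughly 480-490 atmospheres
```

So 5 km of water works out to a few hundred atmospheres, consistent with the "10x less than the quantum-critical-point experiment" remark in the question.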
## closed as off-topic by Emilio Pisanty, ZeroTheHero, Jon Custer, Buzz, Kyle Kanos on Jan 1 at 15:27
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question appears to be about engineering, which is the application of scientific knowledge to construct a solution to solve a specific problem. As such, it is off topic for this site, which deals with the science, whether theoretical or experimental, of how the natural world works. For more information, see this meta post." – Emilio Pisanty, ZeroTheHero, Jon Custer, Buzz, Kyle Kanos
If this question can be reworded to fit the rules in the help center, please edit the question. | 2019-03-21 13:44:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39438292384147644, "perplexity": 1039.5295935951033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202525.25/warc/CC-MAIN-20190321132523-20190321154523-00040.warc.gz"} |
https://proxieslive.com/tag/line/ | ## Plotting horizontal line on Manipulate Plot
I have the following code, which outputs a Manipulate style plot. I want to draw a horizontal line on the plot related to q, with equation: y = Roche[\[rho]].
My code is as follows:
(* Constants *)
au = QuantityMagnitude[UnitConvert[Quantity[1, "AstronomicalUnit"], "Meters"]];
c = QuantityMagnitude[UnitConvert[Quantity[1, "SpeedOfLight"], "MetersPerSecond"]];
Qpr = 1;
Lsun = QuantityMagnitude[UnitConvert[Quantity[1, "SolarLuminosity"], "Watts"]];
Rsun = QuantityMagnitude[UnitConvert[Quantity[1, "SolarRadius"], "Meters"]];
Msun = QuantityMagnitude[UnitConvert[Quantity[1, "SolarMass"], "Kilograms"]];
G = QuantityMagnitude[UnitConvert[Quantity[1, "GravitationalConstant"], ("Meters"^2*"Newtons")/"Kilograms"^2]];
year = QuantityMagnitude[UnitConvert[Quantity[1, "Years"], "Seconds"]];
Myr = year*10^6;
Gyr = year*10^9;
Mwd = 0.6*Msun;
Cst = 1.27;
U = 1*10^17;

(* Functions *)
L[t_] := (3.26*Lsun*(Mwd/(0.6*Msun)))/(0.1 + t/Myr)^1.18;
Roche[dens_] := (0.65*Cst*Rsun*(Mwd/(0.6*Msun))^(1/3))/(dens/3000)^3^(-1);
Papsis[t_] := a[t]*(1 - e[t]);

(* Radiative Drag *)
RDdadtR\[Rho]a = -((3*L[t]*Qpr*(2 + 3*e[t]^2))/(c^2*(16*Pi*2000*Rast*a[t]*(1 - e[t]^2)^(3/2))));
RDdedtR\[Rho]a = -((15*L[t]*e[t])/(c^2*(32*Pi*Rast*2000*a[t]^2*Sqrt[1 - e[t]^2])));

RDsolR\[Rho]a = ParametricNDSolveValue[{Derivative[1][a][t] == RDdadtR\[Rho]a, Derivative[1][e][t] == RDdedtR\[Rho]a, a[0] == a0, e[0] == 3/10}, {a, e}, {t, 0, 9*Gyr}, {Rast, \[Rho], a0}];
fRDticks = {{Automatic, Automatic}, {ChartingFindTicks[{0, 1}, {0, 1/Myr}], Automatic}};

Manipulate[
 Column[{Style["Working Plot", Bold],
   Plot[fun[func, t]/scale[func], {t, 0, 9*Gyr}, FrameTicks -> fRDticks, PlotStyle -> {Directive[Blue, Thickness[0.01]]}],
   Style["Compiled Plot", Bold],
   If[comp === {},
    Plot[fun[func, t]/scale[func], {t, 0, 9*Gyr}, FrameTicks -> fRDticks, PlotStyle -> {Directive[Blue, Thickness[0.01]]}],
    Plot[comp, {t, 0, 9*Gyr}, FrameTicks -> fRDticks, PlotStyle -> {Directive[Blue, Thickness[0.01]]}]]}],
 {{func, 1}, {1 -> "a", 2 -> "e", 3 -> "q"}},
 {{Rast, 0.005}, 0.0001, 0.1, 0.001, Appearance -> "Labeled"},
 {{\[Rho], 3000}, 1000, 7000, 50, Appearance -> "Labeled"},
 {{a0, 10, "a0 (au)"}, 2, 20, 0.2, Appearance -> "Labeled"},
 Button["Append", AppendTo[comp, fun[func, t]]],
 Button["Reset", comp = {}],
 TrackedSymbols -> {func, Rast, \[Rho], a0},
 Initialization :> {comp = {},
   fun[sel_, t_] := Switch[sel,
     1, RDsolR\[Rho]a[Rast, \[Rho], a0*au][[1]][t],
     2, RDsolR\[Rho]a[Rast, \[Rho], a0*au][[2]][t],
     3, RDsolR\[Rho]a[Rast, \[Rho], a0*au][[1]][t]*(1 - RDsolR\[Rho]a[Rast, \[Rho], a0*au][[2]][t])],
   scale[sel_] := Switch[sel, 1 | 3, au, 2, 1]}]
I have tried to use Epilog but no line was displayed.
Any help would be appreciated.
## Can I use a string as an end-of-line delimiter when importing a CSV to MySQL?

Very basic question, but I'm struggling with it. When I am importing a CSV file to MySQL, is it possible to use any character or string as an end-of-line delimiter? If so, is it possible from the phpMyAdmin 'Import' window? Or must I do so in a SQL query (doesn't seem logical)?

So far I have had issues using everything besides \n and auto in the phpMyAdmin window. I tried the € sign; I tried ///. They didn't work, and I got errors relating to an incorrect end-of-line delimiter.
Any help would be much appreciated. Thank you.
## Do you need line of sight to cast spells on someone?
The rules on spellcasting contain the following section:
### A Clear Path to the Target
To target something [with a spell], you must have a clear path to it, so it can’t be behind total cover. If you place an area of effect at a point that you can’t see and an obstruction, such as a wall, is between you and that point, the point of origin comes into being on the near side of that obstruction.
This section is not really clear to me. Does it mean that only non-transparent objects are a problem for targeting, or do you have to have line of sight and line of effect as well?
Also, can you prevent a wizard from casting spells by blinding her?
## Authorization error running db2 command line with db2inst1
I need to enable the COLUMNAR setting in DB2 Express Edition on Docker. For that to work I need to set INTRA_PARALLEL ON at the instance or database level.
I connect to the db2 command with db2inst1 that is the instance owner, but I’m getting an error saying that the user is root. How to fix this problem?
db2 => connect to bank0002 user db2inst1 using xxxxx

   Database Connection Information

 Database server       = DB2/LINUXX8664 11.5.4.0
 SQL authorization ID  = DB2INST1
 Local database alias  = BANK0002

db2 => UPDATE DBM CFG USING INTRA_PARALLEL ON
SQL5001N "ROOT" does not have the authority to change the database manager configuration file.
## Line of Sight/Clear Path Check Theta* For Rectangle Character
I am implementing Theta* pathfinding in a tile-based 2D game. A tile is either blocked or unblocked. The character has a rectangle collider with a width and height in tiles. Is there an algorithm for finding all the tiles the rectangle will overlap when moving in a straight line from A to B on the tile grid?
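Not a complete answer, but one standard building block is grid traversal of a single segment (Amanatides & Woo style); a rectangle sweep can then often be reduced to a single segment check by inflating blocked tiles by the rectangle's half-extents (a Minkowski-sum trick). A Python sketch of the traversal — the function name and grid conventions are my own:

```python
import math

def traverse(x0, y0, x1, y1):
    """Unit-grid cells crossed by the segment (x0,y0)-(x1,y1)."""
    x, y = math.floor(x0), math.floor(y0)
    end = (math.floor(x1), math.floor(y1))
    dx, dy = x1 - x0, y1 - y0
    sx = 1 if dx > 0 else -1
    sy = 1 if dy > 0 else -1
    # Parameter t at which the segment crosses the next vertical/horizontal grid line.
    tmax_x = ((x + (sx > 0)) - x0) / dx if dx != 0 else math.inf
    tmax_y = ((y + (sy > 0)) - y0) / dy if dy != 0 else math.inf
    tdelta_x = abs(1.0 / dx) if dx != 0 else math.inf
    tdelta_y = abs(1.0 / dy) if dy != 0 else math.inf
    cells = [(x, y)]
    while (x, y) != end:
        if tmax_x < tmax_y:          # next crossing is a vertical line
            x += sx
            tmax_x += tdelta_x
        else:                        # next crossing is a horizontal line
            y += sy
            tmax_y += tdelta_y
        cells.append((x, y))
    return cells
```

Each blocked tile then only needs a cheap membership test against the returned cell list during the Theta* line-of-sight check.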
## What is the point of this line in the Hide of the Feral Guardian?
The Hide of the Feral Guardian, a legendary item from the Explorer’s Guide to Wildemount, includes the following ability.
When you cast the polymorph spell using this armor, you can transform into a cave bear (use the polar bear statistics).
Why does it do this, and not just turn the user into either a polar bear or cave bear directly? At first I thought it might be because of the book each creature was from, but they both have stat blocks in the monster manual, and neither are in the explorer’s guide elsewhere as far as I can see.
## Full line trajectories plot for the solution of Second Order nonlinear coupled differential equations
I wanted to plot a phase plane containing the trajectories of the solutions found by using 'NDSolve', using the initial conditions for x[0], y[0], x'[0] and y'[0]. The equations are: x''[t] - 2 y'[t] == -x[t] + y[t]^2; y''[t] + 2 x'[t] == x[t] + y[t] + x[t]*y[t]
The equilibrium point for the system is (0,0). I have plotted the stream plot for the system but unable to plot a phase portrait that would give me the full line trajectories of the system for different initial conditions. I am also looking for any periodic solution if present in it. The stream plot I got is given below and I would take initial conditions from it.
I get this by using the Parametric Plot of the NDSolve solution:
Kindly help in this capacity. Thanks in advance.
## Unity: What does the green line gizmo signify?
When I place a point light it shows this green gizmo that ends in the terrain.
What’s the name of this gizmo?
What does it signify?
## LinearModelFit to find more than one linear regression line
I have a data that I used linearModelFit, to find the linear regression line. How can I get more than one linearmodelfit for the same data? more specifically I want 3 different functions for the linear regression with the built-in function LinearModelFit.
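The underlying idea — split the data into consecutive chunks and fit ordinary least squares to each — is not tied to Mathematica. A hedged Python sketch of the same logic (function names and the sample data are mine, not LinearModelFit's API):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one chunk."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def piecewise_fits(xs, ys, n_pieces=3):
    """Fit an independent regression line to each consecutive chunk."""
    size = len(xs) // n_pieces
    fits = []
    for k in range(n_pieces):
        lo = k * size
        hi = len(xs) if k == n_pieces - 1 else lo + size
        fits.append(fit_line(xs[lo:hi], ys[lo:hi]))
    return fits

xs = list(range(9))
ys = [2 * x + 1 for x in xs]   # data that happens to lie on one line
fits = piecewise_fits(xs, ys)  # three (slope, intercept) pairs
```

In Mathematica the analogous move would be mapping LinearModelFit over the partitioned data rather than the whole list.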
## How to move axis and ticklabels in RegionPlot to the top? How to change border line color?
I use this code
RegionPlot[Sin[t^(1/3)*y] > 0, {y, 0, 5}, {t, 0, 8}, FrameLabel -> {"y", Rotate["t", 270 Degree]}]
and the result is
Now, I have three questions:
1. How can I move the $$y$$ axis (and ticklabels) to the top instead of bottom?
2. How can I change border lines color?
3. Is it possible to change the color of one of the blue parts? | 2021-03-08 05:41:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22489668428897858, "perplexity": 4986.216827136856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381989.92/warc/CC-MAIN-20210308052217-20210308082217-00178.warc.gz"} |
https://www.wikidata.org/wiki/Q56896365 | # Search for lepton flavour violating decays of the Higgs boson to μτ and eτ in proton-proton collisions at s = 13 $$\sqrt{s}=13$$ TeV (Q56896365)
## Statements
C. Amendola
R Reyes-Almanza
1006
H. Siikonen
U. Kiminsu
1427
0 references | 2018-10-21 21:42:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9991064071655273, "perplexity": 9720.938548040245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514355.90/warc/CC-MAIN-20181021203102-20181021224602-00248.warc.gz"} |
http://openstudy.com/updates/503a1042e4b0edee4f0d8a63 | ## anonymous 4 years ago The surface charge density of an object is σ = dq/dA. Why is the total charge on the surface, Q = double integral σ dA ?
1. anonymous
total charge on surface = surface charge density * area. Basically, surface charge density is the charge per unit area, i.e., how much charge a unit area has. So here $q=\int\limits_{0}^{A}\sigma\,dA$
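To make $Q = \iint \sigma\, dA$ concrete for a non-uniform density — a hedged numeric sketch where the surface and σ are my own choices: for $\sigma(x,y) = xy$ on the unit square, $Q = \int_0^1\!\int_0^1 xy\,dx\,dy = 1/4$, which a midpoint Riemann sum reproduces:

```python
def total_charge(sigma, nx=200, ny=200):
    """Approximate Q = ∬ sigma dA over the unit square by a midpoint sum."""
    dx, dy = 1.0 / nx, 1.0 / ny
    q = 0.0
    for i in range(nx):
        x = (i + 0.5) * dx          # cell-centre x
        for j in range(ny):
            y = (j + 0.5) * dy      # cell-centre y
            q += sigma(x, y) * dx * dy
    return q

q = total_charge(lambda x, y: x * y)  # exact answer is 1/4
```

For a uniform σ the same sum collapses to σ·A, matching the simpler formula above.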
2. anonymous
if charge density would have varied then we might have used $q= \int\limits_{0}^{A}dA* \sigma + \int\limits_{0}^{\sigma} d \sigma*A$ | 2016-08-26 08:30:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9523600935935974, "perplexity": 1156.3497017403627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982295383.20/warc/CC-MAIN-20160823195815-00233-ip-10-153-172-175.ec2.internal.warc.gz"} |
http://mathoverflow.net/revisions/114825/list | Post Closed as "no longer relevant" by Ryan Budney, Allen Knutson, Renato G Bettiol, Deane Yang, Lee Mosher
2 Added reason why apparent contradiction is false; will now close the question
Edit: Shortly after posting this question, I received an email from Ziller in which he answers it. Statement (1) is correct, and the problem is in statement (2), as suspected. In fact, the claim that the $K$-orbits are always fibers of $G/H\to G/K$ is false in general, unless $H$ is a normal subgroup of $K$. This is because the action by right multiplication (by the inverse) on cosets defined above is only well-defined (i.e., independent of the choice of coset representative) if $H$ is normal in $K$, as was also pointed out in Emerton's comment. The rest of the claims in (2) are correct. As a side note, for the other Hopf fibrations $S^1\to S^{2n+1}\to \mathbb CP^n$ and $S^3\to S^{4n+3}\to \mathbb H P^n$ the corresponding subgroup $H$ is normal in $K$ -- after all, the fiber $K/H$ is a group -- and the $K$-action is hence well-defined, so statement (2) holds in full for these cases. In general, however, the $K$-orbits are not fibers of $G/H\to G/K$, as the $S^7\to S^{15}\to S^8$ example illustrates.
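The well-definedness point can be spelled out explicitly (my own rendering of the standard argument): if $aH = bH$, say $b = ah$ with $h \in H$, then

$$bk^{-1}H = ahk^{-1}H = ak^{-1}\,(khk^{-1})\,H,$$

which equals $ak^{-1}H$ for every $k \in K$ and $h \in H$ precisely when $kHk^{-1} \subseteq H$, i.e., when $H$ is normal in $K$. So the recipe $k \cdot gH = gk^{-1}H$ defines a $K$-action on $G/H$ only when $H \trianglelefteq K$.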
1
# "Homogeneity" of the Hopf fibration $S^7\to S^{15}\to S^8$
My question has to do with an apparent contradiction I get regarding the Hopf fibration $S^7\to S^{15}\to S^8$. Namely, the two following statements cannot be true at the same time (but I do not see any problem with any of them):
1. The Hopf fibration $S^7\to S^{15}\to S^8$ is not homogeneous, i.e., there is no isometric group action on the round sphere $S^{15}$ whose orbits are the Hopf fibers. This is claimed by Guijarro-Walschap, Corollary 3.2; and was also previously observed, e.g., by Gromoll-Grove.
2. The Hopf fibration $S^7\to S^{15}\to S^8$ is of the form $K/H \to G/H \stackrel{\pi}{\to} G/K$, where $\pi(gH)=gK$ and $H< K < G$ are the groups $Spin(7)$, $Spin(8)$ and $Spin(9)$, respectively. The inclusion of $K$ in $G$ is the usual one; however, the inclusion of $H$ in $K$ is the usual one followed by a nontrivial triality automorphism of $Spin(8)$, see, e.g., Section 4 of this paper or Besse's book "Einstein manifolds", p. 258, 9.84 Example 4. Such an automorphism is outer, and is not the restriction of any automorphism of $Spin(9)$. Now, consider the $K$-action on $G/H$ given by $k\cdot gH:=gk^{-1} H$. Its orbits are clearly of the form $gKH\subset G/H$, and I claim these are exactly the fibers of $G/H\to G/K$, i.e., the Hopf fibers. Indeed, if both $aH$ and $bH$ are mapped to $gK$ under the projection $G/H\to G/K$, then $aK=bK$, i.e., $b^{-1}a\in K$, which means $a\in bK$; so the subset of $G/H$ that get mapped to $gK$ is exactly $(gK)H$; and these were the $K$-orbits. The above construction (for abstract Lie groups $H, K, G$; not specifically for the Hopf fibration as above) shows up in many places, e.g., in Ziller's survey, p. 16; as well as papers of Schwachhofer-Tapp, Kerr-Kollross and others.
Although I am almost sure that the statement (1) is correct, and that the problem with statement (2) is that somehow this abstract construction with nested Lie groups $H\subset K\subset G$ does not apply to $Spin(7)\subset Spin(8)\subset Spin(9)$, I still see no reason why this is the case. The suspicion lies on the triality automorphism that follows the usual inclusion to give $Spin(7)\subset Spin(8)$, but as long as this is a fixed automorphism $\phi$, one could pick $H$ as the image under $\phi$ of the usual $Spin(7)$ in $Spin(8)$, and this seems a perfectly valid subgroup of $K$ to proceed with the construction. What am I missing here?
I am not sure how useful these comments are, but here is some more information about a $Spin(8)$ action on $S^{15}$. The representation $\rho_8\oplus\Delta^\pm_8$ of $Spin(8)$ in $R^{16}$ gives a cohomogeneity one action on the unit sphere $S^{15}$, and the orbit space is the interval $[0,\pi/2]$. This action has two singular orbits, whose isotropy is $Spin(7)$, and the principal isotropy is $G_2$. The inclusion of the singular isotropies is the usual one followed by a nontrivial triality automorphism that is $\pm$, according to the choice $\Delta^\pm_8$ of spinorial representation.

The principal orbits of this $Spin(8)$ action have dimension $14$, and are of course not Hopf fibers. The singular orbits, however, give a pair of antipodal $S^7$'s inside $S^{15}$ and these are Hopf fibers. [Note this action is different from the action $k\cdot gH=gk^{-1}H$ described in (2) above.] From what I have heard, the only subactions of the transitive $Spin(9)$ action on $S^{15}$ that preserve a fixed Hopf fiber (and hence its antipodal fiber as well) are actions by some $Spin(8)\subset Spin(9)$ conjugate to the one I have just described.

The $Spin(9)$ action on $S^{15}$ is $g_1\cdot g_2H=g_1g_2H$, so the action described in (2) is not a restriction of it; however the statements in (1) seem to not specify any particular action on $S^{15}$, i.e., to my understanding, they show no group can act isometrically and have orbits that are precisely the Hopf fibers.
| 2013-05-25 00:10:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9693772196769714, "perplexity": 170.81598888498877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705300740/warc/CC-MAIN-20130516115500-00030-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://wlm.userweb.mwn.de/SPSS/wlmsagg.htm | # AGGREGATE
Aggregating your data can be a powerful tool. It means that new data are computed from an existing data set, and cases in the new data set refer to a group of cases. For instance, you may wish to have in your data, for each case, a variable measuring how much this person's income differs from the mean income of all persons in the same occupational group. You may request SPSS to compute these mean incomes by aggregating all cases in each group, save these incomes to a new data set and then match these mean incomes to your original data. The deviation of the individual incomes from the mean income then may easily be computed.
Example:
AGGREGATE / OUTFILE = 'C:\subdir\meaninc.sav' / BREAK = occgroup / meaninc = MEAN(income).
In the example above, the first line after the AGGREGATE command requests that the aggregated data be saved in file "meaninc.sav" in subdirectory "subdir". The BREAK line tells SPSS to group cases by variable "occgroup". All cases that have the same value in that variable will form one group. The next line tells SPSS to form a new variable named "meaninc" by computing the mean of the variable income in each group (the name of that variable actually may be the same as in the original data; your choice depends on the purpose of your analysis). This line could be repeated several times to form additional new variables (that must have different names, of course); for instance, you may request information on the smallest or biggest value of a variable in each group, or the percentage of cases in each group that have values within a given range (more possibilities are listed below).
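The aggregate-then-merge idea is not SPSS-specific. A hedged Python sketch of the same logic — the cases, variable names, and helper structure below are invented for illustration, mirroring `AGGREGATE / BREAK = occgroup / meaninc = MEAN(income)` followed by the deviation computation:

```python
# Each dict plays the role of one SPSS case.
cases = [
    {"occgroup": "A", "income": 1000},
    {"occgroup": "A", "income": 2000},
    {"occgroup": "B", "income": 3000},
]

# AGGREGATE step: running (sum, count) per BREAK group, then the MEAN.
sums = {}
for case in cases:
    total, n = sums.get(case["occgroup"], (0.0, 0))
    sums[case["occgroup"]] = (total + case["income"], n + 1)
meaninc = {g: total / n for g, (total, n) in sums.items()}

# MATCH step: attach the group mean back to each case and take the deviation.
for case in cases:
    case["devinc"] = case["income"] - meaninc[case["occgroup"]]
```

The two-pass structure (aggregate, then match back by the break variable) is exactly what the SPSS workflow in the text describes.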
In the OUTFILE line, you may specify an asterisk "*" instead of a file name. In that case, the aggregate data file will not be saved; rather, it will become your working file. Often, you will wish to use that option in order to check whether you have really achieved what you wanted. But be sure to save your present working file before executing the aggregate command if you have modified the present file and wish to have access to the modified file at a later stage.
In the BREAK line, more than one grouping variable may be named (the upper limit is 10, I think, but this is much more than you'll probably ever need). In this case, first all cases with the same value in the first variable are grouped. Then these groups are "split", as it were, to form new groups by the second variable, etc. You can request that the number of cases in each group be saved as a variable in the aggregated data set; for instance, with that variable named "ningroup", a new line (beginning with a slash) would be added with the command ningroup=N.
Here are some examples of the functions that are available. I will use variable "income" throughout to explain the effects of the different functions.
Keyword: Effect (what the new variable will display)
FIRST (income): First value of income that is encountered in each group
LAST (income): Last value of income that is encountered in each group
MIN (income): Smallest value of income that is encountered in each group
MAX (income): Largest value of income that is encountered in each group
SUM (income): Sum of variable income in each group
SD (income): Standard deviation of income in each group
PGT (income 1000): Percentage of cases in each group with income greater than 1000
PLT (income 1000): Percentage of cases in each group with income less than 1000
FGT (income 1000): Fraction of cases in each group with income greater than 1000 (this is simply PGT divided by 100)
FLT (income 1000): Fraction of cases in each group with income less than 1000
PIN (income 1000 2000): Percentage of cases in each group with income of at least 1000 and not more than 2000
POUT (income 1000 2000): Percentage of cases in each group with income of less than 1000 or more than 2000
FIN (income 1000 2000): Fraction of cases in each group with income of at least 1000 and not more than 2000
FOUT (income 1000 2000): Fraction of cases in each group with income of less than 1000 or more than 2000
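Putting these pieces together, a minimal AGGREGATE command might look like the following (the variable names region, gender, and income are hypothetical; OUTFILE=* makes the aggregated data the working file, as described above):

```spss
AGGREGATE OUTFILE=*
  /BREAK=region gender
  /ningroup=N
  /suminc=SUM(income)
  /richpct=PGT(income 1000).
```

Each line beginning with a slash creates one new variable in the aggregated file; here ningroup records the number of cases per group.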
© W. Ludwig-Mayerhofer, IGSW | Last update: 02 May 1998 | 2022-09-28 00:27:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3909769654273987, "perplexity": 984.0615909275933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00127.warc.gz"} |
http://wiki.yobi.be/wiki/Belgian_ePassport | # Belgian ePassport
Back to Belgian eGov
## Characteristics
• Current versions demo
• Uses Opentrust PKI (former IDX-PKI from idealx)
• Price:
• 30€ chancellery fee ("droit de chancellerie")
• municipal taxes ("taxes communales"; Ixelles=26€, Leuven=11€?, ...)
• 41€ production cost ("frais de confection")
• Much more expensive if urgent or 64 pages (~250€)
• maker? at least not Zetes (contradictory info here)
But we do not manufacture the Belgian passport, that is true. It is a contract that was awarded before we became active in this segment. If there is a call for tenders, I imagine we will respond to it.
### chip
• Oberthur press release in 2005 (pdf)
• ATR 3B 8E 80 01 80 91 E1 31 C0 64 77 E3 03 00 83 82 90 00 6C
• ATR 3B 8E 80 01 80 91 91 31 C0 64 77 E3 03 00 83 82 90 00 1C (as mentioned in pcsc-lite smartcard_list.txt)
• ATR 3B 88 80 01 00 00 01 07 01 72 90 00 EC (on a recent passport 01/2009 EH431xxx)
• Belgium is one of the rare countries to also include the holder's handwritten signature, in EF_DG7
• Non-compliances?
• Requires option 0x0C whenever you select the application or a file (important for non-BAC passports); other passports usually implement ISO 7816-4 a bit better and accept the standard SELECT FILE, but apparently Belgium just implemented the LDS example exactly as it was presented, no more
• non-BAC passports have a bug in EF_DG11, in full name of holder (tag 5F0E): null length followed by "A0 06 02 01 01"
• newer passports have a bug in EF_DG12, using tag 5F85 instead of 5F55 for the document issuance timestamp (5F85 is in LDS1.7, 5F55 is in ISO standard)
• newest passports (with polycarbonate transparent sheet) don't have the bug anymore in EF_DG12, skipping simply document issuance timestamp
• Reading the DS certificate in EF_SOD (output truncated):
openssl pkcs7 -text -print_certs -in EF_SOD.PEM
Authority:
Issuer: C=BE, O=Kingdom of Belgium, OU=Federal Public Service Foreign Affairs Belgium, CN=CSCAPKI_BE
Subject: C=BE, O=Kingdom of Belgium, OU=Federal Public Service Foreign Affairs Belgium, CN=DSPKI_BE
X509v3 extensions:
X509v3 Authority Key Identifier:.
keyid:00:84:19:14:B2:CE:7E:0A:DE:3A:26:F9:FD:DD:1F:F4:01:42:A8:0E
## Active Authentication
See first EPassport#Active_Authentication
It appears that:
• first generation passports support AA without BAC (as there is no BAC)
• second generation passports don't support AA without BAC
• third generation passports support AA without BAC, which is more surprising!
This doesn't really conflict with the ICAO standard, according to the specs:
Doc 9303 IV-13: "An MRTD chip that supports Basic Access Control SHALL respond to unauthenticated *read attempts* (including selection of (protected) files in the LDS) with "Security status not satisfied" (0x6982)"
So nothing is said about AA and the ISO 7816-4 INTERNAL AUTHENTICATE command.
But if BAC is applied then all consecutive commands must be encrypted:
Supplement to Doc9303 rev7, R1-p1_v2_sIV_0027: "Active Authentication uses the Internal Authenticate command. Should this command be sent to the ICC with Secure Messaging? If Basic Access Control is applied, yes."
See also this one, which says that allowing an unsecured SELECT (for example) before applying BAC is OK but implementation dependent:
R4-p1_v2_sIV_0046 Verify if it is possible to successfully perform unsecured SELECT on BAC protected e-Passports [...] It is however recognized that certain ICC operating systems support an unsecured SELECT before the BAC secure messaging is established. Therefore, when no secure channel is established, both 6982 and 9000 should be expected as ICAO compliant responses to an unsecured SELECT.[...]
We cannot use this to fingerprint passports, as the challenge reply is also based on a nonce generated by the passport itself.
But we still have one interesting property:
Normally with BAC, to identify a passport with BAC support we have no other choice than knowing the passport MRZ (in which case we have already identified it), brute-forcing the MRZ (which can take quite a while), or using a dictionary of known MRZs (imagine a rogue country collecting them at a border).
Here we can send an AA challenge of our choice (it can be a kind of timestamp) and get the reply.
With that alone we cannot do anything interesting yet, but later, offline, we can prove we saw that passport at that place at that time if we have access to EF_DG15 at any time before or after this event.
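The offline matching step just described can be illustrated with a toy sketch. This is NOT the real ICAO Active Authentication scheme (actual AA replies are ISO 9796-2 RSA signatures over an 8-byte challenge); a textbook-RSA stand-in with tiny, made-up primes is used purely to show how stored timestamped replies could be linked to collected public keys:

```python
# Toy illustration only: textbook RSA with small primes, not ISO 9796-2.
import hashlib

def make_toy_key(p, q, e=17):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)
    return (n, e), (n, d)               # (public, private)

def sign(priv, msg):
    n, d = priv
    m = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(m, d, n)

def verify(pub, msg, sig):
    n, e = pub
    m = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(sig, e, n) == m

# One key pair per "passport"; EF_DG15 would hold the public key.
keys = [make_toy_key(10007, 10009), make_toy_key(10037, 10039)]

# At the reader: send a timestamp-like challenge, store the reply.
challenge = b"2009-01-17T12:00:00 gate-7"
reply = sign(keys[0][1], challenge)

# Later, offline: test the stored reply against collected public keys.
matches = [i for i, (pub, _) in enumerate(keys) if verify(pub, challenge, reply)]
print(matches)  # passport 0's key verifies the stored reply
```

The same idea scales to a collection of (timestamp, reply) pairs checked against a collection of EF_DG15 public keys.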
It's just a matter of verifying the signatures of a collection of timestamped AA replies against a collection of passport public keys and achieve some linkability. | 2018-10-15 23:12:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36717256903648376, "perplexity": 13254.3264783291}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509958.44/warc/CC-MAIN-20181015225726-20181016011226-00489.warc.gz"} |
http://tex.stackexchange.com/questions/125579/subcaption-with-beamer | # Subcaption with Beamer
I am trying to use subcaption with beamer, but I get a series of error messages concerning subcaption.sty when I attempt to do so. This doesn't happen in the article document class. Here is a MWE
\documentclass{beamer}
\usepackage{caption}
\usepackage{subcaption}
\begin{document}
\begin{frame}
\begin{figure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\caption{First subfigure}
\label{fig:a}
\textcolor{blue}{\rule{3cm}{3cm}}
\end{subfigure}\hfill
\begin{subfigure}[b]{.45\linewidth}
\centering
\caption{Second subfigure}
\label{fig:b}
\textcolor{blue}{\rule{3cm}{3cm}}
\end{subfigure}\ \caption{A figure}\label{fig:1}
\end{figure}
\end{frame}
\end{document}
Package caption Warning: \caption will not be redefined since it's already
(caption) redefined by a document class or package which is
(caption) unknown to the caption package.
See the caption package documentation for explanation.
! Package caption Error: The `subcaption' package does not work correctly
(caption) in compatibility mode.
See the caption package documentation for explanation.
Type H <return> for immediate help.
...
l.7 \begin{document}
?
! Emergency stop.
...
l.7 \begin{document}
Any tips?
-
In general don't use floats with figures because they can't shift to other slides (which would be awkward anyway), and you also don't need captions, as nobody needs to know the number of the figure. Mention it somewhere in the frame; that would be much more sensible. – percusse Jul 26 '13 at 16:11
@percusse No; beamer internally deactivates the floating mechanism, so figure and table don't produce floating objects; one can still use those environment in case one wants to use \caption. – Gonzalo Medina Jul 26 '13 at 18:34
@GonzaloMedina Oh I didn't know that. – percusse Jul 26 '13 at 19:12
Your example document works fine here (TeXlive 2013). What error messages are you getting? Is updating your TeX distribution an option? – Axel Sommerfeldt Jul 27 '13 at 5:37
There have been new developments with the caption package and now subcaption and beamer are compatible. I added a remark about this in my updated answer and thought that you might be interested too. – Gonzalo Medina Sep 26 '15 at 15:17
## Update caption version 2015/09/17 v3.3-111
Now, since version 2015/09/17 v3.3-111 of the caption package, subcaption and beamer are again compatible, so the error in the question won't appear and subcaption can be used with beamer.
## Answer for older versions of caption
The error message is produced since when subcaption gets loaded the compatibility boolean option for caption is found to be true, and subcaption.sty contains the lines:
\caption@AtBeginDocument{\caption@ifcompatibility{%
\caption@Error{%
The `subcaption' package does not work correctly\MessageBreak
in compatibility mode}}{}}
which trigger the error. You can prevent this error by using the compatibility=false option for caption (and keeping your fingers crossed; see below), as in the following example:
\documentclass{beamer}
\usepackage[compatibility=false]{caption}
\usepackage{subcaption}
\begin{document}
\begin{frame}
\begin{figure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\caption{First subfigure}
\label{fig:a}
\textcolor{blue}{\rule{3cm}{3cm}}
\end{subfigure}\hfill
\begin{subfigure}[b]{.45\linewidth}
\centering
\caption{Second subfigure}
\label{fig:b}
\textcolor{blue}{\rule{3cm}{3cm}}
\end{subfigure}\ \caption{A figure}\label{fig:1}
\end{figure}
\end{frame}
\end{document}
However, this might produce undesired results in beamer's captions; in fact, with the above document you get the warning
Package caption Warning: Forced redefinition of \caption since the
(caption) unsupported(!) package option `compatibility=false'
(caption) was given.
See the caption package documentation for explanation.
The caption package documentation also warns against using this option:
But please note that using this option is neither recommended nor supported since unwanted side-effects or even errors could occur afterwards. (For that reason you will get a warning about this.)
If captions and subcaptions are really required for a presentation, I think a better option with beamer is to use subfig with its caption=false option instead of caption/subcaption:
\documentclass{beamer}
\usepackage[caption=false]{subfig}
\begin{document}
\begin{frame}
\begin{figure}
\centering
\subfloat[Second subfigure\label{fig:b}]{\textcolor{blue}{\rule{3cm}{3cm}}}
\caption{A figure}
\label{fig:1}
\end{figure}
\end{frame}
\end{document}
## Remark
It's a very common misconception to believe that the figure and table environments shouldn't be used in beamer (the alleged reason is something like "they might produce objects that will float away"). This is not true; beamer internally deactivates flotation, so it is perfectly safe to use figure and table if you really need to provide captions in a presentation.
Section 12.6 Figures and Tables of the beamer manual clearly explains this:
You can use the standard LaTeX environments figure and table much the same way you would normally use them. However, any placement specification will be ignored. Figures and tables are immediately inserted where the environments start. If there are too many of them to fit on the frame, you must manually split them among additional frames or use the allowframebreaks option.
-
Isn't a floating environment within a beamer frame a LaTeX sin? – pluton Apr 3 '14 at 3:14
@pluton No; beamer internally deactivates flotation. It keeps the figure and table environments (but, as I said, suppressing the flotation) just to give the possibility to use \caption. Another thing is if you really need captions in your presentations. It is a common mistake to believe that figure (or table) inside beamer might produce flotation to a different slide. – Gonzalo Medina Apr 3 '14 at 3:16
@pluton Please read Section 12.6 Figures and Tables of the beamer manual. – Gonzalo Medina Apr 3 '14 at 3:20
http://assert.pub/arxiv/astro-ph/astro-ph.ep/ | ### Top 6 Arxiv Papers Today in Earth And Planetary Astrophysics
##### #1. Does the evolution of complex life depend on the stellar spectral energy distribution?
###### Jacob Haqq-Misra
This paper presents the proportional evolutionary time hypothesis, which posits that the mean time required for the evolution of complex life is a function of stellar mass. The "biological available window" is defined as the region of a stellar spectrum between 200 to 1200 nm that generates free energy for life. Over the $\sim$4 Gyr history of Earth, the total energy incident at the top of the atmosphere and within the biological available window is $\sim$10$^{34}$ J. The hypothesis assumes that the rate of evolution from the origin of life to complex life is proportional to this total energy, which would suggest that planets orbiting other stars should not show signs of complex life if the total energy incident on the planet is below this energy threshold. The proportional evolutionary time hypothesis predicts that late K- and M-dwarf stars (M < 0.7 M$_{\odot}$) are too young to host any complex life at the present age of the universe. F-, G-, and early K-dwarf stars (M > 0.7 M$_{\odot}$) represent the best targets for the next...
more | pdf | html
None.
###### Tweets
qraal: [1905.07343] Does the evolution of complex life depend on the stellar spectral energy distribution? https://t.co/YHqggJ3GyW
haqqmisra: Does complex life depend upon the spectral type of the host star? Check out my new (perhaps provocative?) paper, forthcoming in Astrobiology. I speculate that M-dwarfs (less than 0.7 solar masses) are too young to have any complex life today. https://t.co/bb5qzQviBC
StarshipBuilder: Does the evolution of complex life depend on the stellar spectral energy distribution? https://t.co/Z0huul5fbN
Laintal: Does the evolution of complex life depend on the stellar spectral energy distribution? https://t.co/UpkjlBnNyp
None.
None.
###### Other stats
Sample Sizes : None.
Authors: 1
Total Words: 0
Unique Words: 0
##### #2. Stellar Flybys Interrupting Planet-Planet Scattering Generates Oort Planets
###### Nora Bailey, Daniel Fabrycky
Wide-orbit exoplanets are starting to be detected, and planetary formation models are under development to understand their properties. We propose a population of "Oort" planets around other stars, forming by a mechanism analogous to how the Solar System's Oort cloud of comets was populated. Gravitational scattering among planets is inferred from the eccentricity distribution of gas-giant exoplanets measured by the Doppler technique. This scattering is thought to commence while the protoplanetary disk is dissipating, $10^6-10^7$ yr after formation of the star, or perhaps soon thereafter, when the majority of stars are expected to be part of a natal cluster. Previous calculations of planet-planet scattering around isolated stars have one or more planets spending $10^4-10^7$ yr at distances >100 AU before ultimately being ejected. During that time, a close flyby of another star in the cluster may dynamically lift the periastron of the planet, ending further scattering with the inner planets. We present numerical simulations...
more | pdf | html
None.
###### Tweets
StarshipBuilder: Stellar Flybys Interrupting Planet-Planet Scattering Generates Oort Planets https://t.co/y1cx1T6voe
None.
None.
###### Other stats
Sample Sizes : None.
Authors: 2
Total Words: 9566
Unique Words: 2715
##### #3. The Need for Laboratory Measurements and Ab Initio Studies to Aid Understanding of Exoplanetary Atmospheres
###### Jonathan J. Fortney, Tyler D. Robinson, Shawn Domagal-Goldman, Anthony D. Del Genio, Iouli E. Gordon, Ehsan Gharib-Nezhad, Nikole Lewis, Clara Sousa-Silva, Vladimir Airapetian, Brian Drouin, Robert J. Hargreaves, Xinchuan Huang, Tijs Karman, Ramses M. Ramirez, Gregory B. Rieker, Jonathan Tennyson, Robin Wordsworth, Sergei N Yurchenko, Alexandria V Johnson, Timothy J. Lee, Chuanfei Dong, Stephen Kane, Mercedes Lopez-Morales, Thomas Fauchez, Timothy Lee, Mark S. Marley, Keeyoon Sung, Nader Haghighipour, Tyler Robinson, Sarah Horst, Peter Gao, Der-you Kao, Courtney Dressing, Roxana Lupu, Daniel Wolf Savin, Benjamin Fleury, Olivia Venot, Daniela Ascenzi, Stefanie Milam, Harold Linnartz, Murthy Gudipati, Guillaume Gronoff, Farid Salama, Lisseth Gavilan, Jordy Bouwman, Martin Turbet, Yves Benilan, Bryana Henderson, Natalie Batalha, Rebecca Jensen-Clem, Timothy Lyons, Richard Freedman, Edward Schwieterman, Jayesh Goyal, Luigi Mancini, Patrick Irwin, Jean-Michel Desert, Karan Molaverdikhani, John Gizis, Jake Taylor, Joshua Lothringer, Raymond Pierrehumbert, Robert Zellem, Natasha Batalha, Sarah Rugheimer, Jacob Lustig-Yaeger, Renyu Hu, Eliza Kempton, Giada Arney, Mike Line, Munazza Alam, Julianne Moses, Nicolas Iro, Laura Kreidberg, Jasmina Blecic, Tom Louden, Paul Molliere, Kevin Stevenson, Mark Swain, Kimberly Bott, Nikku Madhusudhan, Joshua Krissansen-Totton, Drake Deming, Irina Kitiashvili, Evgenya Shkolnik, Zafar Rustamkulov, Leslie Rogers, Laird Close
We are now on a clear trajectory for improvements in exoplanet observations that will revolutionize our ability to characterize their atmospheric structure, composition, and circulation, from gas giants to rocky planets. However, exoplanet atmospheric models capable of interpreting the upcoming observations are often limited by insufficiencies in the laboratory and theoretical data that serve as critical inputs to atmospheric physical and chemical tools. Here we provide an up-to-date and condensed description of areas where laboratory and/or ab initio investigations could fill critical gaps in our ability to model exoplanet atmospheric opacities, clouds, and chemistry, building off a larger 2016 white paper, and endorsed by the NAS Exoplanet Science Strategy report. Now is the ideal time for progress in these areas, but this progress requires better access to, understanding of, and training in the production of spectroscopic data as well as a better insight into chemical reaction kinetics both thermal and radiation-induced at a...
more | pdf | html
None.
###### Tweets
StarshipBuilder: The Need for Laboratory Measurements and Ab Initio Studies to Aid Understanding of Exoplanetary Atmospheres https://t.co/wyi4ozUIdz
None.
None.
###### Other stats
Sample Sizes : None.
Authors: 88
Total Words: 0
Unique Words: 0
##### #4. The Detectability and Characterization of the TRAPPIST-1 Exoplanet Atmospheres with JWST
###### Jacob Lustig-Yaeger, Victoria S. Meadows, Andrew P. Lincowski
The James Webb Space Telescope (JWST) will offer the first opportunity to characterize terrestrial exoplanets with sufficient precision to identify high mean molecular weight atmospheres, and TRAPPIST-1's seven known transiting Earth-sized planets are particularly favorable targets. To assist community preparations for JWST, we use simulations of plausible post-ocean-loss and habitable environments for the TRAPPIST-1 exoplanets, and test simulations of all bright object time series spectroscopy modes and all MIRI photometry filters to determine optimal observing strategies for atmospheric detection and characterization using both transmission and emission observations. We find that transmission spectroscopy with NIRSpec Prism is optimal for detecting terrestrial, CO2 containing atmospheres, potentially in fewer than 10 transits for all seven TRAPPIST-1 planets, if they lack high altitude aerosols. If the TRAPPIST-1 planets possess Venus-like H2SO4 aerosols, up to 12 times more transits may be required to detect atmospheres. We...
more | pdf | html
None.
###### Tweets
StarshipBuilder: The Detectability and Characterization of the TRAPPIST-1 Exoplanet Atmospheres with JWST https://t.co/B2OEp6z9Ol
None.
None.
###### Other stats
Sample Sizes : None.
Authors: 3
Total Words: 0
Unique Words: 0
##### #5. Trans-Neptunian objects and Centaurs at thermal wavelengths
###### Thomas Müller, Emmanuel Lellouch, Sonia Fornasier
The thermal emission of transneptunian objects (TNO) and Centaurs has been observed at mid- and far-infrared wavelengths - with the biggest contributions coming from the Spitzer and Herschel space observatories-, and the brightest ones also at sub-millimeter and millimeter wavelengths. These measurements allowed to determine the sizes and albedos for almost 180 objects, and densities for about 25 multiple systems. The derived very low thermal inertias show evidence for a decrease at large heliocentric distances and for high-albedo objects, which indicates porous and low-conductivity surfaces. The radio emissivity was found to be low ($\epsilon_r$=0.70$\pm$0.13) with possible spectral variations in a few cases. The general increase of density with object size points to different formation locations or times. The mean albedos increase from about 5-6% (Centaurs, Scattered-Disk Objects) to 15% for the Detached objects, with distinct cumulative albedo distributions for hot and cold classicals. The color-albedo separation in our sample...
more | pdf | html
None.
###### Tweets
StarshipBuilder: Trans-Neptunian objects and Centaurs at thermal wavelengths https://t.co/xsLSbVjgn8
None.
None.
###### Other stats
Sample Sizes : None.
Authors: 3
Total Words: 0
Unique Words: 0
##### #6. A strategy to search for an inner binary black hole from the motion of the tertiary star I: a perturbative analytic approach to a coplanar and near-circular three-body system and its application to 2M05215658+4359220
###### Toshinori Hayashi, Shijie Wang, Yasushi Suto
There are several on-going projects to detect a number of stars orbiting around invisible objects. A fraction of them may be a triple system consisting of an inner binary black hole (BBH) and an outer orbiting star. In this paper, we propose a methodology to search for a signature of the inner BBH, possibly a progenitor of gravitational wave sources recently detected by LIGO, from the precise radial velocity follow-up of the outer star. For simplicity and definiteness, we focus on a coplanar and near-circular three-body system and derive analytic perturbation formulae of the orbital elements for the outer star. This formula will be useful in designing a follow-up radial velocity observation to search for an invisible BBH. As a specific example, we consider the 2M05215658+4359220 system of a red giant orbiting around an unseen object. The resulting constraint reveals that if the unseen companion is indeed a BBH of roughly equal masses, its orbital period should be less than a couple of weeks. Future radial-velocity monitoring of...
more | pdf | html
None.
None.
###### Other stats
Sample Sizes : None.
Authors: 3
Total Words: 10336
Unique Words: 2301
Assert is a website where the best academic papers on arXiv (computer science, math, physics), bioRxiv (biology), BITSS (reproducibility), EarthArXiv (earth science), engrXiv (engineering), LawArXiv (law), PsyArXiv (psychology), SocArXiv (social science), and SportRxiv (sport research) bubble to the top each day.
Papers are scored (in real-time) based on how verifiable they are (as determined by their Github repos) and how interesting they are (based on Twitter).
To see top papers, follow us on twitter @assertpub_ (arXiv), @assert_pub (bioRxiv), and @assertpub_dev (everything else).
To see beautiful figures extracted from papers, follow us on Instagram.
Tracking 128,326 papers. | 2019-05-20 06:49:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4697529673576355, "perplexity": 11749.768815532807}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255773.51/warc/CC-MAIN-20190520061847-20190520083847-00417.warc.gz"} |
https://holooly.com/solutions/the-magnetic-circuit-of-fig-2-4a-has-dimensions-ac-44-cm2-lg-0-06-cm-lc-40-cm-n-600-turns-assume-the-value-of-%C2%B5r-6000-for-iron-find-the-exciting-current-for-bc-1-2-t-and-the/ | Products
Rewards
from HOLOOLY
We are determined to provide the latest solutions related to all subjects FREE of charge!
Enjoy Limited offers, deals & Discounts by signing up to Holooly Rewards Program
HOLOOLY
HOLOOLY
TABLES
All the data tables that you may search for.
HOLOOLY
ARABIA
For Arabic Users, find a teacher/tutor in your City or country in the Middle East.
HOLOOLY
TEXTBOOKS
Find the Source, Textbook, Solution Manual that you are looking for in 1 click.
HOLOOLY
HELP DESK
Need Help? We got you covered.
## Q. 2.1
The magnetic circuit of Fig. 2.4(a) has dimensions: $A_{c} = 4\times 4\ cm^{2}$, $l_{g} = 0\cdot 06$ cm, $l_{c} = 40$ cm; $N = 600$ turns. Assume the value of $\mu _{r} = 6000$ for iron. Find the exciting current for $B_{c} = 1\cdot 2$ T and the corresponding flux and flux linkages.
## Verified Solution
From Eq. (2.9),
$Ni= H_{c} l_{c}+ H_{g}l_{g}$ ,
$Ni=\frac{B_{c} }{\mu _{c} }l_{c} + \frac{B_{g} }{\mu _{0} }l_{g}$
the ampere-turns for the circuit are given by $Ni=\frac{B_{c} }{\mu _{o}\mu _{r} }l_{c} + \frac{B_{g} }{\mu _{0} }l_{g}$
Neglecting fringing, $A_{c} = A_{g}$ and therefore $B_{c} = B_{g}$.
Then $i=\frac{B_{c} }{\mu _{0}N } \left\lgroup\frac{l_{c} }{\mu _{r} }+ l_{g} \right\rgroup =\frac{1\cdot 2}{4\pi \times 10^{- 7}\times 600 } \left\lgroup\frac{40}{6000}+ 0\cdot 06 \right\rgroup \times 10^{- 2} =1\cdot 06$A
The reader should note that the reluctance of the iron path of 40 cm is only $\left\lgroup\frac{2/ 3}{6} \right\rgroup =0\cdot 11$ of the reluctance of the $0\cdot 06$ cm air-gap.
$\phi =B_{c}A_{c}=1\cdot 2\times 16\times 10^{-4} =19\cdot 2\times 10^{-4}$ Wb
Flux linkages, $\lambda =N\phi =600\times 19 \cdot 2\times 10^{-4} =1\cdot 152$ Wb-turns
If fringing is to be taken into account, one gap length is added to each dimension of the air-gap constituting the area.
Then $A_{g} =\left(4+ 0\cdot 06\right) \left(4+ 0\cdot 06\right) =16\cdot 484 cm^{2}$
Effective $A_{g} \gt A_{c}$ reduces the air-gap reluctance. Now
$B_{g} =\frac{19\cdot 2\times 10^{-4} }{16\cdot 484\times 10^{-4} } =1\cdot 165T$
From Eq. (i)
$Ni=\frac{B_{c} }{\mu _{o}\mu _{r} }l_{c} + \frac{B_{g} }{\mu _{0} }l_{g}$
$i=\frac{1}{\mu _{0}N } \left\lgroup\frac{B_{c}l_{c} }{\mu _{r} }+ B_{g}l_{g} \right\rgroup =\frac{1}{4\pi \times 10^{-7}\times 600 } \left\lgroup\frac{1\cdot 2\times 40\times 10^{-2} }{6000}+ 1\cdot 165\times 0\cdot 06\times 10^{-2} \right\rgroup =1\cdot 0332$A | 2022-07-01 17:03:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 21, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39271974563598633, "perplexity": 7271.29732486408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103943339.53/warc/CC-MAIN-20220701155803-20220701185803-00738.warc.gz"} |
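As a cross-check, the numbers in this solution can be reproduced with a short script (SI units throughout; this is only arithmetic verification, not part of the original solution):

```python
from math import pi

mu0 = 4 * pi * 1e-7            # H/m
N, mu_r = 600, 6000
Bc = 1.2                       # T
lc, lg = 40e-2, 0.06e-2        # m
Ac = 16e-4                     # m^2 (4 cm x 4 cm)

# Exciting current, fringing neglected
i1 = Bc / (mu0 * N) * (lc / mu_r + lg)

# Flux and flux linkages
phi = Bc * Ac
lam = N * phi

# With fringing: one gap length added to each dimension of the gap area
Ag = (4 + 0.06) ** 2 * 1e-4    # m^2
Bg = phi / Ag
i2 = (Bc * lc / mu_r + Bg * lg) / (mu0 * N)

print(round(i1, 2), round(lam, 3), round(i2, 4))  # 1.06 1.152 1.033
```

The results agree with the worked values: 1.06 A without fringing, 1.152 Wb-turns of flux linkage, and about 1.033 A when fringing is taken into account.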
http://mathhelpforum.com/algebra/191991-newby-needing-know-how-combine-natural-number-algebraic-fraction.html | # Math Help - newby needing to know how to combine a natural number and an algebraic fraction
1. ## newby needing to know how to combine a natural number and an algebraic fraction
I'm new to algebra and need to know how:
1 + 2/x^2 + 4 can be combined into 1 term
thanks
2. ## Re: newby needing to know how to combine a natural number and an algebraic fraction
Originally Posted by ahdavewest751
I'm new to algebra and need to know how:
1 + 2/x^2 + 4 can be combined into 1 term
thanks
Are you familiar with adding numerical fractions? If so you know that to add fractions we need to have a common denominator.
Spoiler:
For example $\dfrac{2}{3} + \dfrac{1}{7} = \dfrac{2}{3} \times \dfrac{7}{7} + \dfrac{1}{7} \times \dfrac{3}{3} = \dfrac{14}{21} + \dfrac{3}{21} = \dfrac{17}{21}$. For integers (whole numbers) you learned that you multiply top and bottom by whatever is suitable.
First you may as well say that $1+4 = 5$ to make our life easier
It is the same principle with algebra, you want to get it so that $5 \text{ and }\dfrac{2}{x^2}$ have the same denominator. So what would you multiply $\dfrac{5}{1}$ by to get a common denominator?
3. ## Re: newby needing to know how to combine a natural number and an algebraic fraction
ok so I tried this:
1 + 2/x^2 + 4 =
1/x^2 + 4 + 2/x^2 + 4 =
then multiplied the top left numerator by the bottom right denominator and added the numerator to get:
x^2 + 6/x^2 + 4
is that the way its done?
4. ## Re: newby needing to know how to combine a natural number and an algebraic fraction
I'm extremely confused. Is it:
$1+\frac{2}{x^2}+4$, or:
$1+\frac{2}{x^2+4}$
The lack of brackets implies the former; your working out (and logic) implies the latter is actually the case, however.
5. ## Re: newby needing to know how to combine a natural number and an algebraic fraction
Check out
http://www.mathhelpforum.com/math-he...orial-266.html
for info on how to make fractions look nice.
But given:
$1+\frac{2}{x^2+4}$
you are correct.
$\frac{1}{1}*\frac{x^2+4}{x^2+4}+\frac{2}{x^2+4} =$
$\frac{x^2+4}{x^2+4}+\frac{2}{x^2+4} =$
$\frac{x^2+6}{x^2+4}$ | 2015-11-25 19:10:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6846897602081299, "perplexity": 1107.3397798531669}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398445291.19/warc/CC-MAIN-20151124205405-00103-ip-10-71-132-137.ec2.internal.warc.gz"} |
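For anyone wanting to double-check the algebra, here is a tiny Python sketch that samples both sides of the final identity at a few points:

```python
# Check 1 + 2/(x^2 + 4) == (x^2 + 6)/(x^2 + 4) at a few sample values of x
for x in (-3.5, -1.0, 0.0, 2.0, 10.0):
    lhs = 1 + 2 / (x**2 + 4)
    rhs = (x**2 + 6) / (x**2 + 4)
    assert abs(lhs - rhs) < 1e-12
print("identity holds at all sampled points")
```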
https://math.stackexchange.com/questions/2972003/if-there-exists-a-discontinuous-function-fx-which-satisfies-f-fracxy2?noredirect=1 | # if there exists a discontinuous function f(x) which satisfies $f(\frac{x+y}{2})\leqslant\frac{1}{2}f(x)+\frac{1}{2}f(y)$ but is not convex? [duplicate]
This question comes from Rudin's book "principles of mathematical analysis" chapter 4,exercise 24,on page 101.
The original question is:
Assume that f is a continuous real function defined in $$(a,b)$$ such that $$f(\frac{x+y}{2})\leqslant\frac{1}{2}f(x)+\frac{1}{2}f(y)$$ for all $$x,y\in (a,b)$$.Prove that f is convex.
I have solved this question. But when reading the definition of a convex function, I find that a convex function is not always continuous. So I want to ask: does there exist a discontinuous function which satisfies $$f(\frac{x+y}{2})\leqslant\frac{1}{2}f(x)+\frac{1}{2}f(y)$$ but is not convex? Thanks!
marked as duplicate by Brahadeesh, Calvin Khor, Parcly Taxel, Arnaud D., José Carlos SantosOct 26 '18 at 16:22
• All convex functions on $(a,b)$ are continuous. – Kavi Rama Murthy Oct 26 '18 at 12:04
Any additive function, i.e., one with $$\tag1f(x+y)=f(x)+f(y)$$ for all $$x,y$$ will have $$f\left(\frac{x+y}2\right) =f\left(\frac x2\right)+f\left(\frac y2\right)=\frac12\left(f\left(\frac x2\right)+f\left(\frac x2\right)+f\left(\frac y2\right)+f\left(\frac y2\right)\right)=\frac12\left(f(x)+f(y)\right).$$ Once you abandon continuity, there are many solutions to $$(1)$$ - and they are not convex either. In fact, they are so discontinuous that they are unbounded in every open interval. | 2019-10-14 16:01:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8612167239189148, "perplexity": 265.35931545026574}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986653876.31/warc/CC-MAIN-20191014150930-20191014174430-00177.warc.gz"} |
https://math.stackexchange.com/questions/2467775/find-basis-for-irreps | # Find basis for irreps
I have six-dimensional (complex) matrices which span a representation of $S_4$ that decomposes into the two three-dimensional irreducible representations of the group. I would like to find out the basis vectors such that my representation matrices will be block-diagonal, however I am failing to do so … Is there a trick or algorithm one could use?
Thanks!
You could compute the matrices $M_\chi=\frac{1}{24}\sum_{g \in S_4} \chi(1)\chi(g^{-1})M_g$ which correspond to the primitive central idempotents of the two representations. These will turn out to be projection maps to the two three-dimensional irreducible components, from which you can get the basis vectors you want.
• I am having trouble with your suggestion. I computed $M_\chi$ for one irrep (am I right to take the characters of the irrep, and $M_g$ is my 6x6 matrix?) and I get $M^2 = M$, as I would expect. However I am not sure how I would figure the new basis vectors from that, or the block diagonal matrices? Computing $M_\chi M_g M_\chi$ does not work … Many thanks! – Faser Oct 12 '17 at 14:15
• If $M^2=M$, you're probably doing everything right. Hopefully also your $M$ has rank 3 (since it's a projection onto a 3-dimensional subspace). You want three of your basis vectors to form a basis for that subspace, so the easiest way to get them is probably to just take three linearly independent columns of $M$. Then you do it all over again with the other character to get the other three basis vectors... – Micah Oct 13 '17 at 3:06 | 2019-10-19 07:01:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8714518547058105, "perplexity": 174.17166209326956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986692126.27/warc/CC-MAIN-20191019063516-20191019091016-00096.warc.gz"} |
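Micah's recipe is easy to try concretely. Below is a sketch (plain Python, no external libraries) using a smaller stand-in — the standard 2-dimensional irrep of $S_3$ inside its 3-dimensional permutation representation, rather than the asker's 6-dimensional $S_4$ representation — showing that $M_\chi$ really comes out as an idempotent projection whose independent columns give the basis:

```python
from itertools import permutations
from fractions import Fraction

def perm_matrix(p):
    # permutation matrix: M[i][j] = 1 iff p sends j to i
    n = len(p)
    return [[1 if p[j] == i else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def chi_std(p):
    # character of the standard 2-dim irrep of S3: (# fixed points) - 1
    return sum(1 for i in range(len(p)) if p[i] == i) - 1

G = list(permutations(range(3)))                 # S3 as permutations of {0,1,2}
M = [[Fraction(0)] * 3 for _ in range(3)]
for p in G:
    inv = tuple(p.index(i) for i in range(3))    # g^{-1}
    c = Fraction(2 * chi_std(inv), len(G))       # chi(1) * chi(g^{-1}) / |G|, with chi(1) = 2
    Mg = perm_matrix(p)
    for i in range(3):
        for j in range(3):
            M[i][j] += c * Mg[i][j]

assert matmul(M, M) == M   # idempotent: a genuine projection
print(M[0])                # first row is (2/3, -1/3, -1/3): rank-2 projector onto the sum-zero plane
```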
http://en.m.wikibooks.org/wiki/SPM/Compile_the_gnumex_mex_files | # SPM/Compile the gnumex mex files
If the provided shortpath.dll and uigetpath.dll don't work, it's probably because they were compiled against a previous version of Matlab. But the source code is included, so you can re-compile with your current version of Matlab. First, set up mex; at the Matlab prompt type:
>> mex -setup
and Matlab will say
Please choose your compiler for building external interface (MEX) files:
Would you like mex to locate installed compilers [y]/n?
So choose 'y'. Matlab then lists the available compilers:
Select a compiler:
[1] Lcc-win32 C 2.4.1 in C:\PROGRA~1\MATLAB\R2007a\sys\lcc
[2] Microsoft Visual C++ .NET 2003 in C:\Program Files\Microsoft Visual Studio .NET 2003
[0] None
Compiler:
Note: I have Visual Studio 2003 installed, that's why I have 2 options. If you have Borland installed you would also see it listed above.
The compiler used with MinGW and Cygwin (gcc) isn't listed, because it's not registered in Windows the same way that Matlab, Visual Studio etc. are. This is the very reason for using Gnumex to setup options for gcc. Choose [1] to use the compiler provided with Matlab.
Now change directories into c:\gnumex\src and compile the .c files:
mex shortpath.c -output shortpath.dll
mex uigetpath.c -output uigetpath.dll
You might get an error like this when compiling uigetpath.c:
c:\docume~1\beau\locals~1\temp\mex_c58982ee-6281-4e1f-e2ac-723987a01052\uigetpath.obj .text: undefined reference to '_SHGetMalloc@4'
c:\docume~1\beau\locals~1\temp\mex_c58982ee-6281-4e1f-e2ac-723987a01052\uigetpath.obj .text: undefined reference to '_SHBrowseForFolder@4'
c:\docume~1\beau\locals~1\temp\mex_c58982ee-6281-4e1f-e2ac-723987a01052\uigetpath.obj .text: undefined reference to '_SHGetPathFromIDList@8'
C:\PROGRA~1\MATLAB\R2007A\BIN\MEX.PL: Error: Link of 'uigetpath.dll' failed.
??? Error using ==> mex at 206
Unable to complete successfully.
But luckily uigetpath.dll is not crucial to the function of Gnumex. I believe it's only used when a "Browse" button is pushed, to present the user with a browseable folder tree. To avoid using it, just type paths into the text boxes in Gnumex. This error does not occur when I use the Visual Studio compiler most likely because its mexopts.bat file is setup to properly link to Microsoft libraries for the _SH functions listed.
Now that you have a working shortpath.dll (and possibly uigetpath.dll) copy it/them to the root Gnumex folder (c:\gnumex). | 2014-10-21 12:02:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5645284652709961, "perplexity": 11861.168787566228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444385.33/warc/CC-MAIN-20141017005724-00011-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/invariance-and-relativity.818743/page-3 | # Invariance and relativity
Nugatory
Mentor
So using my previous example please explain using the math why the object that bounces off of the wall with enough momentum in Bob's frame to kill him but either does not bounce off the wall in Alice's frame or bounces off the wall with less momentum than required to kill Bob in Alice's frame, explain please how these phenomenon all end up with the same outcome.
Your triggering device is responding to the magnitude of the four-momentum, which will be the same no matter which frame you use to calculate it.
You can't build a device that triggers off of the value of a frame-dependent quantity such as the three-momentum, for the same reason that you can't build a device that will trigger or not according to whether the person watching (that's "watching"! - not "interacting with"!) the device is moving or at rest relative to the device.
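To make the three-momentum vs. four-momentum distinction concrete, here is a small sketch (Python, units with c = 1, one spatial dimension, made-up numbers): a boost changes E and p individually, but the magnitude $E^2 - p^2 = m^2$ that an invariant-triggered device would respond to comes out the same in every frame.

```python
import math

def boost(E, p, beta):
    # Lorentz boost along the momentum axis, units with c = 1
    g = 1 / math.sqrt(1 - beta**2)
    return g * (E - beta * p), g * (p - beta * E)

m, v = 1.0, 0.2                       # rest mass and speed in Bob's frame (made-up values)
g = 1 / math.sqrt(1 - v**2)
E, p = g * m, g * m * v               # four-momentum components (E, p)

E2, p2 = boost(E, p, 0.99)            # the same particle seen from Alice's frame
print(p, p2)                          # three-momentum: frame-dependent
print(E**2 - p**2, E2**2 - p2**2)     # invariant magnitude: ~1.0 (= m^2) in both frames
```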
Thanks.
For those of us that don't understand 3 momentum vs 4 momentum can you give a quick example?
Thanks.
For those of us that don't understand 3 momentum vs 4 momentum can you give a quick example?
This is not the example you want but you should remember that the relative velocity between the ball and Bob's head is frame invariant.
Everyone sees the ball strike with the same force.
Janus
Staff Emeritus
Gold Member
Thanks.
For those of us that don't understand 3 momentum vs 4 momentum can you give a quick example?
So using my previous example please explain using the math why the object that bounces off of the wall with enough momentum in Bob's frame to kill him but either does not bounce off the wall in Alice's frame or bounces off the wall with less momentum than required to kill Bob in Alice's frame, explain please how these phenomenon all end up with the same outcome.
But it will have enough momentum to kill Bob is both frames. That's the whole point behind the Lorentz transforms. You start with the fact that anything that happens according to one frame happens according to all frames. Then you consider that c is invariant. The Lorentz transforms are what allows any frame maintain the consistency of both these facts. In other words, in order for Alice to agree with Bob that the object kills him and measure the speed of light relative to herself as being c, she has to measure Bob and his ship as length contracted, time running slow for Bob, and a different clock synchronization than Bob does. The Lorentz transforms are what maintain this consistency of events and the invariance of the speed of light.
Let's take this simple example. In Bob's frame, we have two objects of 1kg each moving at 0.01c relative to the ship and towards each other until they hit in a non-elastic collision (like two balls of clay) and stick together. The resulting 2kg mass will be motionless with respect to the ship and have 0 momentum as measured by Bob.
Now consider what Alice would conclude. For her the ship (and Bob) are moving at 0.99c. Using the addition of velocities formula it works out that the velocity of one object will be 0.990197c and that of the other object will be 0.989799c. This gives them speeds of 0.000197c and 0.000201c with respect to the ship according to Alice. At first blush it then seems that they could not possibly collide, stick together and have the resulting mass remain motionless with respect to the Ship.
However, this first impression is wrong. Let's work out the whole problem from Alice's frame:
One object with a rest mass of 1kg is moving at 0.989799c, its momentum is $\rho = mv\gamma$ where $\gamma = \frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$ and m is the rest mass for the object
For ease of calculation, we will use units where c=1, which gives us a numerical value for its momentum of 6.94738.
The other object is moving at 0.990197c and has a momentum of 7.08915 (in the same direction). So after the objects collide and stick, the resulting mass will have a momentum of 14.03653. Plugging this and a rest mass value of 2 kg into the momentum formula above gives the resultant velocity of this mass with respect to Alice. This works out to 0.99c, or the same speed as Bob and his ship. In other words, both Bob and Alice agree that, according to their own measurements and observations, the result of the collision is a mass that remains motionless with respect to the ship.
The fact that the two masses had different speeds with respect to Bob as measured from Alice's frame did not, in the end, change the result as far as Alice was concerned. Assuming that it did would lead to a false conclusion. It is the same type of false conclusion you are making when you assume that an object thrown against a wall, coming back and hitting Bob hard enough to kill him in his own frame, would not hit him hard enough to do so as measured from Alice's frame.
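Janus's numbers are straightforward to reproduce (a quick Python sketch, units with c = 1):

```python
import math

def add_v(u, v):
    # relativistic velocity addition, units with c = 1
    return (u + v) / (1 + u * v)

def momentum(m, v):
    return m * v / math.sqrt(1 - v**2)

v1 = add_v(0.99,  0.01)    # ~0.990197, object moving along with the ship's motion
v2 = add_v(0.99, -0.01)    # ~0.989799, object moving against it
p_total = momentum(1, v1) + momentum(1, v2)   # ~14.0365

# Speed of the combined 2 kg lump: solve p = 2 v / sqrt(1 - v^2) for v
v_lump = p_total / math.sqrt(4 + p_total**2)
print(round(v_lump, 3))    # 0.99 -- motionless with respect to the ship, just as Bob measures
```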
Thanks a lot for the very detailed explanation. I learned a lot from this thread. | 2020-10-26 13:28:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3849223256111145, "perplexity": 428.31716458047646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107891228.40/warc/CC-MAIN-20201026115814-20201026145814-00289.warc.gz"} |
https://math.stackexchange.com/questions/1739995/int-bigcup-n-1-inftye-nf-sum-n-1-infty-int-e-nf-given-f-pos | # $\int_{\bigcup_{n=1}^{\infty}E_n}f=\sum_{n=1}^{\infty}\int_{E_n}f$ given $f$ positive and measurable
I'm learning about measure theory (specifically Lebesgue intregation) and need help with the following problem:
Let $f:\mathbb{R}\rightarrow[0,+\infty)$ be measurable and let $\{E_n\}$ be a collection of pairwise disjoint measurable sets. Prove that $\int_{\bigcup_{n=1}^{\infty}E_n}f=\sum_{n=1}^{\infty}\int_{E_n}f.$
For convenience I set $E=\bigcup_{n=1}^{\infty}E_n$.
This problem looks like an application of the monotone convergence theorem but I'm having a hard time applying it. I need to find a sequence of functions that is positive and nondecreasing but I don't know how to define it.
Let $E=\bigcup_{n=1}^{\infty}E_n$, then $f\chi_E=\sum_{n=1}^{\infty}f\chi_{E_n}$, hence $$\int_Ef=\int_{\mathbb{R}}f\chi_E=\int_{\mathbb{R}}\sum_{n=1}^{\infty}f\chi_{E_n}=\sum_{n=1}^{\infty}\int_{\mathbb{R}}f\chi_{E_n}=\sum_{n=1}^{\infty}\int_{E_n}f$$ The monotone convergence theorem is what allows us to interchange the sum and integral, with $g_m=\sum_{n=1}^mf\chi_{E_n}$ being the non-decreasing sequence.
• Thank you for your clear explanation. I think there is a typo in your last sentence with the indexes in the sum. Could you verify it so I can accept your answer? – glpsx Apr 13 '16 at 12:12
• Where's the typo? It seems ok to me. – carmichael561 Apr 13 '16 at 15:47
• You are correct, sorry about that. I got confused by the fact that we are summing from $n = 1$ to $m$ for $g_m$. Thank you for your clear explanation (and time), much appreciated. – glpsx Apr 13 '16 at 15:56
Set $$f_N=\sum_{n=1}^{N} f\chi_{E_n}$$.
As $$f\chi_{E_n}\geq 0$$ for each $$n$$, $$f_1\leq f_2 \leq f_3 \leq f_4....$$.
Now observe that $$f_n \rightarrow f \chi_{E}$$ as $$n\rightarrow \infty$$, where $$E=\bigcup_{n=1}^{\infty}E_n$$. Thus by MCT we obtain the following equality:
$$\lim_{n} \int f_n d\mu= \int f\chi_{E} d\mu$$.
Which is the conclusion you seek. | 2021-05-11 08:19:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9612064361572266, "perplexity": 110.28657528105146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991904.6/warc/CC-MAIN-20210511060441-20210511090441-00342.warc.gz"} |
http://math.sns.it/paper/1407/ | # A note on admissible solutions of 1d scalar conservation laws and 2d Hamilton-Jacobi equations
created on 25 Jun 2004
modified by delellis on 05 May 2011
[BibTeX]
Published Paper
Inserted: 25 jun 2004
Last Updated: 5 may 2011
Journal: J. Hyperbolic Diff. Equ.
Volume: 1
Number: 4
Pages: 813-826
Year: 2004
Abstract:
Let $\Omega\subset \mathbb{R}^2$ be an open set and let $f\in C^2(\mathbb{R})$ with $f''>0$. In this note we prove that entropy solutions of $D_t u + D_x f(u) =0$ belong to $SBV_{loc} (\Omega)$. As a corollary we prove the same property for gradients of viscosity solutions of planar Hamilton--Jacobi PDEs with uniformly convex Hamiltonians.
For the most updated version and eventual errata see the page
http://www.math.uzh.ch/index.php?id=publikationen&key1=493
2017-09-25 06:15:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20549902319908142, "perplexity": 2871.377339357875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690340.48/warc/CC-MAIN-20170925055211-20170925075211-00169.warc.gz"}
https://reverseengineering.stackexchange.com/questions/4311/help-reversing-a-edb-database-file-for-pioneers-rekordbox-software | # Help reversing a EDB database file for Pioneers Rekordbox software
Pioneers Rekordbox software is a music management tool for DJs. One of its features is BPM detection for music files. Unfortunately it doesn't write this information to the BPM frame of the files ID3 tags, and instead keeps the detected BPM in it's own database files.
I'm writing a CLI tool to help me better manage my music, and one of the things I would like it to do, is extract the BPM data from the rekordbox database for each song.
As Guntram Blohm pointed out, the BPM is almost certainly not stored in the two ANLZ files. Instead it appears to be stored in the Rekordbox 'database.edb' file. I've uploaded an example edb file here [1] which contains one track with the BPM '170'.
According to this forum thread [2] the edb format is not an open format.
After a little more research it looks like it's a proprietary high-performance SQL database intended for use in embedded applications. (After EVEN MORE research, the database is called 'DeviceSQL'. Read the original authors Quora post about it). Doesn't seem like it is something that would be easy to reverse :(
For anyone else looking to extract the BPM information from their rekordbox library: It turns out there is a XML export that you can use. While not quite as automated as just reading the database file, it is a nice standard format!
Old question: Each track seems to have two files kept in the database (the database is just a directory tree of these files) for it. Here is the pair of files for an example track [4]. The BPM was detected as 170 for the track, though I suspect it's storing it as a floating point or double as some other tracks can be detected at numbers like "169.96". Though looking for the double and floating point hex representations didn't yield any matches for me.
I see the files have what look like headers, for example "PPTH" followed by the file path. and "PQTZ", for the Rekordbox quantization feature. But I'm not too familiar with file formats, so I can't tell if it's using a standard file type. Or something more proprietary.
If anyone is interested in taking a look at the files and pointing me in the right direction it would be greatly appreciated! Right now I'm just trying to figure out how the BPM is stored.
Since I don't have enough reputation yet I couldn't post more than one link :( Here are the links for the references in brackets above: https://gist.github.com/EvanPurkhiser/72b37edd4a6ea26fbe73
The precise BPMs are actually in the data files (filename).DAT (maybe the overall BPM is in the edb, but I can't confirm). So I have reversed both data files created by RekordBox:
file.DAT
--------
/numbers are all big endian/
[tag] - 4 byte string
4byte - tag header size
4byte - segment size (including tag header)
(in multibit fields, msb-to-lsb (left-to-right) is the general direction)
//////////////////////////////
PMAI - main file descriptor
4byte - head size (28)
4byte - total file size
4byte - ??? (1)
4byte - ??? (65536)
4byte - ??? (65536)
4byte - ??? (0)
PPTH - file path
4byte - head size (16)
4byte - tag size
4byte - data length
data_bytes - file path in UTF16 (big endian) \0 terminated
PVBR - VBR seek table
4byte - head size (16)
4byte - tag size (1620) (4*400+4)
4byte - 0
>entries>
4byte - file pos
>last_entry>
4byte - ???
PQTZ - Quantized time zones
4byte - head size (24)
4byte - tag size
4byte - 0
4byte - ??? (524288=0x80000)
4byte - number of entries
>entries>
2byte - beat phase (1-2-3-4)
2byte - bpm*100
4byte - time index (msec)
PWAV - Low resolution Wave display data (5+3bit)
4byte - head size (20)
4byte - tag size (420)
4byte - data size (400)
4byte - ??? (65536)
>entries>
3bit - color index
5bit - height
PWV2 - Lowest resolution Wave display data (4bit)
4byte - head size (20)
4byte - tag size (120)
4byte - data size (100)
4byte - ??? (65536)
>entries>
4bit - 0
4bit - height
PCOB - CUE Object ///first PCOB for hot cues, second PCOB for memory
///only generated for USB storage,
///otherwise contains only dummy data and actual cue data stored in the edb
4byte - head size (24)
4byte - tag size
4byte - hotCUE? (1=hot cue, 0=memory)
4byte - number of cue points
4byte - memories (-1= hot cue)
>entry tags>
PCPT - CUE Point
4byte - head size (28)
4byte - tag size (56)
4byte - hot cue no#, 0 otherwise
4byte - active (0=inactive / 4=active)
4byte - (65536)
4byte - ???? -----point type: 0xffff ffff = hot cue //// memory first: 0xffff xxxxx ----- memory last: 0x xxxx ffff
>datas>
1byte - cue type 1 = single / 2 = loop
1byte - 0
2byte - ??? (1000)
4byte - start time (msec)
4byte - loop end (-1 if not used)
16byte - 0
file.EXT
--------
PMAI - main file descriptor
4byte - head size (28)
4byte - total file size
4byte - ??? (1)
4byte - ??? (65536)
4byte - ??? (65536)
4byte - ??? (0)
PPTH - file path
4byte - head size (16)
4byte - tag size
4byte - data length
data_bytes - file path in UTF16 (big endian) \0 terminated
PWV3 - High resolution Wave display data
4byte - head size (24)
4byte - tag size
4byte - ??? (1)
4byte - data size
4byte - ??? (0x0096 0000)
>entries>
3bit - color
5bit - height
When I was reversing, there were no PKEY in the files, so I don't know what it is for (and only seems to have 0 in it on the PC)
So the BPM values are stored in the PQTZ tag (in dynamic mode, you can have different BPMs during the same song, so it makes sense)
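Given the PQTZ layout above, extracting the per-beat BPM values is a few lines of struct unpacking. The sketch below (Python) runs against synthetic bytes — the entry values are invented test data, not taken from a real file:

```python
import struct

def parse_pqtz(buf):
    # PQTZ header: 4-byte tag, then head size (24), tag size, 0, flags, entry count (all big-endian)
    tag, head, size, _zero, _flags, n = struct.unpack(">4sIIIII", buf[:24])
    assert tag == b"PQTZ" and head == 24
    beats = []
    for k in range(n):
        off = head + 8 * k
        phase, bpm100, ms = struct.unpack(">HHI", buf[off:off + 8])
        beats.append((phase, bpm100 / 100.0, ms))   # beat-in-bar, BPM, time index (msec)
    return beats

# Synthetic tag: two beats of a 170 BPM track (values invented for the test)
entries = struct.pack(">HHI", 1, 17000, 0) + struct.pack(">HHI", 2, 17000, 352)
pqtz = struct.pack(">4sIIIII", b"PQTZ", 24, 24 + len(entries), 0, 0x80000, 2) + entries
print(parse_pqtz(pqtz))   # [(1, 170.0, 0), (2, 170.0, 352)]
```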
The file format seems, as you found out, to consist of headers that have a tag. Each of these headers seems to be 16+ byte, with 4 bytes for the tag, 4 bytes for the length of the header, 4 bytes for the size of header + data, and 4 bytes that i'm not sure about. Unfortunately, these length bytes are big endian, which made me think the bpm could be stored in big endian IEEE float as well, which could be the reason you didn't find anything.
The first header, PMAI. seems to be some kind of envelope (its length field is the size of the file itself), the rest of the headers seem to various forms of data content.
I wrote a small program to dump the section names and lengths (please don't use it as an example for good style!):
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
void analyze(char *filename);
int main(int argc, char **argv) {
analyze(argv[1]);
return 0;
}
void analyze(char *filename) {
FILE *fp;
struct {
char tag[4];
int x0;
int x1;
int x2;
} header;
int length;
long pos;
if ((fp=fopen(filename, "rb"))==NULL) {
perror(filename); return;
}
while ((pos=ftell(fp)), fread(&header, sizeof(header), 1, fp)==1) {
header.x0=ntohl(header.x0);  /* all length fields are big endian */
header.x1=ntohl(header.x1);
header.x2=ntohl(header.x2);
printf("%04lx %4.4s: %08x (%06d) | %08x (%06d) | %08x (%06d)\n",
pos, header.tag,
header.x0, header.x0, header.x1, header.x1, header.x2, header.x2);
if (!memcmp(header.tag, "PMAI", 4)) { // outer container, descend into it
length=header.x0;
} else if (!memcmp(header.tag, "PPTH", 4)) {
int i;
for (i=0; i<header.x2; i+=2) { // UTF-16BE path: drop the high byte
getc(fp);
putchar(getc(fp));
}
putchar('\n');
continue;
} else {
length=header.x1; // else skip data
}
fseek(fp, pos+length, SEEK_SET);
}
fclose(fp);
}
which produces the following output:
$ ./sections ANLZ0000.DAT
0000 PMAI: 0000001c (000028) | 000028fc (010492) | 00000001 (000001)
001c PPTH: 00000010 (000016) | 00000100 (000256) | 000000f0 (000240)
E:\music\247 Hardcore\[+singles]\[247HC055] [12B] Al Storm Ft. Malaya - Everytime We Say Goodbye (Technikore Remix).mp3
011c PVBR: 00000010 (000016) | 00000654 (001620) | 00000000 (000000)
0770 PQTZ: 00000018 (000024) | 00001f40 (008000) | 00000000 (000000)
26b0 PWAV: 00000014 (000020) | 000001a4 (000420) | 00000190 (000400)
2854 PWV2: 00000014 (000020) | 00000078 (000120) | 00000064 (000100)
28cc PCOB: 00000018 (000024) | 00000018 (000024) | 00000001 (000001)
28e4 PCOB: 00000018 (000024) | 00000018 (000024) | 00000000 (000000)
$ ./sections ANLZ0000.EXT
0000 PMAI: 0000001c (000028) | 0000cf56 (053078) | 00000001 (000001)
001c PPTH: 00000010 (000016) | 00000100 (000256) | 000000f0 (000240)
E:\music\247 Hardcore\[+singles]\[247HC055] [12B] Al Storm Ft. Malaya - Everytime We Say Goodbye (Technikore Remix).mp3
011c PWV3: 00000018 (000024) | 0000ce26 (052774) | 00000001 (000001)
cf42 PKEY: 00000014 (000020) | 00000014 (000020) | 0000000c (000012)
So, PMAI is the container. PPTH is the name of the MP3 file. PVBR is probably information about variable bit rate, PQTZ the quantization, and PWAV, PWV2 and PWV3 various wave forms. Which leaves only PCOB and PKEY to possibly contain the BPM. Unfortunately, if you look at the hex dump of these:
000028c0 xx xx xx xx xx xx xx xx xx xx xx xx 50 43 4f 42 ............PCOB
000028d0 00 00 00 18 00 00 00 18 00 00 00 01 00 00 00 00 ................
000028e0 ff ff ff ff 50 43 4f 42 00 00 00 18 00 00 00 18 ....PCOB........
000028f0 00 00 00 00 00 00 00 00 ff ff ff ff ............
0000cf40 xx xx 50 4b 45 59 00 00 00 14 00 00 00 14 00 00 ..PKEY..........
0000cf50 00 0c 00 00 00 00
it seems that PCOB contains 00 00 00 00 ff ff ff ff, and PKEY has 00 00 00 00. None of these look like they could mean 170.
This article says 'If rekordbox crashes on startup, rename database.backup.edb to database.edb, if it still crashes, remove all the datafiles'. Since the BPM doesn't seem to be stored in the ANLZ.* files - do you have a database.edb as well? Could the BPM be stored there?
• Thank you! I feel a little silly for not realizing it could be stored somewhere else. The software lets you set the path for where it should store analyzed data, but always sticks its edb file in the user's AppData. Anyway, the edb file looks like it's definitely the file that contains the BPM, as I see (what looks like a field definition) named "BPM". Here is the edb file. Doing a bit of research it looks like the extension is commonly used for MS Outlook's "Exchange Database" file, but that seems silly. – Evan Purkhiser May 11 '14 at 21:53
• According to this the edb file is not an open format. – Evan Purkhiser May 11 '14 at 22:04
• Wow, Evan, we’ve come a long way since you asked this (I didn’t realize it was you who had asked this question—which I’d seen when starting the research that led to dysentery—when I first heard from you). We have the wire protocol pretty well figured out and can gather this data directly from Pioneer hardware over the network; details are written up in github.com/brunchboy/dysentery/blob/master/doc/Analysis.pdf but I would still love to be able to do offline analysis from the files on a thumb drive. All we need is someone to figure out the edb files for the metadata. Any hope? – James Elliott Jun 9 '17 at 14:17
• Hi, James, I would like to help, as I have already started some reversing but abandoned it for lack of time and motivation. As you may know, EDB is a database format (deviceSQL) which was available to try a long time ago, but no more (I haven't found the program anywhere). Most of the metadata are in file.DAT and file.EXT, as described above. All my research is completely black-box, so fully legal (AFAIK). – CodeKiller Jun 27 '17 at 16:10
• That would be fantastic! Right now people running shows who want to work with metadata and a full set of CDJs need to slowly gather all the metadata over the network from a CDJ before the DJs mount the same media on all of the CDJs. Being able to read it directly from the memory stick before the show would be a huge help. – James Elliott Jul 7 '17 at 16:07
The EDB format is used by Microsoft's Extensible Storage Engine (ESE) to provide a storage back-end to a number of applications and services (Exchange, Active Directory, Desktop Search, Windows Live Mail, etc.)
Although the EDB format itself is not documented, it is well supported through a Windows API.
You can also access the contents of an EDB file through libesedb.
If all you want to do is look through the contents of an EDB file in a human-readable manner, the EseDbViewer tool is very good.
# Chromatic Aberration
The inability of a lens to bring the light of different colors to focus at a single point is called axial or longitudinal chromatic aberration.
A lens can be considered as the combination of a number of prisms placed one above the other. Due to this prismatic action, when a beam of light is incident on the lens parallel to its principal axis, it gets split up into its constituent colors. The various colors are brought to focus at different points, since the focal length of the lens is given by,
$\frac{1}{f}=(μ-1)\left(\frac{1}{R_1}+\frac{1}{R_2}\right)$
Hence, the focal length depends upon the refractive index of the material which further depends upon the color of light.
The refractive index of a glass is greater for violet light than that for red light i.e. $μ_v>μ_r$. Hence, the focal length of the lens for the red light is greater than that for violet light. The violet ray of the light gets focused at a point $F_v$ which is closer to the lens and the red ray of the light gets focused at a point $F_r$ which is a little away from the lens. The other colors are focused on the principal axis between points $F_v$ and $F_r$.
If $f_v$ and $f_r$ are the focal lengths of the lens for violet and red light, then the difference $f_r-f_v$ gives the measure of the longitudinal chromatic aberration.
$\text{Longitudinal Chromatic Aberration}=f_r-f_v$
### Expression for Longitudinal Chromatic Aberration
The focal length of the lens for mean light is given by, $\frac{1}{f}=(μ-1)\left(\frac{1}{R_1}+\frac{1}{R_2}\right)$
Where, $R_1$ and $R_2$ are the radii of curvature of the two surfaces of the lens, and $μ$ is the refractive index of the material of the lens for the mean light. $\left(\frac{1}{R_1}+\frac{1}{R_2}\right)=\frac{1}{f(μ-1)}\text{ ____(1)}$
Let $(μ_v, μ_r)$ and $(f_v, f_r)$ be the refractive index of the material of the lens and the focal length of the lens for violet and red light respectively.
Focal length of the lens for violet light is given by, $\frac{1}{f_v}=(μ_v-1)\left(\frac{1}{R_1}+\frac{1}{R_2}\right)\text{ ____(2)}$
Focal length of the lens for red light is given by, $\frac{1}{f_r}=(μ_r-1)\left(\frac{1}{R_1}+\frac{1}{R_2}\right)\text{ ____(3)}$
Substituting the value of $\left(\frac{1}{R_1}+\frac{1}{R_2}\right)$ in equation $(2)$ and $(3)$,
$\frac{1}{f_v}=\frac{μ_v-1}{f(μ-1)} \text{ ____(4)}$
$\text{and, }\frac{1}{f_r}=\frac{μ_r-1}{f(μ-1)} \text{ ____(5)}$
Subtracting equation $(5)$ from $(4)$,
$\frac{1}{f_v}-\frac{1}{f_r}=\frac{μ_v-1}{f(μ-1)}- \frac{μ_r-1}{f(μ-1)}$
$\frac{f_r-f_v}{f_vf_r}=\frac{(μ_v-1)-(μ_r-1)}{f(μ-1)}$
$f_r-f_v=\frac{μ_v-μ_r}{f(μ-1)}f_vf_r\text{ ____(6)}$
The focal length of the lens for mean light can be taken as the geometric mean of the focal lengths for violet and red light i.e. $f=\sqrt{f_vf_r}$, so that $f_vf_r=f^2 \text{ ____(7)}$
Also, the dispersive power of the material of the lens is given by, $ω=\frac{μ_v-μ_r}{μ-1} \text{ ____(8)}$
From equations $(6)$, $(7)$ and $(8)$, $f_r-f_v=\frac{ω}{f}f^2$ $f_r-f_v=ωf$
Chromatic Aberration $=$ Dispersive Power $×$ Mean Focal Length
The dispersive power $ω$ is always positive, whereas $f$ is positive for convex lens and negative for concave lens. Therefore, chromatic aberration is positive for convex lens and it is negative for concave lens.
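The result $f_r-f_v=ωf$ can be checked with a quick numeric example. The refractive indices below are typical crown-glass values assumed for illustration; they do not come from the article.

```python
# Illustrative crown-glass values (assumed): violet, red, and mean index.
mu_v, mu_r, mu = 1.523, 1.514, 1.518
f = 20.0  # mean focal length in cm (convex lens, assumed)

omega = (mu_v - mu_r) / (mu - 1)   # dispersive power, eq. (8)
aberration = omega * f             # longitudinal aberration f_r - f_v

print(round(omega, 4), round(aberration, 3))  # approx. 0.0174 and 0.347 cm
```

Both quantities come out positive, as expected for a convex lens.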
# Reactor physics
Projective representation of the thermal neutron flux in a fuel element of a pressurized water reactor with the control rods retracted. Result of a reactor-physics transport calculation.
Reactor physics, comprising reactor theory and experimental reactor physics, deals with the nuclear physics processes in a nuclear reactor . Reactor physics is shaped by the interaction of free neutrons with atomic nuclei in a limited space . The most important physical quantities in reactor physics are the number densities of atoms or atomic nuclei and free neutrons, the nuclear reaction rates , the cross sections of the nuclear reactions and the neutron flux . The subject area of reactor physics mainly comprises the "neutron physics of the reactor", for which the term "reactor neutronics" is rarely used.
Reactor physics is based on nuclear physics, developed out of it and was counted among it until the mid-1950s. Nuclear data (core data) will continue to be exchanged between nuclear physicists and reactor physicists. Other physical disciplines - not dealt with below - such as thermodynamics and fluid mechanics are also important for nuclear reactors, especially for power reactors .
## Physical view of a nuclear reactor
The splitting of atomic nuclei creates free neutrons in a relatively high number density and with high kinetic energy . They spread very quickly in space filled with matter, comparable to a gas . They collide with the atomic nuclei that are in the same space, thereby reducing their kinetic energy, triggering different nuclear reactions and thus changing the number densities of the nuclides in this space . They are finally captured again in fractions of a second by atomic nuclei, mainly fissile atomic nuclei. Therefore the radioactive decay of the neutron ( lifetime 880 s) can be neglected in the neutron balance. With the absorption of the neutron in an atomic nucleus, the “life path” of this neutron is ended; if the capturing nucleus is a fissile nuclide and the fission actually occurs, it releases a new generation of neutrons.
## Basics
The basic equation of reactor physics is Boltzmann's neutron transport equation, a partial integro-differential equation which the neutron angular flux obeys. It can only be solved approximately, by numerical methods.
The neutron angular flux that solves the equation can be interpreted in a classical mechanical way and is a function of real quantities.
The approximation of Boltzmann's neutron transport equation that is most important in practice is the neutron diffusion equation . In the stationary case, the neutron transport equation is thereby approximated by an elliptic partial differential equation whose solution function is the neutron flux .
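In its stationary one-group form, this elliptic equation can be written as follows (standard textbook form, not quoted from this article):

```latex
% Stationary one-group neutron diffusion equation with
% criticality eigenvalue k_eff (standard textbook form):
-\nabla \cdot \bigl( D(\vec{x}) \, \nabla \Phi(\vec{x}) \bigr)
  + \Sigma_a(\vec{x}) \, \Phi(\vec{x})
  = \frac{1}{k_\mathrm{eff}} \, \nu \Sigma_f(\vec{x}) \, \Phi(\vec{x})
```

Here $D$ is the diffusion coefficient, $\Sigma_a$ and $\nu\Sigma_f$ the macroscopic absorption and fission-production cross sections, and $k_\mathrm{eff}$ the effective neutron multiplication factor introduced below.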
Specialist disciplines in which short-term changes of the reactor parameters, particularly accidents, are investigated are reactor kinetics and reactor dynamics. In them, neutron physics is coupled with fluid dynamics and thermodynamics.
## On the history of the separation of nuclear physics and reactor physics
Free neutrons in high number density have only been available for research and application since the Chicago Pile nuclear reactor was commissioned in 1942. All research work on this and on nuclear reactors in general in the years thereafter initially fell within the competence of nuclear physics. The number of physicists who dealt exclusively with neutron physics and nuclear reactors increased significantly, and the methodology increasingly moved away from that of low-energy nuclear physics. For this reason, the reactor physicists separated from the nuclear physicists in the mid-1950s, which was manifested in their own specialist journals and specialist organizations.
The First International Conference on the Peaceful Uses of Atomic Energy in Geneva in 1955 can be seen as a milestone in this separation . At this conference, the nuclear powers USA, USSR, Great Britain and France gave for the first time an insight into their activities and plans regarding the civil use of nuclear energy and into research in reactor physics. Then national nuclear research centers were founded in many countries, in Germany for example the nuclear research center Karlsruhe , the nuclear research facility Jülich and the central institute for nuclear physics Rossendorf . They already contained departments that had reactor physics or reactor theory in their names.
The first two journals, especially for the fields of reactor physics, reactor technology and nuclear technology , were the journals Nuclear Science and Engineering and Атомная энергия (Atomnaja energija) , both founded in 1956. Both journals are intended to be "sources of information on basic and applied research in all scientific fields related to the peaceful uses of nuclear energy and applications of nuclear particles and radiation." Nuclear Science and Engineering is published by the American Nuclear Society . One of the 19 working groups of this society is called Reactor Physics.
In 1957, the semi-autonomous Nuclear Energy Agency (NEA) was founded within the Organization for Economic Cooperation and Development (OECD) to promote the safe, environmentally friendly and economic use of nuclear energy, with its headquarters in Paris. The organization operates various nuclear databases in its Nuclear Data Services and a computer program service for computer programs used for the peaceful uses of nuclear energy. A not inconsiderable part of the programs administered and distributed by the NEA's computer program service was developed by reactor physicists or is used by reactor physicists and reactor technicians. Both reactor physicists and nuclear physicists contribute to the nuclear databases.
In the same year 1957 the first textbook on reactor physics and technology appeared in German. As a result, the author could not fall back on a uniform and generally recognized German terminology. He was faced with the choice of either adopting the English technical terms or creating his own German terminology and decided on the latter. Reactor physics is already mentioned in this book as an equal branch of physics alongside nuclear physics .
## Important physical reactor parameters
The physical quantities of reactor theory worked out up to 1948 were compiled by an employee of the Oak Ridge National Laboratory . Around the end of 1950 this first phase of establishing the quantities was completed. The reactor physicists gave a few quantities names that are inconsistent with the usual rules for naming quantities within physics. One of these is the quantity called neutron flux . After the nuclear reaction rate density, it is considered the most important quantity in reactor physics, yet it is neither a “ flux ” nor a “ flux density ” in the physical sense. Misunderstandings associated with the name of this quantity run through the entire history of the development of reactor physics and in some cases have not yet been resolved. The situation is similar for another reactor quantity, the macroscopic cross section , albeit with less obvious consequences than for the neutron flux.
In the following table, representative of hundreds of physical reactor quantities, those quantities are listed that have been among the most important in reactor physics from the time this phase was completed until today. After the quantity symbol, the independent variables that are relevant for the corresponding quantity are listed in brackets: $\vec{x}$ stands for the position, $E$ for the neutron energy, $\Omega$ for the solid angle and $t$ for the time. The unit symbol $\mathrm{n}$ stands for “number of neutrons”, $\mathrm{r}$ for “number of nuclear reactions” and $\mathrm{a}$ for “number of atoms”. Note that the same letters are used for the quantity symbols of the neutron density $n$ and the nuclear reaction rate density $r$ as for those units, but the quantity symbols differ in font style from the unit symbols.
| Symbol | Unit | Name | Type |
|---|---|---|---|
| $r(\vec{x}, t)$ | $\mathrm{\frac{r}{cm^3\, s}}$ | Nuclear reaction rate density | Scalar |
| $\Phi(\vec{x}, E, t)$ | $\mathrm{\frac{n}{cm^2\, s}}$ | Neutron flux | Scalar |
| $\varphi(\vec{x}, E, t)$ | $\mathrm{\frac{n}{cm^2\, s\, eV}}$ | Neutron flux spectrum | Scalar |
| $\psi(\vec{x}, E, \Omega, t)$ | $\mathrm{\frac{n}{cm^2\, s\, eV\, sr}}$ | Neutron angular flux | Scalar |
| $\vec{J}(\vec{x}, E, t)$ | $\mathrm{\frac{n}{cm^2\, s}}$ | Neutron flux density | Vector |
| $n(\vec{x}, t)$ | $\mathrm{\frac{n}{cm^3}}$ | Neutron number density ("neutron density") | Scalar |
| $N(\vec{x}, t)$ | $\mathrm{\frac{a}{cm^3}}$ | Atomic number density ("atomic density") | Scalar |
| $\sigma(E)$ | $\mathrm{cm^2}$ | Cross section | Scalar |
| $\Sigma(\vec{x}, t)$ | $\mathrm{\frac{1}{cm}}$ | Macroscopic cross section | Scalar |
| $B(\vec{x}, t)$ | $\mathrm{\frac{kW\, d}{g}}$ | Specific burn-up | Scalar |
| $\Phi_t(\vec{x}, E)$ | $\mathrm{\frac{n}{cm^2}}$ | Neutron fluence | Scalar |
| $k_\mathrm{eff}$ | $1$ | Effective neutron multiplication factor | Scalar |
| $\rho$ | $1$ | Reactivity | Scalar |
The unit symbol $\mathrm{sr}$ stands for the solid angle unit steradian , $\mathrm{W}$ for the power unit watt and $\mathrm{d}$ for the time unit day . In the last column of the table the variable type (scalar or vector) of the respective quantity is indicated. With the exception of the neutron flux density $\vec{J}$, all the quantities listed here are of the scalar type , such as a mass density, for example.
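Some of the tabulated quantities are linked by simple relations: the macroscopic cross section is $\Sigma = N\sigma$, the nuclear reaction rate density is $r = \Sigma\,\Phi$, and the reactivity follows from the multiplication factor as $\rho = (k_\mathrm{eff}-1)/k_\mathrm{eff}$. These are standard definitions; the numbers in the sketch below are purely illustrative.

```python
# Illustrative values (assumed, order of magnitude only):
N     = 4.8e22   # atomic number density of 235U metal, a/cm^3
sigma = 580e-24  # thermal fission cross section, cm^2 (about 580 barn)
phi   = 3e13     # neutron flux, n/(cm^2 s)

Sigma = N * sigma        # macroscopic cross section, 1/cm
r     = Sigma * phi      # nuclear reaction rate density, r/(cm^3 s)

k_eff = 1.002                  # slightly supercritical reactor (assumed)
rho   = (k_eff - 1) / k_eff    # reactivity, dimensionless

print(Sigma, r, rho)
```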
## PHYSOR
Physics of Reactors (PHYSOR) conferences , organized by the American Nuclear Society together with other international forums, take place every two years. They bring reactor physicists together to share global expertise in reactor physics, nuclear reactor research and analysis, and related fields. The conference topics of PHYSOR 2018 were similar to those listed in the following section as sub-areas of reactor physics .
## Sub-areas of reactor physics
There is no generally binding subdivision of reactor physics, as becomes clear when comparing the tables of contents of the standard textbooks listed below . The differences can be understood if one compares the subdivision made in the PHYSOR conferences , for example, with the chapter headings of the Stacey monograph, which can be done with the excerpt from Google Books .
Following the PHYSOR conferences , reactor physics can be subdivided into the following sub-areas:
### Reactor analysis
Reactor analysis is dedicated to the basic tasks of reactor physics. This sub-area defines the physical quantities that are relevant for the whole of reactor physics. Based on this, reactor theorists developed, and continue to develop, the physical and numerical-mathematical apparatus with which the distribution of neutrons within a spatial region can be described and calculated. The spatial region can be a part of the nuclear reactor (“cell calculation”) or it can include the reactor as a whole and its immediate surroundings (“global reactor calculation”).
The central task is to determine the distribution of neutrons in this area of space according to location, energy and direction of neutron flight, as well as depending on the selected point in time. In particular, reactor analysis includes the development of numerical solution methods for the basic equations of reactor physics. The approximation methods used differ significantly from reactor type to reactor type and are constantly being further developed.
It is "easier to derive the neutron transport equation (it requires the concept of neutron conservation plus a little vector calculus) than to understand the neutron diffusion equation, which is used in most developments in reactor analysis." In practical implementation (program scope, computing times) it is exactly the opposite.
### Experimental reactor physics
Since the beginning of nuclear energy, numerous experiments on nuclear energy and nuclear technology have been carried out worldwide in various research laboratories, mainly at research reactors . A meeting report of the Leibniz Society gives an overview of experiments at the Rossendorf Research Reactor and the Rossendorf Ring Zone Reactor. Zero-power reactors ("critical facilities") such as SNEAK were and are crucial for the neutron-physics development of certain reactor types; they enable the measurement of the spatial neutron flux and power distribution in a planned reactor core, as well as of control rod reactivity values, conversion rates, neutron spectra and various reactivity coefficients, especially the coolant loss coefficient.
In 1999 the International Reactor Physics Experiment Evaluation (IRPhE) project was initiated as a pilot project of the NEA. Experimental data on reactor physics have been preserved since 2003, including measurement methods and data for applications in nuclear energy, as well as the knowledge and skills contained therein. The most important printed publication is the annual International Handbook of Evaluated Reactor Physics Benchmark Experiments.
### Deterministic transport theory
Deterministic transport theory , which includes neutron diffusion theory, divides the independent variables of the transport equation (the spatial region, the energy and, if necessary, the neutron flight direction) into discrete parts ( discretization ) and solves the resulting systems of difference equations numerically. The focus is primarily on the critical, i.e. stationary, reactor . However, changes over longer periods of time also belong in this sub-area, whereby the quantity burn-up is used instead of the independent variable time . This includes calculations of the energy spectrum of the neutrons and the generation of multi-group cross sections, as well as lattice and cell problems.
### Monte Carlo methods
What is now called the Monte Carlo method or Monte Carlo simulation was invented by a mathematician in the context of neutron transport. With a Monte Carlo method, now widely used in other areas as well, life paths of particles are simulated. The particle is followed from its appearance in a given spatial region ( birth in or entry into the region) through all nuclear processes within the region up to its disappearance from this region ( death or exit from the region). The geometry and material distribution of the spatial region and the nuclear data belong to the input data. Using the probability distribution of each event, each phase of the particle's life can be statistically tracked and recorded by means of pseudo-random numbers . A well-known computer program based on the Monte Carlo method is MCNP .
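The life-path idea can be illustrated by a toy one-speed simulation in a one-dimensional slab (a minimal sketch with assumed cross sections, not MCNP): sample each free path from an exponential distribution with mean $1/\Sigma_t$, decide at each collision between absorption and scattering, and tally how each neutron's life ends.

```python
import random

def simulate(n_neutrons, slab_width, sigma_t, sigma_a, seed=1):
    """Toy one-speed Monte Carlo in a 1D slab [0, slab_width].
    Each neutron starts at x=0 moving in +x; free paths are sampled
    from an exponential with mean 1/sigma_t; a collision absorbs with
    probability sigma_a/sigma_t, otherwise the flight direction is
    resampled (+1 or -1, the 1D "rod model" of isotropic scattering)."""
    rng = random.Random(seed)
    absorbed = leaked = 0
    for _ in range(n_neutrons):
        x, mu = 0.0, 1.0
        while True:
            x += mu * rng.expovariate(sigma_t)    # sample free path
            if x < 0.0 or x > slab_width:         # death by leakage
                leaked += 1
                break
            if rng.random() < sigma_a / sigma_t:  # death by absorption
                absorbed += 1
                break
            mu = rng.choice((1.0, -1.0))          # scattering

    return absorbed, leaked

# Assumed data: 5 mean free paths thick, half of all collisions absorb.
a, l = simulate(10_000, slab_width=5.0, sigma_t=1.0, sigma_a=0.5)
print(a, l, a / (a + l))
```

With these assumed cross sections, most histories end in absorption; refining the statistics only requires more histories, which is the practical appeal and cost of the method.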
### Fuel cycle
In reactor physics (theoretical aspects) and nuclear technology (practice), the fuel cycle denotes all work steps and processes that serve to supply and dispose of radioactive substances. The respective neutron physical investigations, such as criticality calculations for the safe interim storage of spent fuel elements, belong to the field of work of reactor physics and reactor technology.
### Transient and safety analysis
In addition to the investigation of stationary and quasi-stationary states of the nuclear reactor, reactor physics and technology also include the investigation of states in which the effective neutron multiplication factor is not equal to 1. Neutron flux and reactor power are then time-dependent, and a changed reactor power in turn changes the effective neutron multiplication factor via the temperature coefficient. Time-dependent states of the reactor, known as transients , play a major role in reactor accidents. They are divided into several categories: for example, the design basis accident of a nuclear power plant, for which the safety systems are designed; loss-of-coolant accidents , caused by the leakage of coolant from the cooling circuit; or reactivity accidents, triggered by an accidental insertion of reactivity leading to a power excursion. For the safety of nuclear power plants , especially with new reactor concepts, the reactor-physics investigations are decisive.
### Nuclear data (core data)
The reactor physicist needs, as input data for his computer programs, nuclear data for all nuclides that are present in a nuclear reactor when it is commissioned or that are formed in the course of operation through nuclear reactions. These nuclear data are mainly obtained from measurements; in almost no case can theoretical nuclear physics calculate these quantities with the accuracy that is required today for reactor physics calculations.
Cross sections for 6 nuclear reactions of the neutron with the atomic nucleus 235 U and their sum as a function of the kinetic energy of the neutrons. In the legend, z is sometimes used instead of the usual symbol n for the neutron (data source: JEFF; graphic representation: nuclear data viewer JANIS 4)
Nuclear data is therefore of fundamental importance, especially for reactor physicists and technicians, but it can also be of fundamental importance for biologists and doctors, for example. Nuclear data include the physical quantities of the radioactive decay properties, fission yields and interaction data ( cross sections , resonance parameters , energy and angular distributions ...) for different projectiles (neutrons, protons, etc.), and this over a wide energy range of these projectiles.
The nuclear data are stored in databases and are disseminated from there. Special formats exist for experimental data (EXFOR), estimated data (ENDF, JEFF, ENSDF), or processed data (PENDF, GENDF). However, the nuclear data are so varied and their quantity so large that a user will usually seek the help of an expert who specializes in nuclear data, usually a specialized reactor physicist. With the Java-based Nuclear Information Software (JANIS) visualization program, for example, it is possible for anyone to access numerical values from all of these databases and graphic representations without prior knowledge of the storage formats after a finite familiarization period.
The atomic masses fall into a second category of data, which strictly speaking does not belong to the nuclear data . They are required to calculate the number densities of all nuclides present in a spatial region, and they stand in for the nuclear masses . Evaluated atomic masses are published at longer intervals in an atomic mass evaluation .
### Reactor concepts
The research area of reactor concepts for power operation is by no means closed from the standpoint of reactor physics. The classic reactor types and a number of special types have been researched relatively well. Six fourth generation reactor types have been on the test bench since 2000:
• Fast gas-cooled reactor
• High temperature reactor
• Fast lead-cooled reactor
• Light water reactor with supercritical water as moderator, coolant and heat transfer medium
• Fast sodium-cooled reactor
• Molten salt reactor
### Research reactors
Research reactors are used for physical, nuclear and material engineering investigations and / or produce radionuclides for medicine and technology. The neutron radiation from the reactor is used and not the thermal energy. A well-known German research reactor is accordingly called a research neutron source. Research reactors are also used for training purposes. The operation of a research reactor requires detailed accompanying calculations for the physics of the reactor, especially if it is used in a variety of ways.
### Environmental impact of nuclear activities
For this area, more than 170 computer programs developed by reactor physicists were listed in the Environmental and Earth Sciences category of the NEA Computer Program Services in 2018.
## Reactor physics, reactor technology, nuclear technology
Reactor physics and reactor technology relate to one another like physics and technology in general. Planning, design , construction, operation and decommissioning of a nuclear reactor are largely the responsibility of reactor technology. Nuclear technology includes reactor technology, but also the techniques of nuclear medicine and radiotherapy as well as various applications of radioactivity .
Some textbooks that carry neutron physics in the title also devote themselves mostly to reactor physics, and less to the physics of the neutron itself (neutron structure) or, say, to the physics of neutron-induced nuclear reactions in AGB stars .
## Computer programs for reactor physics
In addition to the already mentioned Monte Carlo program MCNP for the simulation of nuclear processes, there is the deterministic neutron diffusion program PDQ. It is a two-dimensional reactor design program, written in the Fortran programming language (which established itself as the standard programming language of reactor physics), and was published in 1957. PDQ calculates a discrete numerical approximation of the neutron flux from the time-independent neutron diffusion equations for a few energy groups for a heterogeneous reactor in a two-dimensional rectangular region. The independent position variables are either Cartesian coordinates $(x, y)$ or cylindrical coordinates $(r, z)$.
The PDQ program was the model for dozens of computer programs with the same objective worldwide. It was further developed over decades and (like all fine lattice neutron diffusion programs ) only lost its dominant position in reactor physics after the development of so-called nodal diffusion programs . The development work on this program is still considered a milestone in computer-aided numerical mathematics .
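A one-dimensional, one-group analogue of what such diffusion programs do can be sketched in a few lines (an illustration of the method, not PDQ itself; all cross sections are assumed): discretize $-D\,\Phi'' + \Sigma_a \Phi = \frac{1}{k_\mathrm{eff}}\nu\Sigma_f\Phi$ on a slab with zero-flux boundaries and power-iterate on the fission source.

```python
def solve_keff(width, n, D, sigma_a, nu_sigma_f, iters=200):
    """Minimal 1D one-group diffusion eigenvalue solve (illustrative,
    not PDQ): finite differences on a slab with zero flux at both
    boundaries, power iteration on the fission source for k_eff."""
    h = width / (n + 1)                 # mesh spacing, n interior nodes
    diag = 2.0 * D / h**2 + sigma_a     # tridiagonal main diagonal
    off = -D / h**2                     # sub- and super-diagonal
    phi = [1.0 / n] * n                 # initial flux guess
    k = 1.0
    for _ in range(iters):
        src = [nu_sigma_f * p / k for p in phi]     # fission source / k
        # Thomas algorithm for the constant tridiagonal system.
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = off / diag, src[0] / diag
        for i in range(1, n):
            m = diag - off * cp[i - 1]
            cp[i] = off / m
            dp[i] = (src[i] - off * dp[i - 1]) / m
        phi_new = [0.0] * n
        phi_new[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            phi_new[i] = dp[i] - cp[i] * phi_new[i + 1]
        total = sum(phi_new)
        k *= total / sum(phi)           # update eigenvalue estimate
        phi = [p / total for p in phi_new]   # renormalize flux
    return k, phi

# Assumed homogeneous slab data: width 50 cm, D=1, Sigma_a=0.07, nu*Sigma_f=0.10.
k, phi = solve_keff(width=50.0, n=49, D=1.0, sigma_a=0.07, nu_sigma_f=0.10)
print(round(k, 4))  # close to the analytic 0.10 / (0.07 + (pi/50)**2)
```

The converged flux takes the expected cosine-like shape, symmetric about the slab center; real fine-lattice programs do the same thing in two dimensions, with several energy groups and spatially varying cross sections.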
In the NEA's Computer Program Services library , predominantly, but not exclusively, reactor physics programs are collected, tested and passed on free of charge to institutes and universities of the member states of the OECD. In the reactor physics category Static Design Studies alone, 60 programs in the Fortran programming language are listed.
## Eminent reactor physicists
Eugene Wigner (left) and Alvin Weinberg at the Oak Ridge National Laboratory
Since 1990, the Eugene P. Wigner Reactor Physicist Award has been presented annually by the American Nuclear Society to reactor physicists for outstanding achievements. It is named in honor of Eugene Paul Wigner , who was also the first prize winner. The second laureate, reactor physicist Alvin M. Weinberg , became known among reactor physicists around the world for the textbook The physical theory of neutron chain reactors, which he wrote together with his teacher Wigner. From 1955 to 1973 he was director of the Oak Ridge National Laboratory (ORNL).
In 1973 Weinberg was dismissed by the Nixon administration as head of the ORNL because he had advocated a high level of nuclear safety and the molten salt reactor (MSR), whose development Weinberg had promoted since 1955; the MSR program was stopped. There was a brief revival of MSR research at the ORNL as part of the Carter administration's nonproliferation program, with a final publication that is still considered by many to be the reference design for commercial molten salt reactors.
Rudolf Schulten explains the fuel element of a pebble bed reactor
In Germany, Karl Wirtz and Rudolf Schulten should be mentioned. Wirtz had already worked with Heisenberg in the German uranium project , designed and managed the construction of the first successful German research reactor FR 2 , was a co-founder of breeder reactor development in Europe and a professor at the Technical University of Karlsruhe . Schulten also taught reactor physics at this university and in 1960, together with Wernfried Güth, wrote a textbook on reactor physics. Schulten took up the idea of the pebble bed reactor from Farrington Daniels .
## Institutes and universities dealing with reactor physics
In Germany, reactor physics and related areas are worked on and taught in Aachen, Dresden and Karlsruhe, among others.
In Switzerland, courses of study that include the subject of reactor physics are offered at the federal institutes of technology in Zurich and Lausanne. In Austria, the Vienna University of Technology offers a corresponding degree program.
In France, reactor physics and engineering is taught in the Nuclear Reactor Physics and Engineering department at the University of Paris-Saclay . In France, the Nuclear Reactor Physics Group at the University of Grenoble Alpes should also be mentioned.
In the Netherlands, the Reactor Physics and Nuclear Materials department at Delft University of Technology is the only academic group that trains and conducts research in the field of reactor physics.
## Literature
### Standard textbooks
There are well over a hundred textbooks on reactor physics, reactor theory, and reactor analysis. The standard textbooks listed here were selected after a survey of reactor physicists.
• Samuel Glasstone, Milton C. Edlund: The Elements of Nuclear Reactor Theory. MacMillan, London 1952 (VII, 416 pp., online). This monograph occupies a prominent position because, like no other, it shaped the then-young generation of reactor physicists in West and East, as well as the later textbook writers. The sixth printing from February 1957 is fully available online, with full-text search. Translation: Samuel Glasstone, Milton C. Edlund: Nuclear Reactor Theory. An Introduction. Springer, Vienna 1961, 340 pp.
• Alvin M. Weinberg , Eugene Paul Wigner : The physical theory of neutron chain reactors . Univ. of Chicago Press, Chicago 1958, ISBN 0-226-88517-8 (XII, 800 pages).
• John R. Lamarsh: Introduction to nuclear reactor theory . Addison-Wesley, Reading, Mass. 1966 (XI, 585 pp.).
• George I. Bell, Samuel Glasstone: Nuclear reactor theory . Van Nostrand Reinhold, New York 1970 (XVIII, 619 pp.).
• James J. Duderstadt, Louis J. Hamilton: Nuclear reactor analysis . Wiley, New York 1976, ISBN 978-0-471-22363-4 (xvii, 650 pages).
• Rudi JJ Stammler, Máximo J. Abbate: Methods of steady-state reactor physics in nuclear design . Acad. Press, London 1983, ISBN 0-12-663320-7 (XVI, 506 pages).
• Apollon Nikolayevich Klimov: Ядерная физика и ядерные реакторы (Nuclear Physics and Nuclear Reactors). Atomizdat, Moscow 1971 (384 pp.).
• Paul Reuss: Neutron physics . EDP Sciences, Les Ulis, France 2008, ISBN 978-2-7598-0041-4 (xxvi, 669 pages).
• Elmer E. Lewis: Fundamentals of nuclear reactor physics . Academic Press, Amsterdam, Heidelberg 2008, ISBN 978-0-12-370631-7 (XV, 293 pages).
• Weston M. Stacey: Nuclear Reactor Physics . Wiley, 2018, ISBN 978-3-527-81230-1 ( limited preview in Google Book Search).
### Textbooks in German
• Ferdinand Cap: Physics and technology of nuclear reactors. Springer, Vienna 1957 (XXIX, 487 pages, limited preview in the Google book search [accessed on August 21, 2018]). This book grew out of lectures the author had given at the University of Innsbruck beginning with the 1950/51 academic year.
• Karl Wirtz, Karl H. Beckurts: Elementare Neutronenphysik. Springer, Berlin 1958 (VIII, 243 pp., limited preview in the Google book search [accessed on August 21, 2018]).
• Aleksey D. Galanin: Theory of thermal nuclear reactors . Teubner, Leipzig 1959 (XII, 382 pages). The original monograph was published in Russian in 1957 and in 1960 by Pergamon Press in English translation under the title Thermal reactor theory.
• Rudolf Schulten, Wernfried Güth: Reaktorphysik. Bibliographisches Institut, Mannheim 1960 (171 pages).
• John J. Syrett: Reactor Theory . Vieweg, Braunschweig 1960 (VIII, 107 pp.).
• Josef Fassbender: Introduction to reactor physics . Thiemig, Munich 1967 (VIII, 146 pp.).
• Dieter Emendörfer, Karl-Heinz Höcker : Theory of nuclear reactors . Bibliographisches Institut, Mannheim, Vienna, Zurich 1970 (380 pages).
### Textbooks on reactor technology
• Werner Oldekop: Introduction to nuclear reactor and nuclear power plant technology. Part I: Fundamentals of nuclear physics, reactor physics, reactor dynamics . Thiemig, Munich 1975, ISBN 3-521-06093-4 (272 pages).
• Dieter Smidt: Reactor technology . 2nd Edition. Braun, Karlsruhe 1976, ISBN 3-7650-2019-2 (XVI, 325 pages).
• Albert Ziegler: Textbook of reactor technology . Springer, Berlin, Heidelberg 1983, ISBN 3-540-12198-6 (XI, 242 pages).
• Albert Ziegler, Hans-Josef Allelein (Eds.): Reactor technology: physical-technical basics. 2nd, revised edition. Springer Vieweg, Berlin 2013, ISBN 978-3-642-33846-5 (634 pages, limited preview in the Google book search [accessed on August 21, 2018]).
Sum of ordinates mean value of functions
https://www.physicsforums.com/threads/sum-of-ordinates-mean-value-of-functions.782901/
1. Nov 18, 2014
Appleton
I am having trouble deciphering the opening gambit of an explanation of mean values of functions. It begins as follows:
"Consider the part of the curve y = f(x) for values of x in the range a ≤ x ≤ b."
A graph is shown with a curve cutting the x axis at c with a shaded positive area bounded by the curve and the line x=a to the left of c and a shaded negative area bounded by the curve and the line x = b to the right of c.
"The mean value of y in this range is the average value of y for that part of the curve.
The sum of the ordinates (ie values of y) between x= a and x = c occupies the shaded area above the x axis and is positive.
This area is $\int_a^c y \, dx$.
Hence the sum of the ordinates between x = a and x = c is $\int_a^c y \, dx$."
I understand that an ordinate is the value of y. But are the ordinates taken at integer values of x, or at continuous values of x? I don't see how the sum of ordinates is equal to the value of the area under the curve. I must have misunderstood the definition of the sum of the ordinates.
I can see how the sum of the continuous ordinates multiplied by change in x as change in x goes to 0 might equal the area under the curve.
Sorry if my terminology and description of the graph leave a lot to be desired.
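As a side note, the limiting idea in the question (the sum of ordinates multiplied by the change in x, as that change goes to 0) can be checked numerically. A minimal sketch; the function y = x² and the interval [0, 1] are chosen purely for illustration:

```python
# Sum of ordinates y_i times the width dx: as dx shrinks, the sum
# approaches the area under the curve (here y = x^2 on [0, 1]).
def riemann_sum(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx  # left-endpoint ordinates

f = lambda x: x ** 2
exact = 1.0 / 3.0  # integral of x^2 from 0 to 1

coarse = riemann_sum(f, 0.0, 1.0, 10)
fine = riemann_sum(f, 0.0, 1.0, 100000)
print(coarse, fine)  # the fine sum is far closer to 1/3 than the coarse one
```

With 10 ordinates the sum is noticeably below the true area; with 100,000 it agrees to several decimal places, which is exactly the limit the question gestures at.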
2. Nov 18, 2014
Stephen Tashi
It's indeed meaningless to speak of the sum of all the y coordinates of the graph of a function that is defined on an interval of real numbers, unless you define "sum" to be something besides an ordinary arithmetic sum.
To argue the relation between an integral and a mean value in a better way, consider that "mean mass per unit length" is defined by a relation such as (total length of interval) × (mean mass per unit length) = total mass.
Think of f(x) as being a "mass density". Then you just need to understand why the integral of a mass density over an interval is the total mass of the interval.
Your book could have said: "Think of $f(x)$ as being a density of something. Then $\int_a^b f(x) dx$ is the total something in the interval $[a,b]$, and $\frac{\int_a^b f(x) dx}{b-a}$ is the mean something per unit length = the mean density."
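The mass-density analogy above can be sketched numerically; the linear density below is a made-up example, and a midpoint rule stands in for the integral:

```python
# Mean value of a function as "total mass / total length": integrate a
# linear mass density rho(x) over [a, b], then divide by the length b - a.
def integrate(f, a, b, n=10000):
    dx = (b - a) / n
    # midpoint rule; exact for linear integrands up to rounding
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

rho = lambda x: 2.0 + x   # made-up density of a rod on [0, 4]
a, b = 0.0, 4.0

total_mass = integrate(rho, a, b)    # integral of (2 + x) over [0, 4] = 16
mean_density = total_mass / (b - a)  # mean value of rho on [0, 4] = 4
print(total_mass, mean_density)
```

The mean density 4 is just the average of the endpoint densities 2 and 6, as expected for a linearly varying density.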
3. Nov 20, 2014
https://www.ideals.illinois.edu/handle/2142/50837 | ## Files in this item
FilesDescriptionFormat
application/vnd.openxmlformats-officedocument.presentationml.presentation
FD03_Presentation.pptx (3MB)
PresentationMicrosoft PowerPoint 2007
application/pdf
FD03_Abstract.pdf (18kB)
AbstractPDF
text/plain
FD03_Abstract.txt (1kB)
AbstractText file
## Description
Title: Broadband Microwave Spectrum And Structure Of Cyclopropyl Cyanosilane
Author(s): Seifert, Nathan A
Contributor(s): Groner, Peter; Durig, James R.; Overby, Jason S; Guirgis, Gamil A; Pate, Brooks; Lobsiger, Simon
Subject(s): Chirped pulse
Abstract: The structure of cyclopropane cyanosilane has been studied using chirped-pulse Fourier transform microwave (CP-FTMW) spectroscopy in the 6.5-18 GHz band. Two conformers of similar intensity were detected, one with a gauche orientation of the cyanosilane group with respect to the plane of the ring, and the other with a staggered conformation. The sensitivity of the CP-FTMW experiment was sufficient to assign spectra for all common singly-substituted heavy atom isotopologues ($^{13}$C, $^{29/30}$Si, $^{15}$N) for each conformer, resulting in a full heavy atom Kraitchman structure of the molecule in good agreement with the predicted structure. Additionally, the hyperfine effects have been analyzed for the $^{14}$N-containing parent species. Results will also be presented on the potential tunneling spectrum arising from the symmetric double-well torsional potential of the gauche conformer. Some observed transitions, especially with frequencies near the upper end of the measured band, exhibit splittings that could potentially be associated with a tunneling splitting. However, the resolution is not sufficient to provide a complete quantitative analysis of this effect.
Issue Date: 2014-06-20
Publisher: International Symposium on Molecular Spectroscopy
Citation Info: Seifert, N.A.; Groner, P.; Durig, J.R.; Overby, J.S.; Guirgis, G.A.; Pate, B.; Lobsiger, S. BROADBAND MICROWAVE SPECTRUM AND STRUCTURE OF CYCLOPROPYL CYANOSILANE. Proceedings of the International Symposium on Molecular Spectroscopy, Urbana, IL, June 16-21, 2014.
DOI: 10.15278/isms.2014.FD03
Genre: CONFERENCE PAPER/PRESENTATION
Type: Text
Language: English
URI: http://hdl.handle.net/2142/50837
Rights Information: Copyright 2014 by the authors. Licensed under a Creative Commons Attribution 4.0 International License. http://creativecommons.org/licenses/by/4.0/
Date Available in IDEALS: 2014-09-17; 2015-04-14
https://www.hellenicaworld.com/Science/Mathematics/en/AxiomOfChoice.html | ### - Art Gallery -
In mathematics, the axiom of choice, or AC, is an axiom of set theory equivalent to the statement that a Cartesian product of a collection of non-empty sets is non-empty. Informally put, the axiom of choice says that given any collection of bins, each containing at least one object, it is possible to make a selection of exactly one object from each bin, even if the collection is infinite. Formally, it states that for every indexed family $$(S_{i})_{i\in I}$$ of nonempty sets there exists an indexed family $$(x_{i})_{i\in I}$$ of elements such that $$x_{i}\in S_{i}$$ for every $$i\in I$$. The axiom of choice was formulated in 1904 by Ernst Zermelo in order to formalize his proof of the well-ordering theorem.[1]
Illustration of the axiom of choice, with each Si and xi represented as a jar and a colored marble, respectively
(Si) is an infinite family of sets indexed over the real numbers R; that is, there is a set Si for each real number i, with a small sample shown above. Each set contains at least one, and possibly infinitely many, elements. The axiom of choice allows us to arbitrarily select a single element from each set, forming a corresponding family of elements (xi) also indexed over the real numbers, with xi drawn from Si. In general, the collections may be indexed over any set I, not just R.
In many cases, such a selection can be made without invoking the axiom of choice; this is in particular the case if the number of sets is finite, or if a selection rule is available – some distinguishing property that happens to hold for exactly one element in each set. An illustrative example is sets picked from the natural numbers. From such sets, one may always select the smallest number, e.g. given the sets {{4, 5, 6}, {10, 12}, {1, 400, 617, 8000}} the set containing the smallest elements is {4, 10, 1}. In this case, "select the smallest number" is a choice function. Even if infinitely many sets were collected from the natural numbers, it will always be possible to choose the smallest element from each set to produce a set. That is, the choice function provides the set of chosen elements. However, no choice function is known for the collection of all non-empty subsets of the real numbers (if there are non-constructible reals). In that case, the axiom of choice must be invoked.
Bertrand Russell coined an analogy: for any (even infinite) collection of pairs of shoes, one can pick out the left shoe from each pair to obtain an appropriate selection; this makes it possible to directly define a choice function. For an infinite collection of pairs of socks (assumed to have no distinguishing features), there is no obvious way to make a function that selects one sock from each pair, without invoking the axiom of choice.[2]
Although originally controversial, the axiom of choice is now used without reservation by most mathematicians,[3] and it is included in the standard form of axiomatic set theory, Zermelo–Fraenkel set theory with the axiom of choice (ZFC). One motivation for this use is that a number of generally accepted mathematical results, such as Tychonoff's theorem, require the axiom of choice for their proofs. Contemporary set theorists also study axioms that are not compatible with the axiom of choice, such as the axiom of determinacy. The axiom of choice is avoided in some varieties of constructive mathematics, although there are varieties of constructive mathematics in which the axiom of choice is embraced.
## Statement
A choice function is a function f, defined on a collection X of nonempty sets, such that for every set A in X, f(A) is an element of A. With this concept, the axiom can be stated:
Axiom — For any set X of nonempty sets, there exists a choice function f defined on X.
Formally, this may be expressed as follows:
$${\displaystyle \forall X\left[\varnothing \notin X\implies \exists f\colon X\rightarrow \bigcup X\quad \forall A\in X\,(f(A)\in A)\right]\,.}$$
Thus, the negation of the axiom of choice states that there exists a collection of nonempty sets that has no choice function.
Each choice function on a collection X of nonempty sets is an element of the Cartesian product of the sets in X. This is not the most general situation of a Cartesian product of a family of sets, where a given set can occur more than once as a factor; however, one can focus on elements of such a product that select the same element every time a given set appears as factor, and such elements correspond to an element of the Cartesian product of all distinct sets in the family. The axiom of choice asserts the existence of such elements; it is therefore equivalent to:
Given any family of nonempty sets, their Cartesian product is a nonempty set.
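For a finite family of finite sets, this equivalence can be seen directly and without any axiom; a minimal sketch in which the family is an arbitrary example:

```python
from itertools import product

# For finitely many nonempty sets, the Cartesian product is nonempty,
# and each element of the product is a choice: coordinate i is the
# element selected from family[i].
family = [{4, 5, 6}, {10, 12}, {1, 400, 617, 8000}]

tuples = list(product(*family))
assert tuples  # nonempty product: a choice exists

choice = tuples[0]
assert all(x in s for x, s in zip(choice, family))
print(len(tuples))  # 24 tuples: 3 * 2 * 4 possible choice functions
```

Each tuple in the product selects one element per set, which is exactly the correspondence between elements of the product and choice functions described above.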
## Nomenclature ZF, AC, and ZFC
In this article and other discussions of the Axiom of Choice the following abbreviations are common:
• AC – the Axiom of Choice.
• ZF – Zermelo–Fraenkel set theory omitting the Axiom of Choice.
• ZFC – Zermelo–Fraenkel set theory, extended to include the Axiom of Choice.
## Variants
There are many other equivalent statements of the axiom of choice. These are equivalent in the sense that, in the presence of other basic axioms of set theory, they imply the axiom of choice and are implied by it.
One variation avoids the use of choice functions by, in effect, replacing each choice function with its range.
Given any set X of pairwise disjoint non-empty sets, there exists at least one set C that contains exactly one element in common with each of the sets in X.[4]
This guarantees for any partition of a set X the existence of a subset C of X containing exactly one element from each part of the partition.
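For a finite partition, such a set C can be exhibited explicitly because a selection rule (here, taking the least element) is available; the partition below is illustrative:

```python
# Build a transversal: a set C meeting each block of a partition
# (pairwise disjoint nonempty sets) in exactly one element.
partition = [{1, 2, 3}, {4, 5}, {6}]

# "Take the least element" is a distinguishing rule, so no axiom is needed.
C = {min(block) for block in partition}

for block in partition:
    assert len(C & block) == 1  # exactly one element in common with each block
print(sorted(C))  # [1, 4, 6]
```

The axiom of choice is needed precisely when no such distinguishing rule is available, as with arbitrary partitions of the reals.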
Another equivalent axiom only considers collections X that are essentially powersets of other sets:
For any set A, the power set of A (with the empty set removed) has a choice function.
Authors who use this formulation often speak of the choice function on A, but this is a slightly different notion of choice function. Its domain is the power set of A (with the empty set removed), and so makes sense for any set A, whereas with the definition used elsewhere in this article, the domain of a choice function on a collection of sets is that collection, and so only makes sense for sets of sets. With this alternate notion of choice function, the axiom of choice can be compactly stated as
Every set has a choice function.[5]
which is equivalent to
For any set A there is a function f such that for any non-empty subset B of A, f(B) lies in B.
The negation of the axiom can thus be expressed as:
There is a set A such that for all functions f (on the set of non-empty subsets of A), there is a B such that f(B) does not lie in B.
## Restriction to finite sets
The statement of the axiom of choice does not specify whether the collection of nonempty sets is finite or infinite, and thus implies that every finite collection of nonempty sets has a choice function. However, that particular case is a theorem of the Zermelo–Fraenkel set theory without the axiom of choice (ZF); it is easily proved by mathematical induction.[6] In the even simpler case of a collection of one set, a choice function just corresponds to an element, so this instance of the axiom of choice says that every nonempty set has an element; this holds trivially. The axiom of choice can be seen as asserting the generalization of this property, already evident for finite collections, to arbitrary collections.
## Usage
Until the late 19th century, the axiom of choice was often used implicitly, although it had not yet been formally stated. For example, after having established that the set X contains only non-empty sets, a mathematician might have said "let F(s) be one of the members of s for all s in X" to define a function F. In general, it is impossible to prove that F exists without the axiom of choice, but this seems to have gone unnoticed until Zermelo.
Not every situation requires the axiom of choice. For finite sets X, the axiom of choice follows from the other axioms of set theory. In that case it is equivalent to saying that if we have several (a finite number of) boxes, each containing at least one item, then we can choose exactly one item from each box. Clearly we can do this: We start at the first box, choose an item; go to the second box, choose an item; and so on. The number of boxes is finite, so eventually our choice procedure comes to an end. The result is an explicit choice function: a function that takes the first box to the first element we chose, the second box to the second element we chose, and so on. (A formal proof for all finite sets would use the principle of mathematical induction to prove "for every natural number k, every family of k nonempty sets has a choice function.") This method cannot, however, be used to show that every countable family of nonempty sets has a choice function, as is asserted by the axiom of countable choice. If the method is applied to an infinite sequence (Xi : i∈ω) of nonempty sets, a function is obtained at each finite stage, but there is no stage at which a choice function for the entire family is constructed, and no "limiting" choice function can be constructed, in general, in ZF without the axiom of choice.
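The box-by-box procedure described above can be sketched as a recursion, which terminates precisely because the number of boxes is finite (the boxes here are arbitrary examples):

```python
# Finite choice by induction: pick an item from the first box, then
# recursively choose from the remaining boxes; the procedure ends
# because the number of boxes is finite.
def choose(boxes):
    if not boxes:
        return []                # base case: empty family, empty selection
    item = next(iter(boxes[0]))  # any element of the nonempty first box
    return [item] + choose(boxes[1:])

boxes = [{'a', 'b'}, {1}, {(0, 0), (1, 1)}]
selection = choose(boxes)

# The resulting list is an explicit choice function: position i holds
# the element chosen from boxes[i].
assert all(x in box for x, box in zip(selection, boxes))
print(selection)
```

For an infinite sequence of boxes the recursion never bottoms out, mirroring the observation that no stage of the procedure yields a choice function for the entire family.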
## Examples
The nature of the individual nonempty sets in the collection may make it possible to avoid the axiom of choice even for certain infinite collections. For example, suppose that each member of the collection X is a nonempty subset of the natural numbers. Every such subset has a smallest element, so to specify our choice function we can simply say that it maps each set to the least element of that set. This gives us a definite choice of an element from each set, and makes it unnecessary to apply the axiom of choice.
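The selection rule described here, mapping each set to its least element, is an explicit choice function; a minimal sketch using example sets:

```python
# A definable choice function for nonempty sets of natural numbers:
# every such set has a least element, so no axiom is needed.
def choice(subset_of_naturals):
    return min(subset_of_naturals)

collection = [{4, 5, 6}, {10, 12}, {1, 400, 617, 8000}, {7}]

for s in collection:
    assert choice(s) in s  # the rule genuinely selects from each set
print([choice(s) for s in collection])  # [4, 10, 1, 7]
```

The same one-line rule works for any collection of nonempty subsets of the naturals, however large, because well-ordering supplies the distinguishing property.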
The difficulty appears when there is no natural choice of elements from each set. If we cannot make explicit choices, how do we know that our set exists? For example, suppose that X is the set of all non-empty subsets of the real numbers. First we might try to proceed as if X were finite. If we try to choose an element from each set, then, because X is infinite, our choice procedure will never come to an end, and consequently, we shall never be able to produce a choice function for all of X. Next we might try specifying the least element from each set. But some subsets of the real numbers do not have least elements. For example, the open interval (0,1) does not have a least element: if x is in (0,1), then so is x/2, and x/2 is always strictly smaller than x. So this attempt also fails.
Additionally, consider for instance the unit circle S, and the action on S by a group G consisting of all rational rotations. Namely, these are rotations by angles which are rational multiples of π. Here G is countable while S is uncountable. Hence S breaks up into uncountably many orbits under G. Using the axiom of choice, we could pick a single point from each orbit, obtaining an uncountable subset X of S with the property that all of its translates by G are disjoint from X. The set of those translates partitions the circle into a countable collection of disjoint sets, which are all pairwise congruent. Since X is not measurable for any rotation-invariant countably additive finite measure on S, finding an algorithm to select a point in each orbit requires the axiom of choice. See non-measurable set for more details.
The reason that we are able to choose least elements from subsets of the natural numbers is the fact that the natural numbers are well-ordered: every nonempty subset of the natural numbers has a unique least element under the natural ordering. One might say, "Even though the usual ordering of the real numbers does not work, it may be possible to find a different ordering of the real numbers which is a well-ordering. Then our choice function can choose the least element of every set under our unusual ordering." The problem then becomes that of constructing a well-ordering, which turns out to require the axiom of choice for its existence; every set can be well-ordered if and only if the axiom of choice holds.
## Criticism and acceptance
A proof requiring the axiom of choice may establish the existence of an object without explicitly defining the object in the language of set theory. For example, while the axiom of choice implies that there is a well-ordering of the real numbers, there are models of set theory with the axiom of choice in which no well-ordering of the reals is definable. Similarly, although a subset of the real numbers that is not Lebesgue measurable can be proved to exist using the axiom of choice, it is consistent that no such set is definable.[7]
The axiom of choice proves the existence of these intangibles (objects that are proved to exist, but which cannot be explicitly constructed), which may conflict with some philosophical principles.[8] Because there is no canonical well-ordering of all sets, a construction that relies on a well-ordering may not produce a canonical result, even if a canonical result is desired (as is often the case in category theory). This has been used as an argument against the use of the axiom of choice.
Another argument against the axiom of choice is that it implies the existence of objects that may seem counterintuitive.[9] One example is the Banach–Tarski paradox which says that it is possible to decompose the 3-dimensional solid unit ball into finitely many pieces and, using only rotations and translations, reassemble the pieces into two solid balls each with the same volume as the original. The pieces in this decomposition, constructed using the axiom of choice, are non-measurable sets.
Despite these seemingly paradoxical facts, most mathematicians accept the axiom of choice as a valid principle for proving new results in mathematics. The debate is interesting enough, however, that it is considered of note when a theorem in ZFC (ZF plus AC) is logically equivalent (with just the ZF axioms) to the axiom of choice, and mathematicians look for results that require the axiom of choice to be false, though this type of deduction is less common than the type which requires the axiom of choice to be true.
It is possible to prove many theorems using neither the axiom of choice nor its negation; such statements will be true in any model of ZF, regardless of the truth or falsity of the axiom of choice in that particular model. The restriction to ZF renders any claim that relies on either the axiom of choice or its negation unprovable. For example, the Banach–Tarski paradox is neither provable nor disprovable from ZF alone: it is impossible to construct the required decomposition of the unit ball in ZF, but also impossible to prove there is no such decomposition. Similarly, all the statements listed below which require choice or some weaker version thereof for their proof are unprovable in ZF, but since each is provable in ZF plus the axiom of choice, there are models of ZF in which each statement is true. Statements such as the Banach–Tarski paradox can be rephrased as conditional statements, for example, "If AC holds, then the decomposition in the Banach–Tarski paradox exists." Such conditional statements are provable in ZF when the original statements are provable from ZF and the axiom of choice.
## In constructive mathematics
As discussed above, in ZFC, the axiom of choice is able to provide "nonconstructive proofs" in which the existence of an object is proved although no explicit example is constructed. ZFC, however, is still formalized in classical logic. The axiom of choice has also been thoroughly studied in the context of constructive mathematics, where non-classical logic is employed. The status of the axiom of choice varies between different varieties of constructive mathematics.
In Martin-Löf type theory and higher-order Heyting arithmetic, the appropriate statement of the axiom of choice is (depending on approach) included as an axiom or provable as a theorem.[10] Errett Bishop argued that the axiom of choice was constructively acceptable, saying
A choice function exists in constructive mathematics, because a choice is implied by the very meaning of existence.[11]
In constructive set theory, however, Diaconescu's theorem shows that the axiom of choice implies the law of excluded middle (unlike in Martin-Löf type theory, where it does not). Thus the axiom of choice is not generally available in constructive set theory. A cause for this difference is that the axiom of choice in type theory does not have the extensionality properties that the axiom of choice in constructive set theory does.[12]
Some results in constructive set theory use the axiom of countable choice or the axiom of dependent choice, which do not imply the law of the excluded middle in constructive set theory. Although the axiom of countable choice in particular is commonly used in constructive mathematics, its use has also been questioned.[13]
Independence
In 1938,[14] Kurt Gödel showed that the negation of the axiom of choice is not a theorem of ZF by constructing an inner model (the constructible universe) which satisfies ZFC, thus showing that ZFC is consistent if ZF itself is consistent. In 1963, Paul Cohen employed the technique of forcing, developed for this purpose, to show that, assuming ZF is consistent, the axiom of choice itself is not a theorem of ZF. He did this by constructing a much more complex model which satisfies ZF¬C (ZF with the negation of AC added as an axiom), thus showing that ZF¬C is consistent.[15]
Together these results establish that the axiom of choice is logically independent of ZF. The assumption that ZF is consistent is harmless because adding another axiom to an already inconsistent system cannot make the situation worse. Because of independence, the decision whether to use the axiom of choice (or its negation) in a proof cannot be made by appeal to other axioms of set theory. The decision must be made on other grounds.
One argument given in favor of using the axiom of choice is that it is convenient to use it because it allows one to prove some simplifying propositions that otherwise could not be proved. Many theorems which are provable using choice are of an elegant general character: every ideal in a ring is contained in a maximal ideal, every vector space has a basis, and every product of compact spaces is compact. Without the axiom of choice, these theorems may not hold for mathematical objects of large cardinality.
The proof of the independence result also shows that a wide class of mathematical statements, including all statements that can be phrased in the language of Peano arithmetic, are provable in ZF if and only if they are provable in ZFC.[16] Statements in this class include the statement that P = NP, the Riemann hypothesis, and many other unsolved mathematical problems. When one attempts to solve problems in this class, it makes no difference whether ZF or ZFC is employed if the only question is the existence of a proof. It is possible, however, that there is a shorter proof of a theorem from ZFC than from ZF.
The axiom of choice is not the only significant statement which is independent of ZF. For example, the generalized continuum hypothesis (GCH) is not only independent of ZF, but also independent of ZFC. However, ZF plus GCH implies AC, making GCH a strictly stronger claim than AC, even though they are both independent of ZF.
Stronger axioms
The axiom of constructibility and the generalized continuum hypothesis each imply the axiom of choice and so are strictly stronger than it. In class theories such as Von Neumann–Bernays–Gödel set theory and Morse–Kelley set theory, there is an axiom called the axiom of global choice that is stronger than the axiom of choice for sets because it also applies to proper classes. The axiom of global choice follows from the axiom of limitation of size.
Equivalents
There are important statements that, assuming the axioms of ZF but neither AC nor ¬AC, are equivalent to the axiom of choice.[17] The most important among them are Zorn's lemma and the well-ordering theorem. In fact, Zermelo initially introduced the axiom of choice in order to formalize his proof of the well-ordering theorem.
Set theory
Well-ordering theorem: Every set can be well-ordered. Consequently, every cardinal has an initial ordinal.
Tarski's theorem about choice: For every infinite set A, there is a bijective map between the sets A and A×A.
Trichotomy: If two sets are given, then either they have the same cardinality, or one has a smaller cardinality than the other.
Given two non-empty sets, one has a surjection to the other.
The Cartesian product of any family of nonempty sets is nonempty.
König's theorem: Colloquially, the sum of a sequence of cardinals is strictly less than the product of a sequence of larger cardinals. (The reason for the term "colloquially" is that the sum or product of a "sequence" of cardinals cannot be defined without some aspect of the axiom of choice.)
Every surjective function has a right inverse.
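As a concrete illustration of the last item, here is a small Lean 4 sketch of ours (not from the article): the core primitive `Classical.choose`, which packages the axiom of choice, turns a surjectivity proof directly into a right inverse.

```lean
-- Sketch (our illustration): choice yields a right inverse of any surjection.
-- `Classical.choose` extracts a witness from an existence proof; this is
-- exactly where the axiom of choice enters, and why `rightInv` must be
-- marked noncomputable.
def Surjective {α β : Type} (f : α → β) : Prop :=
  ∀ b, ∃ a, f a = b

noncomputable def rightInv {α β : Type} (f : α → β) (h : Surjective f) : β → α :=
  fun b => Classical.choose (h b)

theorem rightInv_isRightInverse {α β : Type} (f : α → β) (h : Surjective f) :
    ∀ b, f (rightInv f h b) = b :=
  fun b => Classical.choose_spec (h b)
```

Conversely, a uniform way of producing right inverses yields a choice function for any family of nonempty sets, which is why this statement is fully equivalent to AC over ZF.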
Order theory
Zorn's lemma: Every non-empty partially ordered set in which every chain (i.e., totally ordered subset) has an upper bound contains at least one maximal element.
Hausdorff maximal principle: In any partially ordered set, every totally ordered subset is contained in a maximal totally ordered subset. The restricted principle "Every partially ordered set has a maximal totally ordered subset" is also equivalent to AC over ZF.
Tukey's lemma: Every non-empty collection of finite character has a maximal element with respect to inclusion.
Antichain principle: Every partially ordered set has a maximal antichain.
Abstract algebra
Every vector space has a basis.[18]
Krull's theorem: Every unital ring other than the trivial ring contains a maximal ideal.
For every non-empty set S there is a binary operation defined on S that gives it a group structure.[19] (A cancellative binary operation is enough, see group structure and the axiom of choice.)
Every set is a projective object in the category Set of sets.[20][21]
Functional analysis
The closed unit ball of the dual of a normed vector space over the reals has an extreme point.
Point-set topology
Tychonoff's theorem: Every product of compact topological spaces is compact.
In the product topology, the closure of a product of subsets is equal to the product of the closures.
Mathematical logic
If S is a set of sentences of first-order logic and B is a consistent subset of S, then B is included in a set that is maximal among consistent subsets of S. The special case where S is the set of all first-order sentences in a given signature is weaker, equivalent to the Boolean prime ideal theorem; see the section "Weaker forms" below.
Graph theory
Every connected graph has a spanning tree.[22]
Category theory
There are several results in category theory which invoke the axiom of choice for their proof. These results might be weaker than, equivalent to, or stronger than the axiom of choice, depending on the strength of the technical foundations. For example, if one defines categories in terms of sets, that is, as sets of objects and morphisms (usually called a small category), or even locally small categories, whose hom-objects are sets, then there is no category of all sets, and so it is difficult for a category-theoretic formulation to apply to all sets. On the other hand, other foundational descriptions of category theory are considerably stronger, and an identical category-theoretic statement of choice may be stronger than the standard formulation, à la class theory, mentioned above.
Examples of category-theoretic statements which require choice include:
Every small category has a skeleton.
If two small categories are weakly equivalent, then they are equivalent.
Every continuous functor on a small-complete category which satisfies the appropriate solution set condition has a left-adjoint (the Freyd adjoint functor theorem).
Weaker forms
There are several weaker statements that are not equivalent to the axiom of choice, but are closely related. One example is the axiom of dependent choice (DC). A still weaker example is the axiom of countable choice (ACω or CC), which states that a choice function exists for any countable set of nonempty sets. These axioms are sufficient for many proofs in elementary mathematical analysis, and are consistent with some principles, such as the Lebesgue measurability of all sets of reals, that are disprovable from the full axiom of choice.
Other choice axioms weaker than the axiom of choice include the Boolean prime ideal theorem and the axiom of uniformization. The former is equivalent in ZF to the ultrafilter lemma (every filter can be extended to an ultrafilter), proved by Tarski in 1930.
Results requiring AC (or weaker forms) but weaker than it
One of the most interesting aspects of the axiom of choice is the large number of places in mathematics that it shows up. Here are some statements that require the axiom of choice in the sense that they are not provable from ZF but are provable from ZFC (ZF plus AC). Equivalently, these statements are true in all models of ZFC but false in some models of ZF.
Set theory
Any union of countably many countable sets is itself countable (because it is necessary to choose an enumeration of each of the countably many sets).
If the set A is infinite, then there exists an injection from the natural numbers N to A (see Dedekind infinite).[23]
Eight definitions of a finite set are equivalent.[24]
Every infinite game $G_S$ in which $S$ is a Borel subset of Baire space is determined.
Measure theory
The Vitali theorem on the existence of non-measurable sets, which states that there is a subset of the real numbers that is not Lebesgue measurable.
The Lebesgue measure of a countable disjoint union of measurable sets is equal to the sum of the measures of the individual sets.
Algebra
Every field has an algebraic closure.
Every field extension has a transcendence basis.
Stone's representation theorem for Boolean algebras needs the Boolean prime ideal theorem.
The Nielsen–Schreier theorem, that every subgroup of a free group is free.
The additive groups of R and C are isomorphic.[25][26]
Functional analysis
The Hahn–Banach theorem in functional analysis, allowing the extension of linear functionals
The theorem that every Hilbert space has an orthonormal basis.
The Banach–Alaoglu theorem about compactness of sets of functionals.
The Baire category theorem about complete metric spaces, and its consequences, such as the open mapping theorem and the closed graph theorem.
On every infinite-dimensional topological vector space there is a discontinuous linear map.
General topology
A uniform space is compact if and only if it is complete and totally bounded.
Every Tychonoff space has a Stone–Čech compactification.
Mathematical logic
Gödel's completeness theorem for first-order logic: every consistent set of first-order sentences has a completion. That is, every consistent set of first-order sentences can be extended to a maximal consistent set.
Possibly equivalent implications of AC
There are several historically important set-theoretic statements implied by AC whose equivalence to AC is open. The partition principle (PP), which was formulated before AC itself, was cited by Zermelo as a justification for believing AC. In 1906 Russell declared PP to be equivalent to AC, but whether the partition principle implies AC is still the oldest open problem in set theory, and the equivalences of the other statements are similarly hard old open problems. In every known model of ZF where choice fails, these statements fail too, but it is unknown if they can hold without choice.
Set theory
Partition principle: if there is a surjection from A to B, there is an injection from B to A. Equivalently, every partition P of a set S is less than or equal to S in size.
Converse Schröder–Bernstein theorem: if two sets have surjections to each other, they are equinumerous.
Weak partition principle: A partition of a set S cannot be strictly larger than S. If WPP holds, this already implies the existence of a non-measurable set. Each of the previous three statements is implied by the preceding one, but it is unknown if any of these implications can be reversed.
There is no infinite decreasing sequence of cardinals. The equivalence was conjectured by Schoenflies in 1905.
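The first three statements above line up in a chain of implications, none of which is known to reverse in ZF; schematically:

```latex
% AC implies the partition principle (PP), which implies the converse
% Schröder–Bernstein theorem (CSB), which in turn implies the weak
% partition principle (WPP); all reverse implications are open.
\[
  \mathsf{AC} \;\Longrightarrow\; \mathsf{PP}
  \;\Longrightarrow\; \mathsf{CSB}
  \;\Longrightarrow\; \mathsf{WPP}
\]
```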
Abstract algebra
Hahn embedding theorem: Every ordered abelian group $G$ order-embeds into a subgroup of the additive group $\mathbb{R}^{\Omega}$ endowed with the lexicographical order, where $\Omega$ is the set of Archimedean equivalence classes of $G$. This equivalence was conjectured by Hahn in 1907.
Stronger forms of the negation of AC
If we abbreviate by BP the claim that every set of real numbers has the property of Baire, then BP is stronger than ¬AC, which asserts only the nonexistence of a choice function for perhaps a single collection of nonempty sets. Strengthened negations may be compatible with weakened forms of AC. For example, ZF + DC[27] + BP is consistent, if ZF is.
It is also consistent with ZF + DC that every set of reals is Lebesgue measurable; however, this consistency result, due to Robert M. Solovay, cannot be proved in ZFC itself, but requires a mild large cardinal assumption (the existence of an inaccessible cardinal). The much stronger axiom of determinacy, or AD, implies that every set of reals is Lebesgue measurable, has the property of Baire, and has the perfect set property (all three of these results are refuted by AC itself). ZF + DC + AD is consistent provided that a sufficiently strong large cardinal axiom is consistent (the existence of infinitely many Woodin cardinals).
Quine's system of axiomatic set theory, "New Foundations" (NF), takes its name from the title (“New Foundations for Mathematical Logic”) of the 1937 article which introduced it. In the NF axiomatic system, the axiom of choice can be disproved.[28]
Statements consistent with the negation of AC
There are models of Zermelo-Fraenkel set theory in which the axiom of choice is false. We shall abbreviate "Zermelo-Fraenkel set theory plus the negation of the axiom of choice" by ZF¬C. For certain models of ZF¬C, it is possible to prove the negation of some standard facts. Any model of ZF¬C is also a model of ZF, so for each of the following statements, there exists a model of ZF in which that statement is true.
In some model, there is a set that can be partitioned into strictly more equivalence classes than the original set has elements, and a function whose domain is strictly smaller than its range. In fact, this is the case in all known models.
In some model, there is a function f from the real numbers to the real numbers such that f is not continuous at a, but f is sequentially continuous at a, i.e., for any sequence $\{x_n\}$ converging to a, $\lim_n f(x_n) = f(a)$.
In some model, there is an infinite set of real numbers without a countably infinite subset.
In some model, the real numbers are a countable union of countable sets.[29] This does not imply that the real numbers are countable: As pointed out above, to show that a countable union of countable sets is itself countable requires the Axiom of countable choice.
In some model, there is a field with no algebraic closure.
In all models of ZF¬C there is a vector space with no basis.
In some model, there is a vector space with two bases of different cardinalities.
In some model there is a free complete Boolean algebra on countably many generators.[30]
In some model there is a set that cannot be linearly ordered.
There exists a model of ZF¬C in which every set in Rn is measurable. Thus it is possible to exclude counterintuitive results like the Banach–Tarski paradox which are provable in ZFC. Furthermore, this is possible whilst assuming the Axiom of dependent choice, which is weaker than AC but sufficient to develop most of real analysis.
In all models of ZF¬C, the generalized continuum hypothesis does not hold.
For proofs, see Jech (2008).
Additionally, by imposing definability conditions on sets (in the sense of descriptive set theory) one can often prove restricted versions of the axiom of choice from axioms incompatible with general choice. This appears, for example, in the Moschovakis coding lemma.
Axiom of choice in type theory
In type theory, a different kind of statement is known as the axiom of choice. This form begins with two types, σ and τ, and a relation R between objects of type σ and objects of type τ. The axiom of choice states that if for each x of type σ there exists a y of type τ such that R(x,y), then there is a function f from objects of type σ to objects of type τ such that R(x,f(x)) holds for all x of type σ:
$${\displaystyle (\forall x^{\sigma })(\exists y^{\tau })R(x,y)\to (\exists f^{\sigma \to \tau })(\forall x^{\sigma })R(x,f(x)).}$$
Unlike in set theory, the axiom of choice in type theory is typically stated as an axiom scheme, in which R varies over all formulas or over all formulas of a particular logical form.
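As an illustration (our sketch, not part of the article), one instance of this scheme can be rendered in Lean 4. Because Lean's `∃` is propositional, the proof below goes through `Classical.choose`; in Martin-Löf type theory, where existence is a Σ-type, the analogous statement is provable with no axiom at all, which is the point made in the constructive-mathematics section above.

```lean
-- Our sketch of the type-theoretic axiom of choice for one fixed relation R:
-- (∀ x, ∃ y, R x y) → (∃ f, ∀ x, R x (f x)).
theorem typeTheoreticChoice {σ τ : Type} (R : σ → τ → Prop)
    (h : ∀ x : σ, ∃ y : τ, R x y) :
    ∃ f : σ → τ, ∀ x : σ, R x (f x) :=
  ⟨fun x => Classical.choose (h x), fun x => Classical.choose_spec (h x)⟩
```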
Quotes
The axiom of choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?
— Jerry Bona[31]
This is a joke: although the three are all mathematically equivalent, many mathematicians find the axiom of choice to be intuitive, the well-ordering principle to be counterintuitive, and Zorn's lemma to be too complex for any intuition.
The Axiom of Choice is necessary to select a set from an infinite number of pairs of socks, but not an infinite number of pairs of shoes.
— Bertrand Russell[32]
The observation here is that one can define a function to select from an infinite number of pairs of shoes by stating for example, to choose a left shoe. Without the axiom of choice, one cannot assert that such a function exists for pairs of socks, because left and right socks are (presumably) indistinguishable.
Tarski tried to publish his theorem [the equivalence between AC and "every infinite set A has the same cardinality as A × A", see above] in Comptes Rendus, but Fréchet and Lebesgue refused to present it. Fréchet wrote that an implication between two well known [true] propositions is not a new result, and Lebesgue wrote that an implication between two false propositions is of no interest.
Polish-American mathematician Jan Mycielski relates this anecdote in a 2006 article in the Notices of the AMS.[33]
The axiom gets its name not because mathematicians prefer it to other axioms.
— A. K. Dewdney
This quote comes from the famous April Fools' Day article in the computer recreations column of the Scientific American, April 1989.
Notes
Zermelo 1904.
Jech 1977, p. 351
Jech 1977, p. 348ff; Martin-Löf 2008, p. 210. According to Mendelson 1964, p. 201:
The status of the Axiom of Choice has become less controversial in recent years. To most mathematicians it seems quite plausible and it has so many important applications in practically all branches of mathematics that not to accept it would seem to be a wilful hobbling of the practicing mathematician.
Herrlich 2006, p. 9. According to Suppes 1972, p. 243, this was the formulation of the axiom of choice which was originally given by Zermelo 1904. See also Halmos 1960, p. 60 for this formulation.
Suppes 1972, p. 240.
Tourlakis (2003), pp. 209–210, 215–216.
Fraenkel, Abraham A.; Bar-Hillel, Yehoshua; Lévy, Azriel (1973), Foundations of set theory (2nd ed.), Amsterdam-London: North-Holland Publishing Co., pp. 69–70, ISBN 9780080887050, MR 0345816.
Rosenbloom, Paul C. (2005), The Elements of Mathematical Logic, Courier Dover Publications, p. 147, ISBN 9780486446172.
Dawson, J. W. (August 2006), "Shaken Foundations or Groundbreaking Realignment? A Centennial Assessment of Kurt Gödel's Impact on Logic, Mathematics, and Computer Science", Proc. 21st Annual IEEE Symposium on Logic in Computer Science (LICS 2006), pp. 339–341, doi:10.1109/LICS.2006.47, ISBN 978-0-7695-2631-7, S2CID 15526447, "The axiom of choice, though it had been employed unconsciously in many arguments in analysis, became controversial once made explicit, not only because of its non-constructive character, but because it implied such extremely unintuitive consequences as the Banach–Tarski paradox.".
Per Martin-Löf, Intuitionistic type theory, 1980. Anne Sjerp Troelstra, Metamathematical investigation of intuitionistic arithmetic and analysis, Springer, 1973.
Errett Bishop and Douglas S. Bridges, Constructive analysis, Springer-Verlag, 1985.
Martin-Löf, Per (2006). "100 Years of Zermelo's Axiom of Choice: What was the Problem with It?". The Computer Journal. 49 (3): 345–350. doi:10.1093/comjnl/bxh162.
Fred Richman, “Constructive mathematics without choice”, in: Reuniting the Antipodes—Constructive and Nonstandard Views of the Continuum (P. Schuster et al., eds), Synthèse Library 306, 199–205, Kluwer Academic Publishers, Amsterdam, 2001.
Gödel, Kurt (9 November 1938). "The Consistency of the Axiom of Choice and of the Generalized Continuum-Hypothesis". Proceedings of the National Academy of Sciences of the United States of America. 24 (12): 556–557. Bibcode:1938PNAS...24..556G. doi:10.1073/pnas.24.12.556. PMC 1077160. PMID 16577857.
Cohen, Paul (2019). "The Independence of the Axiom of Choice" (PDF). Stanford University Libraries. Retrieved 22 March 2019.
This is because arithmetical statements are absolute to the constructible universe L. Shoenfield's absoluteness theorem gives a more general result.
See Moore 2013, pp. 330–334, for a structured list of 74 equivalents. See Howard & Rubin 1998, pp. 11–16, for 86 equivalents with source references.
Blass, Andreas (1984). "Existence of bases implies the axiom of choice". Axiomatic set theory (Boulder, Colo., 1983). Contemporary Mathematics. 31. Providence, RI: American Mathematical Society. pp. 31–33. doi:10.1090/conm/031/763890. MR 0763890.
A. Hajnal, A. Kertész: Some new algebraic equivalents of the axiom of choice, Publ. Math. Debrecen, 19(1972), 339–340, see also H. Rubin, J. Rubin, Equivalents of the axiom of choice, II, North-Holland, 1985, p. 111.
Awodey, Steve (2010). Category theory (2nd ed.). Oxford: Oxford University Press. pp. 20–24. ISBN 978-0199237180. OCLC 740446073.
projective object in nLab
Serre, Jean-Pierre (2003), Trees, Springer Monographs in Mathematics, Springer, p. 23; Soukup, Lajos (2008), "Infinite combinatorics: from finite to infinite", Horizons of combinatorics, Bolyai Society Mathematical Studies, 17, Berlin: Springer, pp. 189–213, CiteSeerX 10.1.1.222.5699, doi:10.1007/978-3-540-77200-2_10, ISBN 978-3-540-77199-9, MR 2432534. See in particular Theorem 2.1, pp. 192–193.
It is shown by Jech 2008, pp. 119–131, that the axiom of countable choice implies the equivalence of infinite and Dedekind-infinite sets, but that the equivalence of infinite and Dedekind-infinite sets does not imply the axiom of countable choice in ZF.
It was shown by Lévy 1958 and others using Mostowski models that eight definitions of a finite set are independent in ZF without AC, although they are equivalent when AC is assumed. The definitions are I-finite, Ia-finite, II-finite, III-finite, IV-finite, V-finite, VI-finite and VII-finite. I-finiteness is the same as normal finiteness. IV-finiteness is the same as Dedekind-finiteness.
"[FOM] Are (C,+) and (R,+) isomorphic".
Ash, C. J. "A consequence of the axiom of choice". Journal of the Australian Mathematical Society. Retrieved 27 March 2018.
Axiom of dependent choice
"Quine's New Foundations". Stanford Encyclopedia of Philosophy. Retrieved 10 November 2017.
Jech 2008, pp. 142–144, Theorem 10.6 with proof.
Stavi, Jonathan (1974). "A model of ZF with an infinite free complete Boolean algebra". Israel Journal of Mathematics. 20 (2): 149–163. doi:10.1007/BF02757883. S2CID 119543439.
Krantz, Steven G. (2002), "The axiom of choice", Handbook of Logic and Proof Techniques for Computer Science, Springer, pp. 121–126, doi:10.1007/978-1-4612-0115-1_9, ISBN 978-1-4612-6619-8.
The boots-and-socks metaphor was given in 1919 by Russell 1993, pp. 125–127. He suggested that a millionaire might have ℵ0 pairs of boots and ℵ0 pairs of socks.
Among boots we can distinguish right and left, and therefore we can make a selection of one out of each pair, namely, we can choose all the right boots or all the left boots; but with socks no such principle of selection suggests itself, and we cannot be sure, unless we assume the multiplicative axiom, that there is any class consisting of one sock out of each pair.
Russell generally used the term "multiplicative axiom" for the axiom of choice. Referring to the ordering of a countably infinite set of pairs of objects, he wrote:
There is no difficulty in doing this with the boots. The pairs are given as forming an ℵ0, and therefore as the field of a progression. Within each pair, take the left boot first and the right second, keeping the order of the pair unchanged; in this way we obtain a progression of all the boots. But with the socks we shall have to choose arbitrarily, with each pair, which to put first; and an infinite number of arbitrary choices is an impossibility. Unless we can find a rule for selecting, i.e. a relation which is a selector, we do not know that a selection is even theoretically possible.
Russell then suggests using the location of the centre of mass of each sock as a selector.
Mycielski, Jan (2006), "A system of axioms of set theory for the rationalists" (PDF), Notices of the American Mathematical Society, 53 (2): 206–213, MR 2208445.
References
Halmos, Paul R. (1960). Naive Set Theory. The University Series in Undergraduate Mathematics. Princeton, NJ: van Nostrand Company. Zbl 0087.04403.
Herrlich, Horst (2006). Axiom of Choice. Lecture Notes in Math. 1876. Berlin: Springer-Verlag. ISBN 978-3-540-30989-5.
Howard, Paul; Rubin, Jean E. (1998). Consequences of the axiom of choice. Mathematical Surveys and Monographs. 59. Providence, Rhode Island: American Mathematical Society. ISBN 9780821809778.
Jech, Thomas (2008) [1973]. The axiom of choice. Mineola, New York: Dover Publications. ISBN 978-0-486-46624-8.
Jech, Thomas (1977). "About the Axiom of Choice". In John Barwise (ed.). Handbook of Mathematical Logic.
Lévy, Azriel (1958). "The independence of various definitions of finiteness" (PDF). Fundamenta Mathematicae. 46: 1–13. doi:10.4064/fm-46-1-1-13.
Per Martin-Löf, "100 years of Zermelo's axiom of choice: What was the problem with it?", in Logicism, Intuitionism, and Formalism: What Has Become of Them?, Sten Lindström, Erik Palmgren, Krister Segerberg, and Viggo Stoltenberg-Hansen, editors (2008). ISBN 1-4020-8925-2
Mendelson, Elliott (1964). Introduction to Mathematical Logic. New York: Van Nostrand Reinhold.
Moore, Gregory H. (1982). Zermelo's axiom of choice, Its origins, development and influence. Springer. ISBN 978-0-387-90670-6., available as a Dover Publications reprint, 2013, ISBN 0-486-48841-1.
Moore, Gregory H (2013) [1982]. Zermelo's axiom of choice: Its origins, development & influence. Mineola, New York: Dover Publications. ISBN 978-0-486-48841-7.
Herman Rubin, Jean E. Rubin: Equivalents of the axiom of choice. North Holland, 1963. Reissued by Elsevier, April 1970. ISBN 0-7204-2225-6.
Herman Rubin, Jean E. Rubin: Equivalents of the Axiom of Choice II. North Holland/Elsevier, July 1985, ISBN 0-444-87708-8.
Russell, Bertrand (1993) [1919]. Introduction to mathematical philosophy. New York: Dover Publications. ISBN 978-0-486-27724-0.
Suppes, Patrick (1972) [1960]. Axiomatic set theory. Mineola, New York: Dover. ISBN 978-0-486-61630-8.
George Tourlakis, Lectures in Logic and Set Theory. Vol. II: Set Theory, Cambridge University Press, 2003. ISBN 0-511-06659-7
Zermelo, Ernst (1904). "Beweis, dass jede Menge wohlgeordnet werden kann" (reprint). Mathematische Annalen. 59 (4): 514–16. doi:10.1007/BF01445300. S2CID 124189935.
Ernst Zermelo, "Untersuchungen über die Grundlagen der Mengenlehre I," Mathematische Annalen 65: (1908) pp. 261–81. PDF download via digizeitschriften.de
Translated in: Jean van Heijenoort, 2002. From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931. New edition. Harvard University Press. ISBN 0-674-32449-8
1904. "Proof that every set can be well-ordered," 139-41.
1908. "Investigations in the foundations of set theory I," 199–215.
Axiom of Choice entry in the Springer Encyclopedia of Mathematics.
Axiom of Choice and Its Equivalents entry at ProvenMath. Includes formal statement of the Axiom of Choice, Hausdorff's Maximal Principle, Zorn's Lemma and formal proofs of their equivalence down to the finest detail.
Consequences of the Axiom of Choice, based on the book by Paul Howard and Jean Rubin.
The Axiom of Choice entry by John Lane Bell in the Stanford Encyclopedia of Philosophy.
https://mattermodeling.stackexchange.com/tags/cp2k | # Questions tagged [cp2k]
For questions about (or related to) the software CP2K.
22 questions
Filter by
Sorted by
Tagged with
50 views
### CP2K geometry optimization SCF takes 3000s for a step
I am new to CP2K and doing a geometry optimization for a cluster. At the beginning everything looks good, each SCF step takes about 13 seconds, the energy drops quickly and the convergence looks like ...
• 21
55 views
### total force calculated with cp2k not zero
I performed DFT calculations for a box of 64 water molecules under Periodic Boundary Condition using CP2K. I found that the forces on each atom do not sum up to zero, as shown in the attached figure. ...
• 273
381 views
### Calculate Hessian for configurations that are not at the minimum of the Potential Energy Surface
I want to calculate the Hessian matrix for molecular configurations that are not at the minimum of the Potential Energy Surface, using cp2k. But it seems that cp2k requires the configuration to be ...
• 273
136 views
### How to run 2nd generation CPMD in CP2K?
I believe CP2K by default implements Born–Oppenheimer MD. I have seen many recent papers that use 2nd generation Car–Parrinello MD using CP2K. But I couldn't find any direct way of switching to 2nd-...
• 1,521
170 views
### Setup for 4-point flexible water model [closed]
I tried to search online if there are examples or suggestions on how to set up a 4-point flexible water model, such as TIP4P/2005f, qTIP4P/f, TIP4P/$\epsilon$ FLEX, etc., but there's no clear example ...
• 1,982
72 views
### Molecular dynamics to reproduce dispersion interactions [closed]
I'm trying to simulate the behavior of chromophore molecules incorporated into a polymer under an external electric field. I've tried to use classical molecular dynamics (GROMACS) but it's unable to ...
• 2,323
180 views
### What is a "charged system" in this specific context?
I'm doing a polarization calculation for the first time. To specify one of the input parameters (the "reference point") correctly, I have to figure out if my system is "charged." ...
• 707
42 views
### Kinetic energy cut-off for Gaussian-type orbitals when applying periodic boundary conditions [duplicate]
I have a solid with PBC (periodic boundary conditions) and I want to perform a DFT simulation. I use GTO as the basis. In a paper I read, I can transform the GTO to a Bloch function (crystalline ...
• 261
67 views
### Reading complex potential into CP2K
Is it possible to read a complex potential into CP2K? I have generated a potential using FDTD at a given frequency, which results in a complex result. The potential is to be read in to a real time ...
• 693
68 views
### CP2K Spherical Cell [closed]
I'm trying to run a gas phase cluster calculation in CP2K. I want to create an outside force to constrain water molecules that are in the simulation within a radius of a central ion. I want to create ...
• 71
295 views
### CP2K conserved quantity changed when restarting NVT MD?
The details are described elsewhere, but I got no answer for days, so I post it here for help. As the title says, I noticed a relatively big change in the conserved quantity and don't know why. I've uploaded ...
• 295
163 views
### Is basis set superposition error reduced when using the GAPW method?
CP2K implements the Gaussian and Augmented Planewaves (GAPW) approach for all-electron calculations. My understanding is that the GAPW method involves using atom-centered Gaussian type orbitals to ...
• 613
119 views
### Periodic polarizable QM/MM embedding [closed]
Are there any standalone open source software codes available for periodic polarizable QM/MM embedding MD simulations? In my knowledge, CP2K only has the option for electrostatic embedding and other ...
• 1,982
310 views
### Failure of CP2K density functional software to converge for a 64 Si atom amorphous structure
The structure was created using Stillinger Weber with LAMMPS. The command I used was the following: ...
• 81
196 views
### CP2K vs BigDFT comparison [closed]
I typically run DFT calculations with 1000 to 5000 atoms using CP2K. This works fine but I'm interested in BigDFT also. Is there anyone here that has experience using BigDFT and is able to compare ...
• 693
272 views
### Is there any reason not to sum the kinetic and potential energy from an NPT simulation to get internal energy?
I would be very grateful for some newbie-level advice from a thermodynamics guru. I ran NPT simulations on a particular system (in CP2K software) to get fluid densities for use in fluid dynamics ...
• 707
215 views
### What are some methods for modeling bulk phase infrared spectra?
I am interested in predicting the infrared spectra of bulk phase molecules, it seems that AIMD (ab initio molecular dynamics) is the current best approach. I have found a tutorial on using CP2K to do ...
• 1,192
264 views
### Is it possible to use the Parrinello-Rahman barostat for NPT simulations in CP2K?
It is possible to use different types of thermostats in CP2K by specifying them in the corresponding section of the input as mentioned here. But there is no mention of specifying the barostat type in ...
• 1,184
https://gaudi.readthedocs.io/en/latest/tutorial-firstrun.html

# Tutorial: Your first GaudiMM calculation
Running a GaudiMM calculation is easy. All you need is a simple command:
gaudi run input_file.yaml
The question is… how do I create that input_file.yaml?
## How to create an input file from scratch
Input files in GaudiMM are formatted with YAML, which follows readable conventions that are still parseable by computers. To easily edit YAML files, we recommend using a text editor that supports syntax highlighting, such as Sublime Text 3, Atom or Visual Studio Code. If you use .yaml as the extension of the input filename, the editor will colorize the text automatically. Otherwise, you can always configure it manually; check the docs of your editor to do so. If you don't want syntax highlighting, that's fine: GaudiMM will work the same.
Input files must contain five sections: output, ga, similarity, genes, and objectives. Each of these sections is introduced by its name followed by a colon and a newline, and its contents go in an indented block. For readability, I usually insert a blank line between sections.
output:
contents of output
ga:
contents of ga
similarity:
contents of similarity
genes:
contents of genes
objectives:
contents of objectives
Note
The API documentation of gaudi.parse.Settings contains the full list of parameters for output and ga sections. Programmatically defined default values are always defined in gaudi.parse.Settings.default_values. The appropriate types and whether they are required or not are defined in gaudi.parse.Settings.schema. GaudiMM will check if the submitted values conform to these rules and report any possible mistakes.
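To make that check concrete, here is a rough sketch of what such validation amounts to. This is not GaudiMM's actual parser (the real rules live in gaudi.parse.Settings.schema and also validate types and values); it is a stdlib-only, line-based illustration that merely looks for the five top-level section names, and `missing_sections` is a hypothetical helper:

```python
# The five top-level sections every GaudiMM input file must contain.
REQUIRED_SECTIONS = {"output", "ga", "similarity", "genes", "objectives"}

def top_level_sections(text):
    """Collect unindented 'name:' lines, i.e. the top-level YAML sections.

    A deliberately naive scan: real YAML parsing is left to GaudiMM.
    """
    found = set()
    for line in text.splitlines():
        if line and not line[0].isspace() and line.rstrip().endswith(":"):
            found.add(line.rstrip()[:-1])
    return found

def missing_sections(text):
    """Report which required sections an input file lacks, sorted by name."""
    return sorted(REQUIRED_SECTIONS - top_level_sections(text))

snippet = "output:\n  name: some_example\nga:\n  population: 200\n"
print(missing_sections(snippet))  # → ['genes', 'objectives', 'similarity']
```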
### The output section
This section governs how the results and reports will be created. Everything is optional, since each key has a default value, but it is preferable to at least specify the name of the job (otherwise, it will be set to five random characters) and the path where the result files will be written. Like this:
output:
name: some_example
path: results
Note
All the relative paths in GaudiMM are relative to the location of the input file, not the working directory. To ease this difference, we recommend running the jobs from the same folder where the input file is located. Of course, you can always use absolute paths.
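That convention can be sketched in a couple of lines — illustrative only, not GaudiMM's actual code; `resolve` and its arguments are hypothetical names:

```python
import os

def resolve(relative, input_path):
    """Resolve a path the way the note describes: relative to the directory
    containing the input file, not to the current working directory.
    Absolute paths pass through unchanged (os.path.join drops the base)."""
    base = os.path.dirname(os.path.abspath(input_path))
    return os.path.normpath(os.path.join(base, relative))

# e.g. an input file at /tmp/jobs/input_file.yaml with `path: results`
print(resolve("results", "/tmp/jobs/input_file.yaml"))  # → /tmp/jobs/results
```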
### The ga section
This section hosts the parameters of the Genetic Algorithm GaudiMM uses. Unless you know what you are doing, the only values you should modify are population and generations. To see the appropriate values, refer to FAQ & Known issues. For example:
ga:
population: 200
generations: 100
### The similarity section
This section contains the parameters of the similarity operator, which decides, given two individuals with the same fitness, whether they can be considered the same solution or not. This section is deliberately loose: you define the Python function to call, together with its positional and keyword arguments.
For the time being, the only similarity function we ship is based on the RMSD of the two structures: gaudi.similarity.rmsd(). The arguments are the Molecule genes that should be compared and the RMSD threshold below which two individuals are considered equivalent.
similarity:
module: gaudi.similarity.rmsd
args: [[Ligand], 1.0]
kwargs: {}
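Conceptually, the operator reduces to an RMSD threshold test. The sketch below illustrates the idea on raw coordinate lists; the real gaudi.similarity.rmsd() operates on Molecule genes and this sketch omits any structural superposition, and `rmsd`/`same_solution` are hypothetical names:

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equal-length lists of (x, y, z)."""
    n = len(coords_a)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / n)

def same_solution(coords_a, coords_b, threshold=1.0):
    """Two equally fit individuals count as one solution below the threshold."""
    return rmsd(coords_a, coords_b) < threshold

a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.1, 0.0, 0.0), (1.1, 0.0, 0.0)]
print(same_solution(a, b))  # → True
```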
### The genes section
This section describes the components of the exploration stage of the algorithm; i.e., the features of each Individual in the population. While the previous sections were dictionaries (that is, collections of key-value pairs), the genes and objectives sections are actually lists of dictionaries. As a result, you need to specify them like this:
genes:
- name: Protein
module: gaudi.genes.molecule
path: /path/to/protein.mol2
- name: Torsion
module: gaudi.genes.torsion
target: Ligand
flexibility: 360
Notice the dash - next to name. This, and the extra indentation, define a list. Each element of this list is a new gene. Each gene must include two compulsory values:
• name. A unique identifier for this gene. If you add two genes with the same name, GaudiMM will complain.
• module. The Python import path to the module that contains the gene. All GaudiMM builtin genes are located at gaudi.genes.
All other parameters are determined by the chosen gene. Check the corresponding documentation for each one!
Note
How do I know which genes to use?
Unless you code a gene of your own to replace it, you will always need one or more gaudi.genes.molecule.Molecule genes. Then, choose the flexibility models you want to implement on top of such molecule. Several examples are provided in GaudiMM basics (start here!).
### The objectives section
Like the genes section, the objectives section is a list of dictionaries, so it follows the same syntax:
- name: Clashes
module: gaudi.objectives.contacts
which: clashes
weight: -1.0
probes: [Ligand]
- name: LigScore
module: gaudi.objectives.ligscore
weight: -1.0
proteins: [Protein]
ligands: [Ligand]
method: pose
In addition to the required name and module parameters, each objective needs a weight parameter. If set to 1.0, the algorithm will maximize the score returned by the objective; if set to -1.0, it will be minimized. Theoretically, any other positive or negative float will work, but stick to the convention of using 1.0 or -1.0.
Any other parameters present in an objective are the responsibility of that objective and are specified in its corresponding documentation.
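For reference, assembling the snippets from this tutorial gives a complete input file. Note that a Ligand Molecule gene is included here even though the genes example above only shows Protein and Torsion: the Torsion gene, the similarity section and both objectives all refer to a gene named Ligand. Both molecule paths are placeholders:

```yaml
output:
  name: some_example
  path: results

ga:
  population: 200
  generations: 100

similarity:
  module: gaudi.similarity.rmsd
  args: [[Ligand], 1.0]
  kwargs: {}

genes:
  - name: Protein
    module: gaudi.genes.molecule
    path: /path/to/protein.mol2
  - name: Ligand
    module: gaudi.genes.molecule
    path: /path/to/ligand.mol2
  - name: Torsion
    module: gaudi.genes.torsion
    target: Ligand
    flexibility: 360

objectives:
  - name: Clashes
    module: gaudi.objectives.contacts
    which: clashes
    weight: -1.0
    probes: [Ligand]
  - name: LigScore
    module: gaudi.objectives.ligscore
    weight: -1.0
    proteins: [Protein]
    ligands: [Ligand]
    method: pose
```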
That’s it! Now save it with a memorable filename and run it!
## How to run your input file
Let’s get back to the beginning of the tutorial: all you need to do is type:
gaudi run input_file.yaml
If everything is fine, you’ll see the following output in the console:
$> gaudi run input_file.yaml
[GaudiMM ASCII-art banner]
------------------------------------------------------------------------------------------
GaudiMM: Genetic Algorithms with Unrestricted Descriptors for Intuitive Molecular Modeling
2017, InsiliChem · v0.0.2+251.g122cdf0.dirty
Launching job with...
Genes: Protein, Ligand, Rotamers, Torsion, Search
Objectives: Clashes, Contacts, HBonds, LigScore
After the first iteration is complete, the realtime report data will kick in:
gen progress nevals speed eta avg std min max
0 4.76% 20 1.25 ev/s 0:16:34 [ 3.080e+03 -2.690e+02 7.500e-01 1.179e+03] [ 1.027e+03 7.200e+01 8.292e-01 4.964e+02] [ 584.881 -398.517 0. 144.78 ] [ 4.753e+03 -1.053e+02 3.000e+00 2.066e+03]
1 9.52% 60 1.28 ev/s 0:16:32 [ 2.787e+03 -2.659e+02 1.400e+00 9.142e+02] [ 1198.699 98.675 1.393 484.309] [ 415.912 -398.517 0. 16.45 ] [ 4538.766 -71.562 5. 1854.63 ]
The first time you see this it might be confusing, especially if the terminal wraps long lines. Let's describe each tab-separated column:
• gen. The current generation.
• progress. Percentage of completion of the job. This is estimated from the expected number of operations: (generations + 1) * lambda_ * (cxpb + mutpb).
• nevals. Number of evaluations performed in current generation.
• speed. Estimated number of evaluations per second. This does not take into account the time spent in the variation stage.
• eta. Estimated time left.
• avg. Average of all the fitness values reported by each objective in the current generation. They are listed in the order given in the input file, and also reflected above, after the Launching job with… line.
• std. Same as avg, but for the standard deviation.
• max. The maximum fitness value reported by each objective in the current generation.
• min. Same as above, but for the minimum value.
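The progress estimate above can be spelled out in code (a sketch; lambda_ is the number of offspring evaluated per generation, and cxpb and mutpb are the crossover and mutation probabilities — the sample values below are arbitrary, not GaudiMM defaults):

```python
def expected_evaluations(generations, lambda_, cxpb, mutpb):
    """Total number of evaluations the progress column measures against."""
    return (generations + 1) * lambda_ * (cxpb + mutpb)

def progress(evals_done, generations, lambda_, cxpb, mutpb):
    """Completion percentage, as printed in the realtime report."""
    return 100.0 * evals_done / expected_evaluations(generations, lambda_, cxpb, mutpb)

# e.g. 100 generations, 20 offspring per generation, cxpb + mutpb = 1.0
print(round(progress(20, 100, 20, 0.5, 0.5), 2))  # → 0.99
```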
If the setting check_every in the output section is greater than zero, GaudiMM will dump the current population every check_every generations. That way, you can assess the progress visually along the simulation.
Also, if you feel that the algorithm has progressed enough to satisfy your needs, you can cancel it prematurely with Ctrl+C. GaudiMM will detect the interruption and offer to dump the current state of the simulation:
^C[!]
Interruption detected. Write results so far? (y/N):
Answer `y` and wait a couple of seconds while GaudiMM writes the results. To analyze them, check the following tutorial:
http://mathoverflow.net/revisions/56790/list
In Corollary 5.12 of Whitehead's book it is shown that the cobase change of a homotopy equivalence along a cofibration (NDR pair) is again a homotopy equivalence. This ought to imply homotopy invariance of the double mapping cylinder construction (with respect to homotopy equivalences of the spaces used to form the double mapping cylinder). A result along these lines is sometimes called the "gluing lemma."
ADDED: Here's a better reference for the gluing lemma: tom Dieck, Tammo. Partitions of unity in homotopy theory. Compositio Math. 23 (1971), 159–167.
Consider a diagram $B \leftarrow A \to C$ of spaces homotopy equivalent to CW complexes, whose double mapping cylinder we will take. Use the map of diagrams
$$\begin{array}{ccccc} |S.B| & \leftarrow & |S.A| & \rightarrow & |S.C| \\ \downarrow & & \downarrow & & \downarrow \\ B & \leftarrow & A & \rightarrow & C \end{array}$$
where $|S.-|$ in each case means geometric realization of the total singular complex. Then homotopy invariance applied to the above shows that the double mapping cylinder of the top line, which is a CW complex, has the homotopy type of the double mapping cylinder of the bottom line.
As far as finiteness goes, that should be a consequence of Wall's finiteness conditions for CW complexes, or maybe one could argue directly: it seems to me that if $f: X \to Y$ is a map of spaces, each having the homotopy type of a finite CW complex, then there is a factorization $X \to Y' \to Y$ in which $Y'$ is obtained from $X$ by attaching finitely many cells and $Y' \to Y$ is a homotopy equivalence. If we use this, then it seems to me that we can find a diagram $B' \leftarrow A' \to C'$ which maps by homotopy equivalences to $B \leftarrow A \to C$ such that each space in the new diagram is a finite complex and each map is a cofibration. Then the pushout $B' \cup_{A'} C'$ has the homotopy type of the original double mapping cylinder and it is also a finite CW complex.
http://journal15.magtechjournal.com/Jwk3_rdhyxb/article/2020/1009-5470/1009-5470-39-5-109.shtml
Exchanges of surface plastic particles in the South China Sea through straits using Lagrangian method
MENG Zhao,1,3, LI Ning5, GUAN Yuping,1,2,4, FENG Yang1,2
1. State Key Laboratory of Tropical Oceanography (South China Sea Institute of Oceanology, Chinese Academy of Sciences), Guangzhou 510301, China
2. Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou), Guangzhou 511458, China
3. College of Resources and Environment, University of Chinese Academy of Sciences, Beijing 100049, China
4. College of Earth and Planetary Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
5. Faculty of information Technology, Macau University of Science and Technology, Macao Special Administrative Region 999078, China
Corresponding authors: GUAN Yuping. E-mail: guan@scsio.ac.cn
Editor: YIN Bo
Received: 2019-11-20 Revised: 2020-02-24 Online: 2020-09-10
Fund supported: National Natural Science Foundation of China. 41676021Major Project of Guangdong Province Laboratory. GML2019ZD0306Major Project of Guangdong Province Laboratory. GML2019ZD0303
Abstract
Plastics have caused serious pollution in the ocean and have become one of the marine environmental issues of greatest concern to the international marine community. The South China Sea is a major area of marine plastic pollution, and the regions around it are major sources of plastic waste; previous studies have focused on the offshore environment. In this study, we discuss the exchange of surface plastic particles between the South China Sea and surrounding waters by ocean circulation. Particles were placed at the junctions of the South China Sea and its adjacent seas and released at the initial time of every season. Using a Lagrangian particle-tracing method, we analyzed the trajectories and final locations of the particles within one year after release. The results show that plastic particles enter the South China Sea mainly through the southern straits. The transport of particles from the same starting point varies greatly with season. During autumn and winter, most particles enter the South China Sea and the Java Sea, and few are transported to the Pacific Ocean; during spring and summer, only a few particles enter the South China Sea and the Java Sea, while most are transported to the Pacific Ocean. The South China Sea circulation is mainly driven by the monsoon and shows pronounced seasonal variability. This research will help in understanding the impact of surface plastics around the South China Sea on plastic pollutants within it.
Keywords: plastic ; circulation ; seasonal change ; South China Sea
MENG Zhao, LI Ning, GUAN Yuping, FENG Yang. Exchanges of surface plastic particles in the South China Sea through straits using Lagrangian method. JOURNAL OF TROPICAL OCEANOGRAPHY[J], 2020, 39(5): 109-116 doi:10.11978/2019118
1 Data and methods
1.2 Methods
1.2.1 Structure
$\frac{\mathrm{d}x_{\mathrm{path}}(t,\ x_0,\ t_0)}{\mathrm{d}t} = U[x_{\mathrm{path}}(t,\ x_0,\ t_0),\ t]$
1.2.2 Particle control
$(p_x,\ p_y) = \left[\frac{S_y - lat_1}{lat_2 - lat_1}\times n_{\mathrm{line}},\ \frac{S_x - lon_1}{lon_2 - lon_1}\times n_{\mathrm{column}}\right]$
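A minimal Python sketch of this mapping, assuming a regular grid spanning lon1–lon2 and lat1–lat2 with n_line rows and n_column columns (the function and variable names simply mirror the equation and are not from the paper's code):

```python
def grid_indices(s_lon, s_lat, lon1, lon2, lat1, lat2, n_line, n_column):
    """Map a particle position (s_lon, s_lat) to fractional grid indices
    (p_x, p_y), as in the equation above: linear interpolation of the
    position within the model domain."""
    p_x = (s_lat - lat1) / (lat2 - lat1) * n_line
    p_y = (s_lon - lon1) / (lon2 - lon1) * n_column
    return p_x, p_y

# midpoint of a 100 x 200 grid covering 100-120 degrees E, 0-20 degrees N
print(grid_indices(110.0, 10.0, 100.0, 120.0, 0.0, 20.0, 100, 200))  # → (50.0, 100.0)
```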
1.2.3 Dynamic tracer trajectory generation
$x_{\mathrm{path}}(t,\ x_0,\ t_0) = x_0 + \int_{t_0}^{t} u[x_{\mathrm{path}}(s,\ x_0,\ t_0),\ s]\ \mathrm{d}s$
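Numerically, this integral is evaluated step by step along the flow. The sketch below uses a forward-Euler step in an analytic velocity field purely for illustration — the study itself advects particles in OFES model currents, and its actual integration scheme is not given here; `advect` is a hypothetical name:

```python
def advect(x0, y0, velocity, t0, t_end, dt):
    """Integrate dx/dt = u(x, t) with forward Euler, returning the path
    as a list of (x, y) positions from t0 to t_end."""
    x, y, t = x0, y0, t0
    path = [(x, y)]
    while t < t_end:
        u, v = velocity(x, y, t)           # sample the velocity field
        x, y, t = x + u * dt, y + v * dt, t + dt
        path.append((x, y))
    return path

# Uniform eastward flow: after 10 time units the particle is 10 units east.
uniform = lambda x, y, t: (1.0, 0.0)
path = advect(0.0, 0.0, uniform, 0.0, 10.0, 1.0)
print(path[-1])  # → (10.0, 0.0)
```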
2 Results and analysis
Figure 1
a. Taiwan Strait; b. Luzon Strait; c. junction of the South China Sea and the Sulu Sea (site 1); d. junction of the South China Sea and the Sulu Sea (site 2); e. coast of Kalimantan; f. Karimata Strait; g. Strait of Malacca; h. coast of the Indochina Peninsula. Red stars mark the initial particle positions, blue lines show the particle trajectories, and black triangles mark the final particle positions. Base map produced under map approval number GS(2016)1667.
Fig. 1 Particle transport in spring
Figure 2
Same legend as Figure 1.
Fig. 2 Particle transport in summer
Figure 3
Same legend as Figure 1.
Fig. 3 Particle transport in autumn
Figure 4
Same legend as Figure 1.
Fig. 4 Particle transport in winter
3 Results and discussion
Figure 5
Fig. 5 Mean currents in spring (a), summer (b), autumn (c), and winter (d) (Data Source: OGCM for the Earth Simulator, OFES)
参考文献 原文顺序 文献年度倒序 文中引用次数倒序 被引期刊影响因子
BJORNDAL K A, BOLTEN A B, LAGUEUX C J, 1994.
Ingestion of marine debris by juvenile sea turtles in coastal Florida habitats
[J]. Marine Pollution Bulletin, 28(3):154-158.
CÓZAR A, ECHEVARRÍA F, GONZÁLEZ-GORDILLO J I, et al, 2014.
Plastic debris in the open ocean
[J]. Proceedings of the National Academy of Sciences of the United States of America, 111(28):10239-10244.
There is a rising concern regarding the accumulation of floating plastic debris in the open ocean. However, the magnitude and the fate of this pollution are still open questions. Using data from the Malaspina 2010 circumnavigation, regional surveys, and previously published reports, we show a worldwide distribution of plastic on the surface of the open ocean, mostly accumulating in the convergence zones of each of the five subtropical gyres with comparable density. However, the global load of plastic on the open ocean surface was estimated to be on the order of tens of thousands of tons, far less than expected. Our observations of the size distribution of floating plastic debris point at important size-selective sinks removing millimeter-sized fragments of floating plastic on a large scale. This sink may involve a combination of fast nano-fragmentation of the microplastic into particles of microns or smaller, their transference to the ocean interior by food webs and ballasting processes, and processes yet to be discovered. Resolving the fate of the missing plastic debris is of fundamental importance to determine the nature and significance of the impacts of plastic pollution in the ocean.
CÓZAR A, MARTÍ E, DUARTE C M, et al, 2017.
The Arctic Ocean as a dead end for floating plastics in the North Atlantic branch of the Thermohaline Circulation
URL PMID:28439534
CHIBA S, SAITO H, FLETCHER R, et al, 2018.
Human footprint in the abyss: 30 year records of deep-sea plastic debris
[J]. Marine Policy, 96:204-212.
ERIKSEN M, LEBRETON L C M, CARSON H S, et al, 2014.
Plastic pollution in the World’s oceans: more than 5 trillion plastic pieces weighing over 250000 Tons Afloat at Sea
[J]. PLoS ONE, 9(12):e111913.
Plastic pollution is ubiquitous throughout the marine environment, yet estimates of the global abundance and weight of floating plastics have lacked data, particularly from the Southern Hemisphere and remote regions. Here we report an estimate of the total number of plastic particles and their weight floating in the world's oceans from 24 expeditions (2007-2013) across all five sub-tropical gyres, costal Australia, Bay of Bengal and the Mediterranean Sea conducting surface net tows (N = 680) and visual survey transects of large plastic debris (N = 891). Using an oceanographic model of floating debris dispersal calibrated by our data, and correcting for wind-driven vertical mixing, we estimate a minimum of 5.25 trillion particles weighing 268,940 tons. When comparing between four size classes, two microplastic <4.75 mm and meso- and macroplastic >4.75 mm, a tremendous loss of microplastics is observed from the sea surface compared to expected rates of fragmentation, suggesting there are mechanisms at play that remove <4.75 mm plastic particles from the ocean surface.
GEYER R, JAMBECK J R, LAW K L, 2017.
Production, use, and fate of all plastics ever made
URL PMID:28776036
HAND E, 2014.
Arctic sea ice traps floating plastic
[J]. Science, 344(6187):985.
HU JIANYU, KAWAMURA H, HONG HUASHENG, et al, 2000.
A review on the currents in the South China Sea: seasonal circulation, South China Sea warm current and Kuroshio intrusion
[J]. Journal of Oceanography, 56(6):607-624.
HURLEY R, WOODWARD J, ROTHWELL J J, 2018.
Microplastic contamination of river beds significantly reduced by catchment-wide flooding
[J]. Nature Geoscience, 11(4):251-257.
JAMBECK J R, GEYER R, WILCOX C, et al, 2015.
Plastic waste inputs from land into the ocean
[J]. Science, 347(6223):768-771.
Plastic debris in the marine environment is widely documented, but the quantity of plastic entering the ocean from waste generated on land is unknown. By linking worldwide data on solid waste, population density, and economic status, we estimated the mass of land-based plastic waste entering the ocean. We calculate that 275 million metric tons (MT) of plastic waste was generated in 192 coastal countries in 2010, with 4.8 to 12.7 million MT entering the ocean. Population size and the quality of waste management systems largely determine which countries contribute the greatest mass of uncaptured waste available to become plastic marine debris. Without waste management infrastructure improvements, the cumulative quantity of plastic waste available to enter the ocean from land is predicted to increase by an order of magnitude by 2025.
LAMB J B, WILLIS B L, FIORENZA E A, et al, 2018.
Plastic waste associated with disease on coral reefs
[J]. Science, 359(6374):460-462.
URL PMID:29371469
LAVERS J L, BOND A L, 2017.
Exceptional and rapid accumulation of anthropogenic debris on one of the world’s most remote and pristine islands
[J]. Proceedings of the National Academy of Sciences of the United States of America, 114(23):6052-6055.
In just over half a century plastic products have revolutionized human society and have infiltrated terrestrial and marine environments in every corner of the globe. The hazard plastic debris poses to biodiversity is well established, but mitigation and planning are often hampered by a lack of quantitative data on accumulation patterns. Here we document the amount of debris and rate of accumulation on Henderson Island, a remote, uninhabited island in the South Pacific. The density of debris was the highest reported anywhere in the world, up to 671.6 items/m(2) (mean +/- SD: 239.4 +/- 347.3 items/m(2)) on the surface of the beaches. Approximately 68% of debris (up to 4,496.9 pieces/m(2)) on the beach was buried <10 cm in the sediment. An estimated 37.7 million debris items weighing a total of 17.6 tons are currently present on Henderson, with up to 26.8 new items/m accumulating daily. Rarely visited by humans, Henderson Island and other remote islands may be sinks for some of the world's increasing volume of waste.
https://larswericson.wordpress.com/page/2/

# Privatizing the 4th of July
My family and I went to see 4th of July fireworks in a Southern town north of Charlotte. The only available fireworks were at a country club. Members of the club had their own roped-off area and the hoi polloi were invited to park on a nearby main road and hike onto the rest of the golf course. About 5,000 cars were parked on the main road, a mile up and down, and the walk with small children was a challenge. We chose to park near the main entrance to the club. There was a community of over \$1MM brick houses associated with the club. Not all residents are members. Membership in the club starts with a down payment of around \$50,000 and then monthly payments around \$700. I saw a non-member resident ask to sit in the club members area, something she had done in previous years, and she was turned away. This community thus comprises both haves and have-nots, first class and business class. The rest of us were in coach. Parking my car I was close to but not blocking one resident’s driveway. He came out and asked us to move, pretexting visitors soon to come. After the fireworks were over, we returned and found his house unlit and unoccupied.
Growing up in New England I have a memory, perhaps false, that there were always town-sponsored fireworks in some area big enough to accommodate all of the town. North of Charlotte, where there is a large lake, there is almost no public space other than shopping malls. The opening of a small beach was a big deal, with nearby residents loudly hand-wringing over the accompanying parking problems. The beach had about 70 parking spaces. Consistently 700 come to bathe. Even the highway, route 77, has been privatized, with a fast lane under construction for those that can afford to pay per use.
North Carolina is a state which is notable for the thorough inadequacy of its public transportation, its low teacher pay and disdain for public education, and its refusal of Federal Medicaid funds. The state, which just elected a Democratic governor by a narrow margin, has seen its Republican-dominated legislature systematically strip the governor of his powers and cut funds to his employees. It’s quite something.
As a state on the losing side of what has been called, down here (not so sure if by the current generation, but surely the prior one), the War Between The States, I wonder if the lack of municipal attention paid to the 4th of July has strong historical roots in that conflict. There is some evidence in favor of this interpretation. So despite the otherwise rah-rah nature of Southern conservatives regarding the US of A and all of that, I wonder if institutionally the state is still driven by those older grievances, in a completely unconscious but still persistent way.
# Python software spaghetti stack for web apps
When you want to solve a problem, the easiest thing is to follow in the tracks of someone else. This can lead to a fairly complicated software stack to combine solutions to different issues. My stack is going in this direction:
SQL/SQLite: Database. Simple and light but apparently more than good enough for most web apps.
Python: main application glue. For those of us who need to get stuff done in the scientific visualization space, and have no patience for Java or random obscure newish languages like Scala and Lua.
CSS: Stylesheet language of all websites. Hard to avoid knowing about. A bit unlovely.
Javascript: Popular browser-side programming language implemented in all major browsers, equivalent to Python, loosely Java-like but without all the mind-numbing type checking and package hierarchies (at first glance).
Ajax: Method of communicating between client-side Javascript and a server-side application (in Python in my case), to update an element of a webpage without having to redraw and reload the entire webpage.
Bootstrap: A library of widgets and good stuff, originally programmed at Twitter, which is open source and has things like stars for Like buttons.
Node.js: Server-side implementation of Javascript, moving some of the rendering load for complicated pages back to the server. (Though the initial intention of browser-side Javascript was to make things faster on the client.)
Django: Python-based web application development package. Features a built-in ORM that makes it easy to bind classes to database tables and, via migrations, incorporate changes in class structure during development.
React.js: A way of getting much fancier widgets than vanilla Django, in particular the Editable Table. Requires some tedious and tricky integration to make it play with Django.
Amazon Elastic Compute: Web server in the cloud without compromising my home computer, for modest cost.
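As a minimal illustration of how the Ajax piece of this stack fits together, here is a framework-agnostic sketch: plain Python standing in for a Django view, with an in-memory dict standing in for the SQLite table. The `like_endpoint` name and the data are invented for the example.

```python
import json

# In-memory stand-in for the SQLite table a real Django view would query.
LIKES = {"post-1": 0}

def like_endpoint(post_id):
    """Server-side handler an Ajax call would hit: update one row and
    return a small JSON payload so client-side Javascript can redraw a
    single widget (e.g. a Bootstrap star) without reloading the page."""
    LIKES[post_id] = LIKES.get(post_id, 0) + 1
    return json.dumps({"post_id": post_id, "likes": LIKES[post_id]})

# Browser side would be roughly: fetch('/like/post-1').then(r => r.json())
print(like_endpoint("post-1"))
```

The point of the pattern is that the server returns data, not HTML; the page updates one element in place.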
# Series the wife and I finished watching
The jig is almost up now for Veep and Silicon Valley. We closed down Girls. We’re waiting for Game of Thrones to start up again. Big Little Lies wasn’t to my taste, so we didn’t get far on that one. Orange is the New Black held us for a while, but then I got tired of it and there’s a season and a half or so I haven’t seen. We finished Homeland and are between seasons. We just finished Better Call Saul and are sadly beginning the long wait for renewal and the next season.

We finished Crashing and Master of None, neither of which is guaranteed a new season. I finished Archer Season 8. That’s a show that seems to have run out of variations on a theme, though I could be wrong. I’ll be surprised to see it back, but will watch it if it does return. It’s a one-trick pony but it works for me. I finished Westworld.

We are going back to House of Cards; we’ve got a season and a half left. I quit that a while ago because, prior to the most recent election, I just thought it was too far-fetched. Now with a Queens landlord as President and his stripper wife as First Lady, not so much. Even if Hillary won, that would still have played into the House of Cards theme. Only a Bernie win would have broken the mold. We live in strange times.
# Django tables2 list view does not support filtering on model properties
Which kind of sucks. If you want to filter a column, it either has to be an aggregate function over all rows, or it has to be stored in that table or reachable via a relation path from that table. It can’t be a computed function of the row. Which kind of sucks, because it means that any row-level functional properties have to be maintained when you do a row save.
And thus were spent 4 hours of my Sunday.
You’re welcome.
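A sketch of the resulting pattern, in plain Python rather than Django (the `Order`/`total` names are invented for illustration): the row-level property is computed and stored at save time, so the table layer only ever filters on a stored column.

```python
# Workaround for table filters that can't see model properties:
# persist the computed value on every save.
class Order:
    def __init__(self, unit_price, quantity):
        self.unit_price = unit_price
        self.quantity = quantity
        self.total = None  # stored column the table can filter on

    def save(self):
        # maintain the functional property at save time
        self.total = self.unit_price * self.quantity
        return self

orders = [Order(10, 3).save(), Order(5, 2).save()]
# a "column filter" is now just a query over the stored field
expensive = [o for o in orders if o.total > 20]
print(len(expensive))
```

In real Django the same effect can be had by overriding `Model.save()` to fill a denormalized field, or by annotating the queryset.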
# What’s up with AWS Cognito and Django?
I posted questions on Stack Overflow and the AWS Forum. No answer. I even texted people on LinkedIn who are machers at AWS. Nada. Not impressed! Sad!
# Refined photon question, posted to Stack Exchange, let’s see if it gets crushed or discarded
Posted on Stack Exchange:
Mark Andrew Smith’s PhD thesis from 1994 examines relativistic cellular automata models. A 1999 paper by Ostoma and Trushyck also examines this topic. One topic not discussed is the information required in a cell to represent photons in transit. Suppose we have cells arrayed in a cube so that each cell has 26 neighbors, and suppose there are $N$ cells in the simulation. Then it requires $\log{N}$ bits to represent a cell location. If a photon in motion is currently in a cell, its direction can be represented by the location of the farthest cell it will reach on its straight-line trajectory. Any cell can originate a photon and can receive photons passing through from any other cell. So each cell must be able to represent $N \log{N}$ bits of information, to represent all photons in transit from all possible sources.
Question: Is there any schema that could represent the set of all photons passing through a cell using less information, with reasonable fidelity?
Question: Photons are bosons and so are not subject to the Pauli Exclusion Principle; any number of photons can occupy a single point in space. In the limit (real physical space), does each point in space contain an infinite number of photons? This would require infinite bits to represent, and storage of infinite bits requires infinite energy. If so, does this pose a challenge to the idea, expressed in Fredkin’s Digital Philosophy, that the universe is in fact a cellular automaton, with the limiting speed of light simply coinciding with the “clock speed” of the automaton, i.e. the rate at which photons can move from one cell to the next?
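The counting argument in the first question can be made concrete with an assumed cube size (the 1024-cell edge below is illustrative, not from the thesis):

```python
import math

side = 1024                                  # assumed cells per edge (illustrative)
N = side ** 3                                # total cells in the cube: 2**30
bits_per_location = math.ceil(math.log2(N))  # bits to name one cell location
bits_per_cell = N * bits_per_location        # worst case: one photon from every source
print(bits_per_location)
```

For this cube, naming a cell takes 30 bits, and the worst-case per-cell bookkeeping of N·log N bits is already about 4 GB — which is the force of the question.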
# Correct reproduction of BDM
Someone attempted to reproduce BDM, had problems and posted on CodeReview StackExchange asking for insight. The dummies there criticized the white space and variable names in his code. I found someone’s blog post with a correct answer and posted it. Sanctimonious and clueless lifers on the site deleted the information. The rules of StackExchange pretty much guarantee that narrow-minded lifers, similar to Wikipedia edit patrollers, will defend StackExchange against any useful content. Oh well. Here’s my answer:
OP is trying to write a Python program to reproduce a claimed calculation result of Bueno De Mesquita (BDM). There is another attempt to reproduce this calculation, in Python, by David Masad, “Replicating a replication of BDM”. Masad provided Python code, and also showed an approximately 20% divergence in the median score, starting from the same example, the same inputs and the same references. Jeremy McKibben-Sanders then replicated the model, with results matching BDM. Masad added a new post to discuss the coding issues which led him awry. Reading those posts and their code and comparing with the above code will lead to a correct diagnosis for the above code.
https://www.groundai.com/project/efficient-generation-of-twin-photons-at-telecom-wavelengths-with-10-ghz-repetition-rate-tunable-comb-laser/
# Efficient generation of twin photons at telecom wavelengths with 10 GHz repetition-rate tunable comb laser
## Abstract
Efficient generation and detection of indistinguishable twin photons are at the core of quantum information and communications technology (Q-ICT). These photons are conventionally generated by spontaneous parametric down conversion (SPDC), which is a probabilistic process and hence occurs at a limited rate, restricting wider applications of Q-ICT. To increase the rate, one had to excite SPDC with higher pump power, which inevitably produced more unwanted multi-photon components, harmfully degrading quantum interference visibility. Here we solve this problem by using a recently developed 10 GHz repetition-rate-tunable comb laser Sakamoto et al. (2008); Morohashi et al. (2012), combined with a group-velocity-matched nonlinear crystal Jin et al. (2013a, 2014) and superconducting nanowire single photon detectors Miki et al. (2013); Yamashita et al. (2013). They operate at telecom wavelengths more efficiently and with less noise than conventional schemes, which typically operate at visible and near-infrared wavelengths, generated by a 76 MHz Ti:sapphire laser and detected by Si detectors Lu et al. (2007); Huang et al. (2011); Yao et al. (2012). We observed high interference visibilities which are free from pump-power-induced degradation. Our laser, nonlinear crystal, and detectors constitute a powerful tool box which will pave the way to implementing quantum photonic circuits with a variety of good and low-cost telecom components, and will eventually realize scalable Q-ICT in optical infrastructures.
## I Introduction
Since the first experimental realization of quantum teleportation Bouwmeester et al. (1997), many experiments with multiphoton entanglement have been demonstrated Lu et al. (2007); Pan et al. (2012), currently reaching eight photons by employing multiple SPDC crystals Huang et al. (2011); Yao et al. (2012). In order to increase the scale of entanglement further, the generation probability per SPDC crystal must be drastically improved without degrading the quantum indistinguishability of the photons. Unfortunately, however, a dilemma always exists in SPDC: higher pump power is required for higher generation probability, while it degrades quantum interference visibility due to unwanted multi-pair emissions, leading to increased error rates in entanglement-based quantum key distribution (QKD) Fujiwara et al. (2014) and photonic quantum information processing Knill et al. (2001).
When SPDC sources are used, the 2-fold coincidence counts (CC) can be estimated as
CC = f p η²,  (1)
where f is the repetition rate of the pump laser, p is the generation probability of one photon pair per pulse, and η is the overall efficiency, which is the product of the collecting efficiency of the whole optical system and the detecting efficiency of the detectors. The value of p should be restricted to less than 0.1, so that the effect of unwanted multi-pair emissions is negligible; the pump power is therefore tuned so that p < 0.1.
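The trade-off behind Eq. (1) can be illustrated numerically; the overall efficiency and target coincidence rate below are assumed values for the sketch, not figures from the paper.

```python
def coincidence_rate(f, p, eta):
    """Two-fold coincidence rate CC = f * p * eta**2 (Eq. 1)."""
    return f * p * eta ** 2

eta = 0.25          # assumed overall efficiency (illustrative)
target_cc = 50e3    # desired coincidences per second (illustrative)

# For a fixed target CC, a higher repetition rate f permits a lower
# pair probability p per pulse, which suppresses multi-pair emission.
for f in (76e6, 2.5e9, 10e9):
    p = target_cc / (f * eta ** 2)
    print(f"f = {f:.2e} Hz -> p = {p:.2e} per pulse")
```

This is exactly why raising f, rather than the pump power, is the attractive knob: the same counting rate is reached at far smaller p.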
The value of p is not high in conventional photon sources. A standard technology is based on SPDC at visible and near-infrared wavelengths using a BBO crystal pumped by the second harmonic of femtosecond laser pulses from a Ti:sapphire laser, whose repetition rate is 76 MHz Lu et al. (2007); Huang et al. (2011); Yao et al. (2012). In this case, the probability p had not been able to go beyond 0.06, because the second harmonic power was limited to 300-900 mW for a fundamental laser power of 1-3 W. Therefore, recent efforts have been focused on increasing the pump power Krischek et al. (2010).
Recently, periodically poled KTP (PPKTP) crystals have attracted much attention because they can achieve p = 0.1 (0.6) at telecom wavelengths with a pump power of 80 (400) mW, thanks to the quasi-phase matching (QPM) technique Jin et al. (2013b). When a waveguide structure is employed, p can be substantially higher than in the bulk crystal Tanzilli et al. (2001); Zhong et al. (2012). Unfortunately, however, the constraint p < 0.1 should be met in these cases too. The efficiency η is already maximized by careful alignment in laboratories; e.g., the typical value is about 0.2-0.3 Huang et al. (2011); Yao et al. (2012); Jin et al. (2013b). Thus p and η have almost reached their maxima. The remaining effective way is to improve the repetition rate f of the pump laser.
In this work, we demonstrate a novel photon source pumped by a recently developed repetition-rate-tunable comb laser covering the range 10-0.625 GHz Sakamoto et al. (2008); Morohashi et al. (2012). This laser can operate at relatively low pulse energy while keeping high average power, thanks to the high repetition rate. The low pulse energy results in a reduction of multiple-pair emission, while a high counting rate is expected owing to the high average power. SPDC based on a group-velocity-matched PPKTP (GVM-PPKTP) crystal can achieve very high spectral purity of the constituent photons Jin et al. (2013a, 2014). Furthermore, the photons are detected by state-of-the-art superconducting nanowire single photon detectors (SNSPDs) Miki et al. (2013); Yamashita et al. (2013), which have a much higher efficiency than traditional InGaAs avalanche photodiodes (APDs).
## II Experiment
The experimental setup is shown in Fig. 1.
The picosecond pulses from the comb laser are generated with the following principles Sakamoto et al. (2008); Morohashi et al. (2012). A continuous-wave (cw) light emitted from a single-mode laser diode (LD) with a wavelength of around 1553 nm is led into a Mach-Zehnder modulator (MZM) and is converted to a comb signal with 10 GHz in spacing and 300 GHz in bandwidth. The MZM is fabricated on a LiNbO₃ crystal and is driven by a 10 GHz radio-frequency signal. Because the comb signal has linear chirp, it can be formed into a picosecond pulse train with a repetition rate of 10 GHz by chirp compensation using a single-mode fiber. The comb laser also includes a pulse picker, so that the repetition frequency of the pulse train can be changed in the range of 10-0.625 GHz. In this experiment, we keep the temporal width around 2.5 ps. Fig. 2(a) shows the spectrum of this laser at 2.5 GHz repetition rate. See the Appendix for more spectral and temporal information of this comb laser. For more details of this kind of comb lasers, see Refs. Sakamoto et al. (2007); Morohashi et al. (2008).
Generating a high-power second harmonic light (SHG) is a key point in this experiment. Since the average power per pulse of the comb laser is very low, we choose a periodically poled lithium niobate waveguide (PPLN-WG) for SHG. We tested both 10 GHz and 2.5 GHz repetition rates and found the SHG power at 2.5 GHz was more stable than that at 10 GHz; the data in this experiment are therefore mainly obtained at 2.5 GHz. With the input 2.5 GHz repetition-rate fundamental laser at a power of 500 mW, we obtained 42 mW SHG power. After being filtered by several short-pass filters to cut the fundamental light, we finally achieved a net SHG power of 35 mW. The transmission loss of the PPLN-WG was around 50%. Fig. 2(b) is the spectrum of the 776.5 nm SHG laser, measured by a spectrometer (SpectraPro-2300i, Acton Research Corp.). Interestingly, it can be noticed that the comb structure no longer exists in the SHG spectrum, which may be caused by a sum-frequency-generation process.
For SPDC, the nonlinear crystal used in this experiment is a PPKTP crystal, which satisfies the GVM condition at telecom wavelengths Jin et al. (2013a); König and Wong (2004); Evans et al. (2010); Gerrits et al. (2011); Eckstein et al. (2011). Thanks to the GVM condition, the spectral purity is much higher at telecom wavelengths than at visible wavelengths Jin et al. (2014). This spectrally pure photon source is very useful for multi-photon interference between independent sources Mosley et al. (2008); Jin et al. (2011, 2013c). Figure 2(c, d) shows the observed spectra of the signal and idler photons, measured by a spectrometer (SpectraPro-2500i, Acton Research Corp.). The FWHMs of the twin photons are about 1.2-1.3 nm, similar to the spectral width of the photons pumped by the 76 MHz laser Jin et al. (2013a).
Our superconducting nanowire single photon detectors (SNSPDs) have a system detection efficiency (SDE) of around 70% with a dark count rate (DCR) less than 1 kcps Miki et al. (2013); Yamashita et al. (2013); Jin et al. (2013b). The SNSPD also has a wide spectral response range that covers at least from 1470 nm to 1630 nm wavelengths Jin et al. (2013b). The measured timing jitter and dead time (recovery time) were 68 ps Miki et al. (2013) and 40 ns Miki et al. (2007).
The performance with the 2.5 GHz source is evaluated in terms of a signal to noise ratio (SNR) Broome et al. (2011) and a Hong-Ou-Mandel (HOM) interference Hong et al. (1987), in comparison with the 76 MHz laser. The SNR test is carried out in the detecting configuration of Fig. 1(a), while the HOM test is performed in that of Fig. 1(b).
## III Result 1: Signal-to-noise ratio test
The SNR is defined as the ratio of the single-pair emission rate to the double-pair emission rate Broome et al. (2011). These rates can be evaluated from Time of Arrival (ToA) data, which are shown in Fig. 3(a-d).
Each data set consists of a main peak and side peaks, and we define the peak counts as the value of each peak. The side peaks are not visible in Fig. 3(a, b) because the resolution of the detector system (0.5 ns, as seen in the inset in (a)) is comparable to the pulse interval of the 2.5 GHz laser (0.4 ns); so we take the averaged maximal counts as the side peak values. The side peaks are recorded when a second SPDC event (in the stop channel) occurs conditioned on a first SPDC event (in the start channel) occurring at the main peak. Therefore, the side peaks correspond to the rate of 2-pair components in SPDC, while the main peaks correspond to the rate of 1-pair plus 2-pair components. So the SNR can be calculated in dB as
SNR = 10 log₁₀[(main peak − side peak)/side peak].  (2)
The measured SNR are 42.0 dB, 39.0 dB, 34.6 dB, 23.0 dB for Fig. 3(a-d), respectively.
Theoretical calculation unveils that the SNR is proportional to the inverse of average photon numbers per pulse at a lower pump power Broome et al. (2011). This claim can be experimentally verified by comparing Fig. 3(c) and (d) with the 76 MHz laser. When the pump power increases from 2 mW to 30 mW, the SNR is decreased by 34.6 - 23.0 = 11.6 dB. It agrees well with the 30 mW / 2 mW = 15 times (11.7 dB) increase of average photon number per pulse.
Next, we compare the result in Fig. 3(b) and (d), so as to confirm the validity of the definition for side peak values in Fig. 3(a) and (b). At 30 mW pump power, the coincidence counts are 48 kcps and 56 kcps for 2.5 GHz and 76 MHz laser, respectively, as seen from Fig. 3(b) and (d). Then the average photon numbers per pulse are estimated to be 0.00021 and 0.0079, correspondingly. The average photon pair per pulse for the 2.5 GHz laser is 0.0079 / 0.00021 = 37.6 times (15.8 dB) lower than that of the 76 MHz laser. Recall the SNR difference between 2.5 GHz and 76 MHz of 39.0 - 23.0 = 16.0 dB. This consistency verifies the validity of the definition for side peak values in Fig. 3(a) and (b).
Finally, we estimate the SNR values for the case of the 2.5 GHz laser at 2 mW in Fig. 3(a). The SNR at 2 mW should ideally increase by 11.7 dB (15 times), from 39.0 dB (30 mW in Fig. 3(b)) to 50.7 dB. Actually, however, the measured SNR is only 42.0 dB. This discrepancy is mainly due to the dark counts by the detectors and the accidental counts by stray photons.
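The bookkeeping of this section can be sketched in a few lines, using Eq. (2) and the consistency check quoted above (all numbers are taken from the text):

```python
import math

def snr_db(main_peak, side_peak):
    """Eq. (2): SNR = 10*log10((main - side)/side), in dB."""
    return 10 * math.log10((main_peak - side_peak) / side_peak)

# Consistency check from the text: the average photon pairs per pulse for
# the 2.5 GHz laser (0.00021) vs the 76 MHz laser (0.0079) differ by
# ~15.8 dB, close to the measured SNR difference 39.0 - 23.0 = 16.0 dB.
ratio_db = 10 * math.log10(0.0079 / 0.00021)
print(round(ratio_db, 1))  # 15.8
```

Since SNR scales with the inverse of the average photon number per pulse, the dB gap in pair probability should match the dB gap in SNR, which it does to within experimental accuracy.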
## IV Result 2: Hong-Ou-Mandel interference test
We then carried out the HOM interference test to evaluate the performance of the twin photon source. We first worked with 30 mW pump power for the 2.5 GHz and 76 MHz repetition-rate lasers, and achieved raw visibilities of 96.4 ± 0.2% and 95.9 ± 0.1%, respectively, as shown in Fig. 4(a, b).
The triangle-shape of the HOM dip in Fig. 4(a, b) is caused by the group-velocity matching condition in PPKTP crystal at telecom wavelengths Kuzucu et al. (2005); Shimizu and Edamatsu (2009). The widths of the dips in Fig. 4 (a, b) are similar, around 1.33 mm (4.4 ps), since the width of the dip is determined by the length of the crystal Ansari et al. (2014). The high visibilities in Fig. 4 confirmed the high indistinguishability of the generated photons.
To compare the different performance of the 2.5 GHz and 76 MHz lasers, we repeated the HOM interference test at different pump powers. Without subtracting any background counts, we compare the raw visibilities of the HOM dip at different pump powers in Fig. 5.
At a low pump power of 2.5 mW, the 76 MHz laser has a visibility of 97.4 ± 0.4%, slightly higher than the result with the 2.5 GHz laser, 96.5 ± 0.6%. At 30 mW, the average photon numbers per pulse were 0.00028 and 0.0092 for the 2.5 GHz and 76 MHz lasers, respectively. Note that the average photon numbers per pulse in the HOM interference test were slightly higher than in the ToA test, because we slightly improved the experimental conditions for the HOM test.
It is noteworthy that the visibilities with the 76 MHz laser decrease rapidly as the pump power increases. In contrast, the visibilities with the 2.5 GHz laser show almost no decrease up to 35 mW, the maximum SHG power we achieved in the experiment. To fit this experimental data, we construct a theoretical model. In this model, the transmittance losses of the signal and idler photons are effectively described by two beam splitters, and the mode-matching efficiency between the signal and idler is also represented by two beam splitters. The transmittance loss is obtained from the experimental conditions, while the mode-matching efficiency is chosen so as to fit the experimental data. The numerical analysis suggests that the HOM visibility is extremely sensitive to the mode-matching efficiency; however, it is not easy to estimate its experimental value with enough accuracy. In Fig. 5, the data are fitted with mode-matching values of 0.9828 and 0.9878 for the 2.5 GHz laser and 76 MHz laser, respectively. The higher value for the 76 MHz laser is reasonable, because the indistinguishability of the twin photons generated by the 76 MHz laser is slightly better than that by the 2.5 GHz laser, which can be roughly checked from the spectra of the signal and idler in Fig. 2(c) and (d). Once the transmittance efficiency and the mode-matching efficiency are fixed, the HOM visibilities are determined only by the average photon pairs per pulse (i.e., the generation probability of one pair per pulse). The low value for the 2.5 GHz laser guarantees its high visibilities at high pump powers. In Fig. 5, the theoretical model fits our experimental data well. See more details of our model in the Appendix.
## V Discussion and Outlook
With the theoretical model (see the Appendix), we further calculate the visibilities at high pump powers, as shown in Tab. 1. It is interesting to note that at pump powers up to 3 W, the visibility with the 76 MHz laser will decrease to 62.4%, while the 2.5 GHz laser can still keep the visibility higher than 90%, mainly thanks to the low average photon number per pulse. To experimentally demonstrate these high visibilities at high pump powers in the future, we could upgrade the PPKTP bulk crystal to a PPKTP waveguide. Also, the SHG power of the comb laser at 10 GHz repetition rate needs to be improved. See the Appendix for the HOM interference of this comb laser at 10 GHz repetition rate.
Nevertheless, our experimental results in Fig. 5 have clearly shown the non-degradation of HOM visibilities at high pump powers.
We notice that many other schemes have been reported to reduce multi-pair emission. Broome et al. demonstrated a reduction of multi-pair emission by temporally multiplexing the pulsed pump laser by a factor of two Broome et al. (2011). Ma et al. tried to reduce such effects by multiplexing four independent SPDC sources Ma et al. (2011). All the previous methods have a limited effect, because the number of multiplexed units was limited; increasing the number of units makes the setup very complex. The GHz-repetition-rate-laser-pumped photon source in our scheme provides a very simple and effective way to reduce multi-pair emission. In addition, GHz repetition-rate lasers are now commercially available Morris et al. (2014); Gig (); M2l ().
Therefore, our scheme will be a reasonable option for constructing the next generation of twin photon sources with high brightness, low multi-pair emission and high detection efficiency. In the traditional twin photon source technology, a 76 MHz pump laser is compatible with a BBO crystal (with maximum p ≈ 0.06, corresponding to a photon pair generation rate of 5 MHz) and Si avalanche photodiodes (with acceptable maximal photon numbers of 5-10 MHz). In the next generation of twin photon sources, the 10 GHz laser should be combined with high-efficiency crystals, e.g., a PPKTP crystal (or waveguide, with maximum p ≈ 0.1, corresponding to a photon pair generation rate of 1 GHz), and high-speed detectors, e.g., SNSPDs (with acceptable maximal photon numbers of 25-100 MHz). Consequently, we expect a more than tenfold brighter photon source in conjunction with both low multiple-photon-pair production and high spectral purity. Further, the repetition-rate tunability allows us to obtain an optimal generation probability per pulse without sacrificing the photon counting rate.
## VI Conclusion
We have demonstrated a twin photon source pumped by a 10-GHz-repetition-rate-tunable comb laser. The photons are generated from a GVM-PPKTP crystal and detected by highly efficient SNSPDs. The SNR test and HOM interference test with the 2.5 GHz laser showed a high SNR and high visibilities that do not degrade at high pump powers, much better than those obtained with the 76 MHz laser. The high-repetition-rate pump laser, the GVM-PPKTP crystal, and the highly efficient detectors constitute a powerful tool box at telecom wavelengths. We believe our tool box may have a variety of applications in future quantum information and communication technologies.
## Acknowledgements
The authors thank N. Singh for insightful discussion. This work was supported by the Founding Program for World-Leading Innovative R&D on Science and Technology (FIRST).
## Appendix-I
In this part we provide more information about the comb laser at 10 GHz and 2.5 GHz repetition rates. Figure 6 compares the spectra, autocorrelations, and temporal sequences of the comb laser at 10 GHz and 2.5 GHz repetition rates. Figure 7 shows the Hong-Ou-Mandel dip for the comb laser at 10 GHz, with a bandwidth and visibility similar to the results obtained with the laser at 2.5 GHz repetition rate.
## Appendix-II
In this part, we numerically analyze the relationship between photon-pair generation rate (i.e., average photon pair per pulse) and HOM interference visibility.
### .1 The model
Here, we describe a numerical model of the HOM experiment. The model is described in Fig. 8(a) (without delay) and (b) (with delay), where $\eta_A$ and $\eta_B$ represent the transmittances of modes $A$ and $B$ (losses are effectively described by beam splitters), respectively, and $\eta_{D1}$ and $\eta_{D2}$ are the detector efficiencies. The mode mismatch between the signal and idler pulses is directly reflected in the HOM interference visibility. In our model, the mode matching efficiency is effectively represented by two beam splitters with transmittance $\eta_M$ and extra modes $E$ and $F$.
The HOM interference visibility is defined as
$$V=\frac{CC_{\text{mean}}-CC_{\text{min}}}{CC_{\text{mean}}}, \qquad (3)$$

where $CC_{\text{min}}$ and $CC_{\text{mean}}$ are the coincidence count rates with zero delay and with large delay (i.e., with and without interference between the signal and idler), respectively. In the following we derive $CC_{\text{min}}$ and $CC_{\text{mean}}$ separately from our model.
The initial state from the SPDC source is given by a two-mode squeezed-vacuum state
$$|\psi\rangle_{AB}=\sqrt{1-\lambda^2}\sum_{n=0}^{\infty}\lambda^n|n\rangle_A|n\rangle_B, \qquad (4)$$

where $\lambda$ is the squeezing parameter, and $\lambda^2/(1-\lambda^2)$ is the average number of photon pairs per pulse. Let $\hat{V}^{\eta}_{AB}$ be a beam splitting operator on modes $A$ and $B$ with transmittance $\eta$, which transforms the photon number states as
$$\begin{aligned}
\hat{V}^{\eta}_{AB}|n_1\rangle|n_2\rangle&=\frac{1}{\sqrt{n_1!\,n_2!}}\sum_{k_1=0}^{n_1}\sum_{k_2=0}^{n_2}\binom{n_1}{k_1}\binom{n_2}{k_2}(-1)^{k_2}\\
&\quad\times\sqrt{\eta}^{\,n_2+k_1-k_2}\sqrt{1-\eta}^{\,n_1-k_1+k_2}\\
&\quad\times\sqrt{(k_1+k_2)!\,(n_1+n_2-k_1-k_2)!}\\
&\quad\times|k_1+k_2\rangle|n_1+n_2-k_1-k_2\rangle. \qquad (5)
\end{aligned}$$
Applying the beam splitters $\hat{V}^{\eta_A}_{AC}$, $\hat{V}^{\eta_B}_{BD}$, $\hat{V}^{\eta_M}_{AE}$, $\hat{V}^{\eta_M}_{BF}$, and $\hat{V}^{1/2}_{AB}$ onto the two-mode squeezed vacuum (state at X in Fig. 8(a)), we obtain

$$\begin{aligned}
&\hat{V}^{1/2}_{AB}\hat{V}^{\eta_M}_{BF}\hat{V}^{\eta_M}_{AE}\hat{V}^{\eta_B}_{BD}\hat{V}^{\eta_A}_{AC}\,|\psi\rangle_{AB}|0\rangle_C|0\rangle_D|0\rangle_E|0\rangle_F\\
&=\sqrt{1-\lambda^2}\sum_{n=0}^{\infty}\lambda^n\sum_{k_1=0}^{n}\binom{n}{k_1}^{1/2}\eta_A^{k_1/2}(1-\eta_A)^{\frac{n-k_1}{2}}\sum_{k_2=0}^{n}\binom{n}{k_2}^{1/2}\eta_B^{k_2/2}(1-\eta_B)^{\frac{n-k_2}{2}}\\
&\quad\times\sum_{k_3=0}^{k_1}\binom{k_1}{k_3}^{1/2}\eta_M^{k_3/2}(1-\eta_M)^{\frac{k_1-k_3}{2}}\sum_{k_4=0}^{k_2}\binom{k_2}{k_4}^{1/2}\eta_M^{k_4/2}(1-\eta_M)^{\frac{k_2-k_4}{2}}\\
&\quad\times\sum_{k_5=0}^{k_3}\sum_{k_6=0}^{k_4}\binom{k_3}{k_5}\binom{k_4}{k_6}\left(\tfrac{1}{2}\right)^{\frac{k_3+k_4}{2}}(-1)^{k_6}\left(\frac{(k_5+k_6)!\,(k_3+k_4-k_5-k_6)!}{k_3!\,k_4!}\right)^{1/2}\\
&\quad\times|k_5+k_6\rangle_A|k_3+k_4-k_5-k_6\rangle_B|n-k_1\rangle_C|n-k_2\rangle_D|k_1-k_3\rangle_E|k_2-k_4\rangle_F\\
&=\sqrt{1-\lambda^2}\sum_{n=0}^{\infty}\lambda^n\sum_{k_1=0}^{n}\binom{n}{k_1}^{1/2}\eta_A^{k_1/2}(1-\eta_A)^{\frac{n-k_1}{2}}\sum_{k_2=0}^{n}\binom{n}{k_2}^{1/2}\eta_B^{k_2/2}(1-\eta_B)^{\frac{n-k_2}{2}}\\
&\quad\times\sum_{k_3=0}^{k_1}\binom{k_1}{k_3}^{1/2}\eta_M^{k_3/2}(1-\eta_M)^{\frac{k_1-k_3}{2}}\sum_{k_4=0}^{k_2}\binom{k_2}{k_4}^{1/2}\eta_M^{k_4/2}(1-\eta_M)^{\frac{k_2-k_4}{2}}\left(\tfrac{1}{2}\right)^{\frac{k_3+k_4}{2}}\\
&\quad\times\sum_{l=0}^{k_3+k_4}\sum_{k_5=\max\{0,\,l-k_4\}}^{\min\{l,\,k_3\}}(-1)^{l-k_5}\left\{\binom{k_3}{k_5}\binom{k_4}{l-k_5}\binom{l}{k_5}\binom{k_3+k_4-l}{k_3-k_5}\right\}^{1/2}\\
&\quad\times|l\rangle_A|k_3+k_4-l\rangle_B|n-k_1\rangle_C|n-k_2\rangle_D|k_1-k_3\rangle_E|k_2-k_4\rangle_F, \qquad (6)
\end{aligned}$$
where $l=k_5+k_6$ and we have used the relation

$$\binom{k_3}{k_5}\binom{k_4}{k_6}\left(\frac{(k_5+k_6)!\,(k_3+k_4-k_5-k_6)!}{k_3!\,k_4!}\right)^{1/2}=\left\{\binom{k_3}{k_5}\binom{k_4}{k_6}\binom{k_5+k_6}{k_5}\binom{k_3+k_4-k_5-k_6}{k_3-k_5}\right\}^{1/2}. \qquad (7)$$
Note that 50/50 beam splitting should also be applied to modes $E$ and $F$, which will be discussed later. From Eq. (6) we find the joint probability of having $l$, $k_3+k_4-l$, $n-k_1$, $n-k_2$, $k_1-k_3$, $k_2-k_4$ photons in modes $A$-$F$ at X:

$$\begin{aligned}
&P^X_{ABCDEF}(l,\,k_3+k_4-l,\,n-k_1,\,n-k_2,\,k_1-k_3,\,k_2-k_4)\\
&=(1-\lambda^2)\lambda^{2n}\,\eta_A^{k_1}(1-\eta_A)^{n-k_1}\eta_B^{k_2}(1-\eta_B)^{n-k_2}\eta_M^{k_3+k_4}(1-\eta_M)^{k_1+k_2-k_3-k_4}\left(\tfrac{1}{2}\right)^{k_3+k_4}\binom{n}{k_1}\binom{n}{k_2}\binom{k_1}{k_3}\binom{k_2}{k_4}\\
&\quad\times\left\{\sum_{k_5=\max\{0,\,l-k_4\}}^{\min\{l,\,k_3\}}(-1)^{l-k_5}\left\{\binom{k_3}{k_5}\binom{k_4}{l-k_5}\binom{l}{k_5}\binom{k_3+k_4-l}{k_3-k_5}\right\}^{1/2}\right\}^2. \qquad (8)
\end{aligned}$$
The 50/50 beam splitting of modes $E$ and $F$ into $E_A$, $E_B$ and $F_A$, $F_B$ adds extra binomial distribution terms to Eq. (8). The joint probability distribution for the state at the detectors is thus given by

$$\begin{aligned}
&P_{ABCDE_AF_AE_BF_B}(l,\,k_3+k_4-l,\,n-k_1,\,n-k_2,\,k_7,\,k_2-k_4-k_8,\,k_1-k_3-k_7,\,k_8)\\
&=(1-\lambda^2)\lambda^{2n}\,\eta_A^{k_1}(1-\eta_A)^{n-k_1}\eta_B^{k_2}(1-\eta_B)^{n-k_2}\eta_M^{k_3+k_4}(1-\eta_M)^{k_1+k_2-k_3-k_4}\left(\tfrac{1}{2}\right)^{k_1+k_2}\binom{n}{k_1}\binom{n}{k_2}\binom{k_1}{k_3}\binom{k_2}{k_4}\\
&\quad\times\binom{k_1-k_3}{k_7}\binom{k_2-k_4}{k_8}\left\{\sum_{k_5=\max\{0,\,l-k_4\}}^{\min\{l,\,k_3\}}(-1)^{l-k_5}\left\{\binom{k_3}{k_5}\binom{k_4}{l-k_5}\binom{l}{k_5}\binom{k_3+k_4-l}{k_3-k_5}\right\}^{1/2}\right\}^2. \qquad (9)
\end{aligned}$$
The coincidence rate is then obtained by the sum of the joint probability:
$$\begin{aligned}
CC_{\text{min}}&=\sum_{n=0}^{\infty}\sum_{k_1=0}^{n}\sum_{k_2=0}^{n}\sum_{k_3=0}^{k_1}\sum_{k_4=0}^{k_2}\sum_{k_7=0}^{k_1-k_3}\sum_{k_8=0}^{k_2-k_4}\sum_{l=0}^{k_3+k_4}\left\{1-(1-\eta_{D1})^{l+k_2-k_4+k_7-k_8}\right\}\left\{1-(1-\eta_{D2})^{-l+k_1+k_4-k_7+k_8}\right\}\\
&\quad\times P_{ABCDE_AF_AE_BF_B}(l,\,k_3+k_4-l,\,n-k_1,\,n-k_2,\,k_7,\,k_2-k_4-k_8,\,k_1-k_3-k_7,\,k_8). \qquad (10)
\end{aligned}$$
The derivation of $CC_{\text{mean}}$ is rather simple since there is no interference at the 50/50 beam splitter due to the delay. This is illustrated in Fig. 8(b). Note that here we do not need the mode-matching beam splitters $\hat{V}^{\eta_M}$. The two-mode squeezed vacuum from the SPDC source has the joint photon distribution:

$$P_{AB}(n,n)=(1-\lambda^2)\lambda^{2n}. \qquad (11)$$
The beam splitting operations simply spread this distribution in a binomial manner. For example, after the beam splitter $\hat{V}^{\eta_A}_{AC}$, the joint distribution is given by

$$P_{ABC}(n,\,k_1,\,n-k_1)=(1-\lambda^2)\lambda^{2n}\binom{n}{k_1}\eta_A^{k_1}(1-\eta_A)^{n-k_1}. \qquad (12)$$
Applying the $\hat{V}^{\eta_B}_{BD}$ and 50/50 beam splitters in a similar way, we have

$$\begin{aligned}
&P_{AA'BB'CD}(k_3,\,k_2-k_4,\,k_4,\,k_1-k_3,\,n-k_1,\,n-k_2)\\
&=(1-\lambda^2)\lambda^{2n}\binom{n}{k_1}\binom{n}{k_2}\binom{k_1}{k_3}\binom{k_2}{k_4}\eta_A^{k_1}(1-\eta_A)^{n-k_1}\eta_B^{k_2}(1-\eta_B)^{n-k_2}\left(\tfrac{1}{2}\right)^{k_1+k_2}, \qquad (13)
\end{aligned}$$
before the detectors. The coincidence count is then given by
$$\begin{aligned}
CC_{\text{mean}}&=\sum_{n=0}^{\infty}\sum_{k_1=0}^{n}\sum_{k_2=0}^{n}\sum_{k_3=0}^{k_1}\sum_{k_4=0}^{k_2}\left\{1-(1-\eta_{D1})^{k_2+k_3-k_4}\right\}\left\{1-(1-\eta_{D2})^{k_1-k_3+k_4}\right\}\\
&\quad\times P_{AA'BB'CD}(k_3,\,k_2-k_4,\,k_4,\,k_1-k_3,\,n-k_1,\,n-k_2). \qquad (14)
\end{aligned}$$
The HOM visibility in Eq. (3) is thus calculable from Eqs. (10) and (14).
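As a small sanity check of the beam-splitter transformation in Eq. (5) (a sketch we added; the function name and dictionary representation are our own), one can reproduce the textbook Hong-Ou-Mandel effect: for a $|1\rangle|1\rangle$ input and $\eta=1/2$, the coincidence amplitude onto $|1\rangle|1\rangle$ vanishes.

```python
from math import comb, factorial, sqrt

def beam_splitter_amplitudes(n1, n2, eta):
    """Output amplitudes {k: <k, n1+n2-k| V^eta |n1, n2>}, following Eq. (5)."""
    out = {}
    norm = sqrt(factorial(n1) * factorial(n2))
    for k1 in range(n1 + 1):
        for k2 in range(n2 + 1):
            amp = (comb(n1, k1) * comb(n2, k2) * (-1) ** k2
                   * sqrt(eta) ** (n2 + k1 - k2)
                   * sqrt(1 - eta) ** (n1 - k1 + k2)
                   * sqrt(factorial(k1 + k2) * factorial(n1 + n2 - k1 - k2))
                   / norm)
            out[k1 + k2] = out.get(k1 + k2, 0.0) + amp
    return out

amps = beam_splitter_amplitudes(1, 1, 0.5)
# amps[1] = 0: the two photons bunch, so coincidences vanish (the HOM dip).
```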
### .2 Numerical result
The transmittances (efficiencies) of each component in the experiment are summarized in Table 2 (see Fig. 8 for the theoretical model and the corresponding experimental setup in the main text). In fact, the HOM visibility is extremely sensitive to the mode matching factor $\eta_M$. It is, however, not easy to estimate the mode matching factor experimentally with enough accuracy.
In Fig. 9, we plot the numerical results with various $\eta_M$, and the experimental data with the 76 MHz laser. The experimental average photon-pair number is estimated from the experimental count rates. The experimental data fit the theoretical lines well. With the parameters in Table 2, we also calculated the performance of our scheme at high photon-pair generation rates, as shown in Fig. 10 and Table 3. From this simulation, we find several interesting relationships: (1) the visibility is directly determined by the average photon-pair number; (2) the slope of this line is very sensitive to the unbalanced loss in the delay arm and non-delay arm; (3) the y-intercept of this line is very sensitive to the mode matching efficiency.
### References
1. T. Sakamoto, T. Kawanishi, and M. Tsuchiya, Opt. Lett. 33, 890 (2008).
2. I. Morohashi, M. Oikawa, Y. Tamura, S. Aoki, T. Sakamoto, T. Kawanishi, and I. Hosako, in Conference on Lasers and Electro-Optics 2012 (Optical Society of America, 2012) p. CF1N.7.
3. R.-B. Jin, R. Shimizu, K. Wakui, H. Benichi, and M. Sasaki, Opt. Express 21, 10659 (2013a).
4. R.-B. Jin, R. Shimizu, K. Wakui, M. Fujiwara, T. Yamashita, S. Miki, H. Terai, Z. Wang, and M. Sasaki, Opt. Express 22, 11498 (2014).
5. S. Miki, T. Yamashita, H. Terai, and Z. Wang, Opt. Express 21, 10208 (2013).
6. T. Yamashita, S. Miki, H. Terai, and Z. Wang, Opt. Express 21, 27177 (2013).
7. C.-Y. Lu, X.-Q. Zhou, O. Guhne, W.-B. Gao, J. Zhang, Z.-S. Yuan, A. Goebel, T. Yang, and J.-W. Pan, Nat. Phys. 3, 91 (2007).
8. Y.-F. Huang, B.-H. Liu, L. Peng, Y.-H. Li, L. Li, C.-F. Li, and G.-C. Guo, Nat. Commun. 2, 546 (2011).
9. X.-C. Yao, T.-X. Wang, P. Xu, H. Lu, G.-S. Pan, X.-H. Bao, C.-Z. Peng, C.-Y. Lu, Y.-A. Chen, and J.-W. Pan, Nat. Photon. 6, 225 (2012).
10. D. Bouwmeester, J.-W. Pan, K. Mattle, M. Eibl, H. Weinfurter, and A. Zeilinger, Nature 390, 575 (1997).
11. J.-W. Pan, Z.-B. Chen, C.-Y. Lu, H. Weinfurter, A. Zeilinger, and M. Żukowski, Rev. Mod. Phys. 84, 777 (2012).
12. M. Fujiwara, K.-i. Yoshino, Y. Nambu, T. Yamashita, S. Miki, H. Terai, Z. Wang, M. Toyoshima, A. Tomita, and M. Sasaki, Opt. Express 22, 13616 (2014).
13. E. Knill, R. Laflamme, and G. J. Milburn, Nature 409, 46 (2001).
14. R. Krischek, W. Wieczorek, A. Ozawa, N. Kiesel, P. Michelberger, T. Udem, and H. Weinfurter, Nat. Photon. 4, 170 (2010).
15. R.-B. Jin, M. Fujiwara, T. Yamashita, S. Miki, H. Terai, Z. Wang, K. Wakui, R. Shimizu, and M. Sasaki, arXiv:1309.1221 (2013b).
16. S. Tanzilli, H. de Riedmatten, W. Tittel, H. Zbinden, P. Baldi, M. De Micheli, D. B. Ostrowsky, and N. Gisin, Electron. Lett. 37, 26 (2001).
17. T. Zhong, F. N. C. Wong, A. Restelli, and J. C. Bienfang, Opt. Express 20, 26868 (2012).
18. T. Sakamoto, T. Kawanishi, and M. Izutsu, Opt. Lett. 32, 1515 (2007).
19. I. Morohashi, T. Sakamoto, H. Sotobayashi, T. Kawanishi, I. Hosako, and M. Tsuchiya, Opt. Lett. 33, 1192 (2008).
20. F. König and F. N. C. Wong, Appl. Phys. Lett. 84, 1644 (2004).
21. P. G. Evans, R. S. Bennink, W. P. Grice, T. S. Humble, and J. Schaake, Phys. Rev. Lett. 105, 253601 (2010).
22. T. Gerrits, M. J. Stevens, B. Baek, B. Calkins, A. Lita, S. Glancy, E. Knill, S. W. Nam, R. P. Mirin, R. H. Hadfield, R. S. Bennink, W. P. Grice, S. Dorenbos, T. Zijlstra, T. Klapwijk, and V. Zwiller, Opt. Express 19, 24434 (2011).
23. A. Eckstein, A. Christ, P. J. Mosley, and C. Silberhorn, Phys. Rev. Lett. 106, 013603 (2011).
24. P. J. Mosley, J. S. Lundeen, B. J. Smith, P. Wasylczyk, A. B. U’Ren, C. Silberhorn, and I. A. Walmsley, Phys. Rev. Lett. 100, 133601 (2008).
25. R.-B. Jin, J. Zhang, R. Shimizu, N. Matsuda, Y. Mitsumori, H. Kosaka, and K. Edamatsu, Phys. Rev. A 83, 031805 (2011).
26. R.-B. Jin, K. Wakui, R. Shimizu, H. Benichi, S. Miki, T. Yamashita, H. Terai, Z. Wang, M. Fujiwara, and M. Sasaki, Phys. Rev. A 87, 063801 (2013c).
27. S. Miki, M. Fujiwara, M. Sasaki, and Z. Wang, IEEE Trans. Appl. Supercond. 17, 285 (2007).
28. M. A. Broome, M. P. Almeida, A. Fedrizzi, and A. G. White, Opt. Express 19, 22698 (2011).
29. C. K. Hong, Z. Y. Ou, and L. Mandel, Phys. Rev. Lett. 59, 2044 (1987).
30. O. Kuzucu, M. Fiorentino, M. A. Albota, F. N. C. Wong, and F. X. Kärtner, Phys. Rev. Lett. 94, 083601 (2005).
31. R. Shimizu and K. Edamatsu, Opt. Express 17, 16385 (2009).
32. V. Ansari, B. Brecht, G. Harder, and C. Silberhorn, arXiv:1404.7725 (2014).
33. X.-S. Ma, S. Zotter, J. Kofler, T. Jennewein, and A. Zeilinger, Phys. Rev. A 83, 043814 (2011).
34. O. J. Morris, R. J. Francis-Jones, K. G. Wilcox, A. C. Tropper, and P. J. Mosley, Opt. Commun. 327, 39 (2014).
35. “Giga optics website,” http://www.gigaoptics.com/.
36. “M2 laser website,” http://www.m2lasers.com/.
100330 | 2018-11-20 00:44:03 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8061211109161377, "perplexity": 1513.1762533807369}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746171.27/warc/CC-MAIN-20181119233342-20181120015342-00471.warc.gz"} |
https://mathoverflow.net/questions/234404/holomorphy-of-a-function-with-values-in-a-hilbert-space | # Holomorphy of a function with values in a Hilbert space
Denote by $\mathbb C^\infty$ the Hilbert space $\ell^2 (\mathbb C)$. Fix $1\leq N,M \leq \infty$, and let $U$ be an open subset of $\mathbb C^N$. Following Mujica's book "complex analysis in Banach spaces", a function $f:U\to \mathbb C ^M$ is called $holomorphic$ if for every $p \in U$ there is a bounded linear map $A:\mathbb C^N \to \mathbb C^M$ such that
$\displaystyle \lim_{h\to 0} \frac{||f(p+h)-f(p)-Ah||}{||h||}=0$.
In Mujica's book, theorem 8.12 says that a function $f:U\to \mathbb C^M$ is holomorphic if and only if, for every bounded linear map $\psi : \mathbb C^M \to \mathbb C$, the function $\psi \circ f$ is holomorphic.
Denote $\pi_j:\mathbb C^M \to \mathbb C$ the map that to every $z\in \mathbb C^M$ associates its $j$-th component. Then $\pi_j$ is a bounded linear map, so $f^j := \pi_j \circ f$ is holomorphic for every $j$.
My question is: is the converse true? I.e. is it true that if every $f^j$ is holomorphic then $f$ is holomorphic?
The answer is obviously yes if $M$ is finite. I think that, if $f$ is continuous, then the answer is yes also in the infinite-dimensional case. Can we drop the continuity hypothesis?
Thank you.
EDIT2: I add my proof attempt of the fact that it suffices to assume $f$ continuous. So, let $f:U\to \mathbb C^\infty$ (with $U\subseteq \mathbb C^N$ and $N\leq \infty$) be continuous such that, for every $j$, $f^j$ is holomorphic. Let $\psi:\mathbb C^\infty \to \mathbb C$ be a bounded linear map: then $\psi$ acts like $z\mapsto \langle z,u\rangle$ for some $u\in \mathbb C^\infty$, so $f$ is holomorphic if and only if the map $z\mapsto \langle f(z),u\rangle$ from $U$ to $\mathbb C$ is holomorphic for every $u\in \mathbb C^\infty$. Now, select $u\in \mathbb C^\infty$ and denote $g_n (z) = \sum _{j=1}^{n} f^j (z) \overline u^j$ and $g=\lim_n g_n$: since every $g_n$ is holomorphic because every $f^j$ is, it suffices to prove that $g_n \to g$ uniformly on compact subsets. But, if $K$ is a compact subset of $U$, then $\|g_n (z) -g(z)\|\leq \|\langle f(z),u^{>n}\rangle\|\leq \|f(z)\|\|u^{>n}\|\leq R_K \|u^{>n}\|$ for $z\in K$, where $u^{>n}$ is the vector of $\mathbb C^\infty$ with $j$-th component equal to $0$ if $j\leq n$, and equal to $u^j$ if $j>n$, and $R_K$ is a positive constant that bounds $\|f\|$ on $K$. Since $\|u^{>n}\|\to 0$ as $n\to \infty$, by generality of $u$, we have the claim.
EDIT3: following this question, I found a counterexample in the case $N=\infty$ and $M=\infty$. Define $f:\mathbb C^\infty \to \mathbb C^\infty$ by setting
$f^1 (z) = z^1$;
$f^2 (z) = f^3 (z) = \frac{1}{\sqrt 2} (z^2 + z^3)$;
$f^4 (z) = f^5 (z) = f^6 (z) = \frac{1}{\sqrt 3} (z^4 + z^5 + z^6)$;
and so on. Then the Jacobian matrix of $f$ is the one given in this example, and does not represent a bounded linear operator.
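A short side computation (our own, under the block structure just described) of why this Jacobian fails to be bounded: the $k$-th diagonal block of the Jacobian is $\frac{1}{\sqrt k}J_k$, where $J_k$ is the $k\times k$ all-ones matrix, and since $\|J_k\|=k$,

$$\left\|\tfrac{1}{\sqrt k}J_k\right\|=\tfrac{1}{\sqrt k}\cdot k=\sqrt k\to\infty \quad (k\to\infty),$$

so the candidate differential has unbounded operator norm and cannot define a bounded linear map on $\ell^2(\mathbb C)$.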
However, the question still remains unanswered for finite $N$ and infinite $M$.
• In general you can trade «continuity» for «ample boundedness», which here should translate as plain local boundedness. That being said, have you any reason to doubt Mujica's theorem? Does the proof seem gappy to you? – Loïc Teyssier Mar 24 '16 at 8:03
• @LoïcTeyssier Mujica's theorem is originally stated for general complex Banach spaces $E,F$ in place of $\mathbb C^N , \mathbb C^M$, and I do not doubt it. My question is if, in this very special case, the hypotesis can be weakened. I ask this because I am reading an old paper (1954) and the author says (more or less) that he doesn't know if what I am now asking is true, but it may be. – Ervin Mar 24 '16 at 8:44
• Ok, sorry, I misunderstood your question (read it too fast actually). I don't know the answer. You might find it in the book "Barroso, Jorge Alberto, mrnumber = "779089", Introduction to holomorphy, North-Holland Mathematics Studies, vol 106, 1985". – Loïc Teyssier Mar 24 '16 at 11:01
• What is the argument with the assumption of continuity? – Ben W Mar 25 '16 at 18:54
• @anonymous I added my attempt. – Ervin Mar 25 '16 at 19:03
No. There are non-holomorphic functions $f:\mathbb D\to \ell^2$ such that all components $f_n=\pi_n\circ f$ are holomorphic. This follows from a general result of Arendt and Nikolski [Vector-valued holomorphic functions revisited. Math. Z. 234 (2000), no. 4, 777–805]:
Theorem 1.5 Let $X$ be a Banach space and $W$ a subspace of $X'$ which does not determine boundedness. Then there exists a function $f : \mathbb D \to X$ which is not holomorphic such that $\varphi \circ f$ is holomorphic for all $\varphi\in W$.
Here, a subspace $W\subseteq X'$ is said to determine boundedness if, for all subsets $B\subseteq X$, the condition $\sup\lbrace |\varphi(x)|: x\in B\rbrace <\infty$ for all $\varphi\in W$ implies that $B$ is bounded. This theorem is applied to $X=\ell^2$ and the linear span $W$ of all projections $\pi_n$. It does not determine boundedness by considering $B=\lbrace ne_n:n\in\mathbb N\rbrace$ with the unit vectors $e_n\in\ell^2$.
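To spell out why this $W$ does not determine boundedness (our own elaboration of the last sentence): every $\varphi\in W$ is a finite combination $\varphi=\sum_{n\leq N}c_n\pi_n$, so on $B=\lbrace ne_n:n\in\mathbb N\rbrace$ we get

$$\sup_{x\in B}|\varphi(x)|=\max_{n\leq N}n|c_n|<\infty,$$

while $\|ne_n\|=n\to\infty$; every functional in $W$ is bounded on $B$ even though $B$ itself is unbounded.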
I assume that $\mathbb C^\infty=\ell^2(\mathbb C)$ means the space of square summable complex-valued sequences to you. Let me also stress that, by definition, validity of your conjecture in the case $N>1$ is equivalent to validity in the case $N=1$, so I will focus on this simpler case.
As a corollary of Vitali's theorem, the following criterion for holomorphy of vector-valued functions is known (Theorem A.7 in Arendt-Batty-Hieber-Neubrander, Vector-valued Laplace Transforms and Cauchy Problems, Birkhäuser 2011)
Let $\Omega\subset \mathbb C$ be open and connected, and let $f:\Omega \to X$ be a locally bounded function. Assume that $W \subset X^*$ is a separating subspace such that $x^*\circ f$ is holomorphic for all $x^* \in W$. Then $f$ is holomorphic.
(If you are interested in the definition of a separating subspace you can check the book, but if $X$ is a separable Hilbert space as in your case, then $X\simeq X^*$ itself is certainly separating.)
• Is the (not closed) subspace of $\mathbb C^\infty$ generated by the $\pi^j$ separating? In this case you have proved that it suffices to assume $f^j$ holomorphic and $f$ locally bounded. I think this locally boundedness and continuity are interchangeable in this case, so what if we do not suppose $f$ continuous? – Ervin Mar 26 '16 at 15:33 | 2021-07-24 11:12:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.964499831199646, "perplexity": 108.98058904169926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150264.90/warc/CC-MAIN-20210724094631-20210724124631-00236.warc.gz"} |
https://holooly.com/solutions-v20/ebers-moll-equations-calculate-the-reverse-bias-voltage-present-on-the-base-emitter-junction-of-an-npn-bjt-when-the-emitter-is-open-circuited-and-a-reverse-bias-is-placed-on-the-base-collector-junctio/ | Products
## Holooly Rewards
We are determined to provide the latest solutions related to all subjects FREE of charge!
Enjoy Limited offers, deals & Discounts by signing up to Holooly Rewards Program
## Holooly Tables
All the data tables that you may search for.
## Holooly Arabia
For Arabic Users, find a teacher/tutor in your City or country in the Middle East.
## Holooly Sources
Find the Source, Textbook, Solution Manual that you are looking for in 1 click.
## Holooly Help Desk
Need Help? We got you covered.
## Q. 6.4
Ebers-Moll Equations
Calculate the reverse-bias voltage present on the base-emitter junction of an npn BJT when the emitter is open-circuited and a reverse bias is placed on the base-collector junction. Assume $\alpha_ F = 0.98, \alpha_R = 0.70, I_{CS} = 1 \times 10^{-13} A, I_{ES} = 7.14 \times 10^{-14}$ A.
## Verified Solution
For this bias condition, the collector current is $I_{CB0}$ as given by Equation 6.4.15. Because the emitter current is zero, Equation 6.4.10a establishes that
$I_{C B 0}=\left.I_{C}\right|_{I_{E}=0}=I_{C S}\left(1-\alpha_{F} \alpha_{R}\right)$ (6.4.15)
$I_E=-I_F+\alpha _RI_R$ (6.4.10a)
$I_F=\alpha _RI_R\approx -\alpha _RI_{CS}=I_{ES}\left(\exp \frac{qV_{BE}}{kT} -1 \right)$
where we have used $I_R\approx -I_{CS}$.
Using the reciprocity relationship (Equation 6.4.8), we solve for $V_{BE}$
$\alpha _FI_{ES}\equiv \alpha_RI_{CS}\equiv I_S$ (6.4.8)
$V_{BE}=\frac{kT}{q} \ln(1-\alpha_F)=-0.10 \ V$
at T = 300 K.
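A quick numeric cross-check of these expressions (a sketch we added; the only constant assumed is the thermal voltage $kT/q \approx 25.85$ mV at 300 K):

```python
import math

kT_over_q = 8.617e-5 * 300.0   # thermal voltage at T = 300 K, ~0.02585 V
alpha_F, alpha_R = 0.98, 0.70
I_CS = 1e-13                   # collector saturation current [A]

V_BE = kT_over_q * math.log(1 - alpha_F)   # reverse bias on the B-E junction
I_CB0 = I_CS * (1 - alpha_F * alpha_R)     # Eq. 6.4.15
print(f"V_BE = {V_BE:.3f} V, I_CB0 = {I_CB0:.2e} A")
```

This prints a V_BE close to the -0.10 V quoted above.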
The reverse bias on the base-emitter junction depends only on $\alpha_F$ for this case of an open-circuited emitter. This dependence occurs because the bias is established by balancing the linking current with the current returned through the back-biased, base-emitter junction diode so that the emitter current is zero. | 2022-08-14 08:40:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 10, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.724460780620575, "perplexity": 8504.498602409596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572021.17/warc/CC-MAIN-20220814083156-20220814113156-00282.warc.gz"} |
https://plainmath.net/15224/expected-values-variances-exponential-random-variables-density-functions | Question
# Find the expected values and variances of the exponential random variables with the density functions given below \frac{1}{4}e^{-\frac{x}{4}}
Random variables
Find the expected values and variances of the exponential random variables with the density functions given below
$$\frac{1}{4}e^{-\frac{x}{4}}$$
2021-06-04
Step 1
it is given that x is a exponential random variables with pdf,
$$f(x)=\frac{1}{4}e^{-\frac{x}{4}}$$
Step 2
Pdf of exponential distribution
$$f(x)=\lambda e^{-\lambda x}, x\geq 0$$

Mean of exponential distribution $$= \frac{1}{\lambda}$$

Variance of exponential distribution $$= \frac{1}{\lambda^{2}}$$
Given pdf, $$f(x)=\frac{1}{4}e^{-\frac{x}{4}}$$
Comparing it with pdf of exponential distribution, $$\lambda =\frac{1}{4}$$
Mean of exponential distribution $$= \frac{1}{\frac{1}{4}}=4$$
Variance of exponential distribution $$= \frac{1}{(\frac{1}{4})^{2}}=16$$ | 2021-08-02 06:31:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8707984685897827, "perplexity": 776.2013982100583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154304.34/warc/CC-MAIN-20210802043814-20210802073814-00076.warc.gz"} |
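A quick Monte Carlo sanity check of these values (a sketch we added; note that Python's `random.expovariate` takes the rate $\frac{1}{4}$, not the mean):

```python
import random

rate = 1 / 4   # pdf (1/4) e^(-x/4)  =>  mean 1/rate = 4, variance 1/rate^2 = 16

random.seed(0)
samples = [random.expovariate(rate) for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# mean is close to 4 and var is close to 16
```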
http://www.hongliangjie.com/2012/12/19/how-to-generate-gamma-random-variables/ | # How to Generate Gamma Random Variables 2
In this post, I would like to discuss how to generate Gamma distributed random variables. Gamma random variate has a number of applications. One of the most important application is to generate Dirichlet distributed random vectors, which plays a key role in topic modeling and other Bayesian algorithms.
A good starting point is a book by Kroese et al. [1] where detailed discussion about how to generate a number of different random distributed variables. In the book (Section 4.2.6), they list the following methods for Gamma distribution:
• Marsaglia and Tsang’s method [2]
• Ahrens and Dieter’s method [3]
• Cheng and Feast’s method [4]
• Best’s method [5]
Here, we focus on Marsaglia and Tsang’s method, which is used in GSL Library and Matlab “gamrnd” command (you can check this by typing “open gamrnd”).
## Gamma Distribution
As there are at least two forms of Gamma distribution, we focus the following formalism of PDF:
\begin{align} f(x; \alpha, \beta) = \frac{\beta^{\alpha} x^{\alpha - 1} e^{-\beta x} }{\Gamma(\alpha)}, \,\,\,\, x \geq 0 \end{align} where $$\alpha > 0$$ is called the shape parameter and $$\beta > 0$$ is called the rate parameter (the inverse of the scale).
## Marsaglia and Tsang’s Method
The algorithm works as follows for $$\mathbf{X} \sim \mbox{Gamma}(\alpha, 1)$$ for $$\alpha \geq 1$$:
1. Set $$d = \alpha - 1/3$$ and $$c = 1/\sqrt{9d}$$.
2. Generate $$Z \sim \mbox{N}(0,1)$$ and $$U \sim \mbox{U}(0,1)$$ independently.
3. If $$Z > -1/c$$ and $$\log U < \frac{1}{2} Z^{2} + d - d V + d \ln V$$, where $$V = (1+cZ)^{3}$$, return $$\mathbf{X} = d \times V$$; otherwise, go back to Step 2.
Two notes:
1. This method can be easily extended to the cases where $$1 > \alpha > 0$$. We just generate $$\mathbf{X} \sim \mbox{Gamma}(\alpha + 1, \beta)$$, then $$\mathbf{X}^{\prime} = \mathbf{X} \times U^{1/\alpha}$$ where $$U \sim \mbox{U}(0,1)$$. Thus, $$\mathbf{X}^{\prime} \sim \mbox{Gamma}(\alpha, \beta)$$. See details in Page 9 of [2].
2. For $$\beta \neq 1$$, we firstly generate $$\mathbf{X} \sim \mbox{Gamma}(\alpha, 1)$$, then $$\mathbf{X}/\beta \sim \mbox{Gamma}(\alpha, \beta)$$.
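For readers who want a quick self-contained version, here is a Python sketch of the complete recipe — the accept-reject step above plus the two notes (the function name and the moment check are our own; the C/Matlab/GSL listings below remain the authoritative implementations):

```python
import math
import random

def gamma_rv(alpha, beta=1.0):
    """Marsaglia-Tsang sampler for the pdf beta^a x^(a-1) e^(-beta x) / Gamma(a)."""
    if alpha < 1.0:
        # Note 1: boost the shape by one, then multiply by U^(1/alpha).
        return gamma_rv(alpha + 1.0, beta) * random.random() ** (1.0 / alpha)
    d = alpha - 1.0 / 3.0
    c = 1.0 / math.sqrt(9.0 * d)
    while True:
        x = random.gauss(0.0, 1.0)
        v = 1.0 + c * x
        if v <= 0.0:
            continue                        # reject Z <= -1/c
        v = v * v * v
        u = random.random()
        if u < 1.0 - 0.0331 * x ** 4:       # fast acceptance (squeeze) test
            return d * v / beta             # Note 2: rescale for beta != 1
        if math.log(u) < 0.5 * x * x + d * (1.0 - v + math.log(v)):
            return d * v / beta

random.seed(42)
samples = [gamma_rv(2.5, 2.0) for _ in range(100_000)]
# the sample mean approaches alpha / beta = 1.25
```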
## Implementations
Here, we discuss some implementation details. We start from the C code originally proposed in [2]:

```c
#include <math.h>

extern float RNOR; /* Normal random variable */
extern float UNI;  /* Uniform random variable */

float rgama(float a)
{
    float d, c, x, v, u;
    d = a - 1. / 3.;
    c = 1. / sqrt(9. * d);
    for (;;) {
        do {
            x = RNOR;
            v = 1. + c * x;
        } while (v <= 0.);
        v = v * v * v;
        u = UNI;
        if (u < 1. - 0.0331 * (x * x) * (x * x)) {
            return (d * v);
        }
        if (log(u) < 0.5 * x * x + d * (1. - v + log(v))) {
            return (d * v);
        }
    }
}
```
which is slightly different from the algorithm proposed above. Again, this does not handle $$1 > \alpha > 0$$ and $$\beta \neq 1$$.
This is the Matlab sample code from [1]:
```matlab
function x=gamrand(alpha,lambda)
% Gamma(alpha,lambda) generator using Marsaglia and Tsang method
% Algorithm 4.33
if alpha>1
    d=alpha-1/3; c=1/sqrt(9*d); flag=1;
    while flag
        Z=randn;
        if Z>-1/c
            V=(1+c*Z)^3;
            U=rand;
            flag=log(U)>(0.5*Z^2+d-d*V+d*log(V));
        end
    end
    x=d*V/lambda;
else
    x=gamrand(alpha+1,lambda);
    x=x*rand^(1/alpha);
end
```
This code does everything.
This is the code snippet extracted from the GSL library:

```c
double
gsl_ran_gamma (const gsl_rng * r, const double a, const double b)
{
  /* assume a > 0 */

  if (a < 1)
    {
      double u = gsl_rng_uniform_pos (r);
      return gsl_ran_gamma (r, 1.0 + a, b) * pow (u, 1.0 / a);
    }

  {
    double x, v, u;
    double d = a - 1.0 / 3.0;
    double c = (1.0 / 3.0) / sqrt (d);

    while (1)
      {
        do
          {
            x = gsl_ran_gaussian_ziggurat (r, 1.0);
            v = 1.0 + c * x;
          }
        while (v <= 0);

        v = v * v * v;
        u = gsl_rng_uniform_pos (r);

        if (u < 1 - 0.0331 * x * x * x * x)
          break;

        if (log (u) < 0.5 * x * x + d * (1 - v + log (v)))
          break;
      }

    return b * d * v;
  }
}
```
Again, this does everything. Note this code is based on a slightly different PDF: $${1 \over \Gamma(a) b^a} x^{a-1} e^{-x/b}$$.
## Reference
[1] D.P. Kroese, T. Taimre, Z.I. Botev. Handbook of Monte Carlo Methods. John Wiley & Sons, 2011.
[2] G. Marsaglia and W. Tsang. A simple method for generating gamma variables. ACM Transactions on Mathematical Software, 26(3):363-372, 2000.
[3] J. H. Ahrens and U. Dieter. Computer methods for sampling from gamma, beta, Poisson, and binomial distributions. Computing, 12(3):223-246, 1974.
[4] R. C. H. Cheng and G. M. Feast. Some simple gamma variate generators. Computing, 28(3):290-295, 1979.
[5] D. J. Best. A note on gamma variate generators with shape parameters less than unity. Computing, 30(2):185-188, 1983.
## 2 thoughts on “How to Generate Gamma Random Variables”
• David Jacobson
The section showing the original C code is badly messed up. Where one would expect an equals sign there is a 5.
In line 5 we have a do-while where the condition is “while(v,50.);” I’m guessing that is supposed to be while(v <= 0) as in line 23 of the GSL library.
And in line 6 we have
if( u,1.-.0331*(x*x)*(x*x) ) return (d*v);
which completely baffles me. | 2018-01-21 14:17:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6465704441070557, "perplexity": 1443.1229950597335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890771.63/warc/CC-MAIN-20180121135825-20180121155825-00251.warc.gz"} |
http://www.isibang.ac.in/~manish/teaching/BMath.algebra1/2015/prac.prob.htm | # Practice problem for Algebra I, B.Math
There are regular homework for this course.
1. Show that a group $G$ in which every element has order 1 or 2 is abelian.
2. Show that a group of order 6 is isomorphic to either $S_3$ or $\mathbb{Z}/6$.
### More problems from Dummit and Foote
4.1:1, 4, 8(a), 10
4.2:4, 8, 10, 14
4.3:5, 6, 8, 13, 19, 22, 30, 35
4.4:3, 5, 6, 12, 18
4.5:14, 16, 23, 25, 29(Use Prop 23), 32, 36, 39.
5.5:1, 6, 16, 18.
6.3:1, 2, 4. | 2018-12-14 15:00:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4192730486392975, "perplexity": 1262.0043948381544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825916.52/warc/CC-MAIN-20181214140721-20181214162221-00591.warc.gz"} |
http://mathhelpforum.com/calculus/173940-how-can-i-integrate-messy-function.html | # Math Help - How can I integrate this messy function?
1. ## How can I integrate this messy function?
I am talking about this function
can someone help me evaluate it?
2. Originally Posted by Riazy
I am talking about this function
can someone help me evaluate it?
The square root part simplifies to 2R.
3. I don't really understand; could you show me how it's done?
Thanks
4. Originally Posted by Riazy
I am talking about this function
can someone help me evaluate it?
Hmmm...either there's a mistake or this is trivial:
$\displaystyle{\left(2R\cos \phi\right)^2+\left(-2R\sin\phi\right)^2=4R^2\left(\cos^2\phi+\sin^2\phi\right)=4R^2}$ , and then
$\displaystyle{\sqrt{\left(2R\cos \phi\right)^2+\left(-2R\sin\phi\right)^2}=2R}$ , so we have
$\displaystyle{2\pi\int\limits^{\pi/2}_0 2R\cos \phi\sin\phi\sqrt{\left(2R\cos \phi\right)^2+\left(-2R\sin\phi\right)^2}\,d\phi=4R^2\cdot 2\pi\int\limits^{\pi/2}_0\cos\phi\sin\phi\,d\phi}$
$\displaystyle{=8R^2\pi\cdot \frac{1}{2}\left[\sin^2\phi\right]^{\pi/2}_0=4R^2\pi}$
Tonio
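A quick numerical check of the closed form above (not part of the original thread) can be done in Python; here $R$ is taken to be 1 for concreteness:

```python
import math

R = 1.0

def integrand(phi):
    # the thread's integrand: 2*pi * 2R*cos(phi)*sin(phi) * sqrt((2R*cos)^2 + (-2R*sin)^2)
    radical = math.sqrt((2 * R * math.cos(phi)) ** 2 + (-2 * R * math.sin(phi)) ** 2)
    return 2 * math.pi * 2 * R * math.cos(phi) * math.sin(phi) * radical

def simpson(f, a, b, n=1000):
    # composite Simpson's rule; n must be even
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

numeric = simpson(integrand, 0.0, math.pi / 2)
exact = 4 * math.pi * R ** 2   # the closed form 4*pi*R^2 derived in the thread
print(abs(numeric - exact) < 1e-6)  # True
```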
5. Originally Posted by Riazy
I don't really understand; could you show me how it's done?
Thanks
If you're attempting an integral like this you must surely have been taught the Pythagorean Identity .... | 2015-05-03 13:36:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9477060437202454, "perplexity": 958.792997482221}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430448950352.27/warc/CC-MAIN-20150501025550-00018-ip-10-235-10-82.ec2.internal.warc.gz"} |
http://timescalewiki.org/index.php?title=Real_numbers&oldid=946 | # Real numbers
The set $\mathbb{R}$ of real numbers is a time scale. In this time scale, all derivatives reduce to the classical derivative and the integrals reduce to the classical integral.
| Quantity | Formula on $\mathbb{R}$ |
| --- | --- |
| Forward jump | $\sigma(t)=t$ |
| Forward graininess | $\mu(t)=0$ |
| Backward jump | $\rho(t)=t$ |
| Backward graininess | $\nu(t)=0$ |
| $\Delta$-derivative | $f^{\Delta}(t)=\displaystyle\lim_{h\rightarrow 0} \dfrac{f(t+h)-f(t)}{h}=f'(t)$ |
| $\nabla$-derivative | $f^{\nabla}(t)=\displaystyle\lim_{h \rightarrow 0} \dfrac{f(t)-f(t-h)}{h}=f'(t)$ |
| $\Delta$-integral | $\displaystyle\int_s^t f(\tau) \Delta \tau = \int_s^t f(\tau) d\tau$ |
| $\nabla$-integral | $\displaystyle\int_s^t f(\tau) \nabla \tau = \int_s^t f(\tau) d\tau$ |
| $h_k(t,s)$ | $h_k(t,s)=\dfrac{(t-s)^k}{k!}$ |
| $\hat{h}_k(t,s)$ | $\hat{h}_k(t,s)=\dfrac{(t-s)^k}{k!}$ |
| $g_k(t,s)$ | $g_k(t,s)=\dfrac{(t-s)^k}{k!}$ |
| $\hat{g}_k(t,s)$ | $\hat{g}_k(t,s)=\dfrac{(t-s)^k}{k!}$ |
| $e_p(t,s)$ | $e_p(t,s)=\exp \left( \displaystyle\int_s^t p(\tau) d\tau \right)$ |
| $\hat{e}_p(t,s)$ | $\hat{e}_p(t,s)=\exp \left( \displaystyle\int_s^t p(\tau) d\tau \right)$ |
| Gaussian bell | $\mathbf{E}(t)=e^{-\frac{t^2}{2}}$ |
| $\sin_p(t,s)$ | $\sin_p(t,s)=\sin\left( \displaystyle\int_s^t p(\tau) d\tau \right)$ |
| $\sin_1(t,s)$ | $\sin_1(t,s)=\sin(t-s)$ |
| $\widehat{\sin}_p(t,s)$ | $\widehat{\sin}_p(t,s)=\sin\left( \displaystyle\int_s^t p(\tau) d\tau \right)$ |
| $\cos_p(t,s)$ | $\cos_p(t,s)=\cos \left( \displaystyle\int_s^t p(\tau) d\tau \right)$ |
| $\cos_1(t,s)$ | $\cos_1(t,s)=\cos(t-s)$ |
| $\widehat{\cos}_p(t,s)$ | $\widehat{\cos}_p(t,s)=\cos \left( \displaystyle\int_s^t p(\tau) d\tau \right)$ |
| $\sinh_p(t,s)$ | |
| $\widehat{\sinh}_p(t,s)$ | |
| $\cosh_p(t,s)$ | |
| $\widehat{\cosh}_p(t,s)$ | |
| Gamma function | $\Gamma_{\mathbb{R}}(x,s)=\displaystyle\int_0^{\infty} \left( \dfrac{\tau}{s} \right)^{x-1}e^{-\tau} d\tau$ |
| Euler-Cauchy logarithm | |
| Bohner logarithm | |
| Jackson logarithm | |
| Mozyrska-Torres logarithm | |
| Laplace transform | $\mathscr{L}_{\mathbb{R}}\{f\}(z;s)=\displaystyle\int_0^{\infty} f(\tau) e^{-z\tau} d\tau$ |
| Hilger circle | |

| 2021-01-17 12:40:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9992764592170715, "perplexity": 4574.463527340038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703512342.19/warc/CC-MAIN-20210117112618-20210117142618-00648.warc.gz"}
https://docs.microej.com/en/latest/PlatformDeveloperGuide/tiny.html | # Tiny application
## Principle
The Tiny application capability of the MicroEJ Core Engine allows building a main application optimized for size. This capability is suitable for environments requiring a small memory footprint.
## Installation
Tiny application is an option disabled by default. To enable the Tiny application capability of the MicroEJ Core Engine, set the property mjvm.standalone.configuration in the configuration.xml file as follows:
<property name="mjvm.standalone.configuration" value="tiny"/>
See section Platform Customization for more info on the configuration.xml file.
## Limitations
In addition to general Limitations:
• The maximum application code size (classes and methods) cannot exceed 256KB. This does not include application resources, immutable objects and internal strings which are not limited.
• The option SOAR > Debug > Embed all type names has no effect. Only the fully qualified names of types marked as required types are embedded. | 2021-01-16 18:01:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19438739120960236, "perplexity": 10244.314627649357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703506832.21/warc/CC-MAIN-20210116165621-20210116195621-00423.warc.gz"} |
https://cracku.in/blog/reasoning-questions-for-ssc-mts-pdf/ | 0
467
# Reasoning Questions For SSC MTS PDF
Download Top-20 SSC MTS Reasoning Questions PDF. Reasoning questions based on those asked in previous years' exam papers are very important for the SSC MTS exam.
Question 1: If 6 # 8 = 10 and 5 # 12 = 13, then 9 # 40 = ?
a) 47
b) 63
c) 41
d) 53
Question 2: If “A” denotes “added to”, “B” denotes “subtracted from”, “C” denotes “multiplied by” and “D” denotes “divided by” then which of the following equation is true ?
a) 12 A 6 B 3 C 4 D 3 = 14
b) 13 B 6 D 3 C 2 A 5 = 12
c) 72 D 18 C 14 B 68 A 10 = – 4
d) 68 D 4 A 6 B 3 C 8 = 0
Question 3: If ” @”denotes “added to”, “#” denotes “multiplied by”, $”\circledR”$ denotes “divided by” and “%” denotes “subtracted from”, then which of the following equation is true?
a) $8@8\circledR$8#8%8=9
b) 42%26$\circledR$13#2@8=46
c) 19%84$\circledR$4@3#4=12
d) 31%4$\circledR$2#19@3=4
Question 4: In a certain code language, “CONGO” is written as “RZPRD” and “TREAT” is written as “ UQGWX”. How is “PHONE” written in that code language ?
a) JNQIJ
b) KMQHK
c) MKQKH
d) LLPIL
Question 5: In a certain code language, “RESTED” is written as “SDTSFC”. How is “BANNED” written in that code language ?
a) CZOMFC
b) ABMODE
c) CZOODE
d) ABMMFC
Question 6: In the following question, select the word which cannot be formed using the letters of the given word.
ENCOURAGING
a) GRAIN
b) RAGING
c) GAUGE
d) ENCOURAGE
Question 7: If ‘P 3 Q’ means ‘Q is daughter of P’, ‘P 5 Q’ means ‘Q is son of P’, ‘P 7 Q’ means ‘P is sister Q’, ‘P 9 Q’ means ‘P is brother of Q’. Which of the following Expression indicates A is nephew of D ?
a) B 9 D 5 C 5 A
b) B 7 D 9 C 5 A
c) B 7 D 7 C 3 A
d) B 7 D 9 C 3 A
Question 8: In a row of people, there are 12 people before Q. There are 4 people between P and Q. There are 15 people between Q and S. If there are 8 people between S and R, then how many minimum people are there in the row ?
a) 29
b) 32
c) 36
d) 37
Question 9: In a row of cars, red car is 14th from left and 23rd from right. How many cars are there in the row ?
a) 36
b) 37
c) 35
d) 34
Question 10: A series is given with one term missing. Select the correct alternative from the given ones that will complete the series:
B, E, I, S, K, ?
a) W
b) X
c) U
d) V
Question 11: A series is given with one term missing. Select the correct alternative from the given ones that will complete the series.
B, G, N, W, ?
a) I
b) G
c) J
d) H
Question 12: Arrange the given words in the sequence in which they occur in the dictionary:
1. Habit
2. Habitat
3. Handle
4. Hammer
5. Harvest
a) 21453
b) 12435
c) 21435
d) 14253
Question 13: Arrange the given words in the sequence in which they occur in the dictionary:
1. Reputation
2. Reptile
3. Republic
4. Replicate
5. Repository
a) 42531
b) 43251
c) 45312
d) 45231
Question 14: In the following question, select the odd number-pair from the alternatives:
a) 8-72
b) 6-42
c) 12-156
d) 4-12
Question 15: In the following question, select the odd number-pair from the given alternatives:
a) 15-45
b) 9-29
c) 31-93
d) 41-123
Question 16: In the following question, select the odd letters from the given alternatives
a) AEI
b) IMQ
c) EIL
d) MQU
Question 17: In the following question, select the odd letters from the given alternatives:
a) GEF
b) MLK
c) IKJ
d) VWY
Question 18: In the following question, select the odd word pair from the given alternatives:
a) Wheat – Rabi
b) Rice – Rabi
c) Maize – Kharif
d) Barley – Rabi
Question 19: In the following question, select the odd word pair from the given alternatives:
a) Speaker – Sound
b) Bulb – Light
c) Fire – Heat
d) Earth – Land
Question 20: In the following question, select the related number from the given alternatives:
36 : 27 :: 196 : ?
a) 257
b) 89
c) 173
d) 343
The pattern followed is : $a$ # $b=c$ $\equiv a^2+b^2=c^2$
Eg :- $(6)^2+(8)^2=36+64=100=(10)^2$
and $(5)^2+(12)^2=25+144=169=(13)^2$
Similarly, $(9)^2+(40)^2=81+1600=1681=(41)^2$
=> Ans – (C)
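The $a^2+b^2=c^2$ pattern can be checked mechanically; here is an illustrative Python sketch (not part of the original solution):

```python
import math

def hash_op(a, b):
    # the puzzle's rule: a # b = c, where a^2 + b^2 = c^2
    return math.isqrt(a * a + b * b)

print(hash_op(6, 8), hash_op(5, 12), hash_op(9, 40))  # 10 13 41
```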
(A) : 12 A 6 B 3 C 4 D 3 = 14
$\equiv12+6-3\times4\div3=14$
L.H.S. = $12+6-(\frac{3\times4}{3})$
= $18-4=14=$ R.H.S.
=> Ans – (A)
(A) : $8@8\circledR$8#8%8=9
$\equiv8+8\div8\times8-8$
L.H.S. = $8+8-8=8\neq$ R.H.S.
(B) : 42%26$\circledR$13#2@8=46
$\equiv42-26\div13\times2+8$
L.H.S. = $42-4+8=46=$ R.H.S.
=> Ans – (B)
“CONGO” is written as “RZPRD” and “TREAT” is written as “ UQGWX”
The pattern followed is :
Similarly, for PHONE : KMQHK
=> Ans – (B)
Expression : “RESTED” is written as “SDTSFC”
The pattern followed is :
Similarly, for BANNED : CZOMFC
=> Ans – (A)
The word ENCOURAGING contains only one 'E', while ENCOURAGE requires two, thus the term ENCOURAGE cannot be formed.
=> Ans – (D)
(A) : B 9 D 5 C 5 A
B is brother of D and C is son of D.
Also, A is son of C, => A is grandson of D, not nephew.
(B) : B 7 D 9 C 5 A
B is sister of D and D is brother of C.
Also, A is son of C, => A is nephew of D.
=> Ans – (B)
There are 12 people before Q, => Let us assume Q is at 13th position from left end.
There are 15 people between Q and S, => S is at 29th position.
There are 4 people between P and Q, => P can be either at 8th or 18th position from left.
There are 8 people between S and R, => R is at 20th position.
Thus, there are minimum of 29 people in the row.
=> Ans – (A)
Red car is 14th from left and 23rd from right
=> Total number of cars = $(14+23)-1=36$
=> Ans – (A)
The letters corresponding to the above series are : B(2), E(5), I(9), S(19), K(11 or 37)
2 $\times2+1=5$
5 $\times2-1=9$
9 $\times2+1=19$
19 $\times2-1=37$ (or 11)
11 $\times2+1=23\equiv W$
=> Ans – (A)
B (+5) = G (+7) = N (+9) = W (+11) = H
=> Ans – (D)
As per the order of dictionary,
= Habit -> Habitat -> Hammer -> Handle -> Harvest
$\equiv$ 12435
=> Ans – (B)
As per the order of dictionary,
= Replicate -> Repository -> Reptile -> Republic -> Reputation
$\equiv$ 45231
=> Ans – (D)
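Dictionary-order questions like these two reduce to a lexicographic sort; a small illustrative Python sketch:

```python
def dictionary_order(words):
    # words are given in their numbered order (1-based);
    # return the digit string of indices in dictionary (lexicographic) order
    order = sorted(range(len(words)), key=lambda i: words[i])
    return ''.join(str(i + 1) for i in order)

print(dictionary_order(['Habit', 'Habitat', 'Handle', 'Hammer', 'Harvest']))
# 12435
print(dictionary_order(['Reputation', 'Reptile', 'Republic', 'Replicate', 'Repository']))
# 45231
```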
The numbers are of the form : $n^2+n$
(A) : $(8)^2+8=72$
(B) : $(6)^2+6=42$
(C) : $(12)^2+12=156$
(D) : $(4)^2+4=20\neq12$
=> Ans – (D)
If we divide the second number by the first number, the quotient should be 3.
(A) : $\frac{45}{15}=3$
(B) : $\frac{29}{9}=3.22$
(C) : $\frac{93}{31}=3$
(D) : $\frac{123}{41}=3$
=> Ans – (B)
(A) : A (+4) = E (+4) = I
(B) : I (+4) = M (+4) = Q
(C) : E (+4) = I (+3) = L
(D) : M (+4) = Q (+4) = U
=> Ans – (C)
In the first three options, the given combinations are groups of consecutive letters from English alphabetical series, i.e. (EFG), (KLM), (IJK), hence VWY is the odd one.
=> Ans – (D)
First is the type of second, Wheat and barley are rabi crops, while rice and maize are kharif crops, hence Rice – Rabi is the odd one.
=> Ans – (B)
First is the source of second, sound comes from speaker, light from a bulb and fire provides heat, hence Earth – Land is the odd one.
=> Ans – (D)
The pattern followed is = $(n)^2:(\frac{n}{2})^3$
Eg :- $(6)^2:(\frac{6}{2})^3=36:27$
Similarly, $(14)^2=196$
=> $(\frac{14}{2})^3=(7)^3=343$ | 2019-07-21 02:30:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7414125800132751, "perplexity": 2436.238552677557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526818.17/warc/CC-MAIN-20190721020230-20190721042230-00049.warc.gz"} |
https://wiki.physik.fu-berlin.de/linux-minidisc/faq | faq
# FAQ
I haven't had any time to answer all the questions yet; I'm just very busy recently, sorry. Please check back later.
How can I contribute to this project?
You can either contribute code (please look at the Tasks, there is a lot to do), translations or artwork. We really appreciate any input, be it suggestions, complaints or flaming.
What type of MiniDisc devices are compatible with this software?
Generally, all MiniDisc devices with a USB connector (either NetMD or HiMD) are supported. Older models without USB do not allow any transfer of tracks to or from the MiniDisc besides using S/PDIF (TOSLINK or co-axial) or analog inputs/outputs.
When I import an atrac3+ (.oma extension) encoded track uploaded with QHiMDTransfer into SonicStage, why can the track not be converted to wav or transferred?
Fortunately, SonicStage is no longer required to convert unencrypted atrac3+ tracks. To convert .oma files to wav you can use MarCNet's HiMD Renderer.
At the moment a functional and open atrac3+ decoder is not available, but work on a decoder has been initiated by the ffmpeg project. There are ffmpeg decoders for atrac1 and atrac3.
It is a good idea to back up the folder (C:\Program Files\Common Files\Sony Shared\OpenMG, assuming C: is your system drive) with DRM information (where the proc files are stored). The folder also contains the keys required to decrypt any encrypted tracks within SonicStage. From a post regarding proc files on the mailing list by Thomas Arp:
The exact place is: C:\Program Files\Common Files\Sony Shared\OpenMG\procfile\xx (when C: is your system drive) There are a couple of subdirectories named by the last 2 digits of the contentID (xx in the path above). The name of the procfile is the contentID with .opf extension. If Sonicstage cannot find the file when importing atrac files an error message will be stored in C:\Program Files\Common Files\Sony Shared\OpenMG\omglog.txt. It will look like this (my system drive is F:): 2010/03/16 23:07:09 [F:\Programme\Sony\SonicStage\Omgjbox.exe] 00001608:00000ae8 OMGException(0x106): <42614> - F:\Programme\Gemeinsame Dateien\Sony Shared\OpenMG/procfile/3D/0203000002000121B40E00001836E9D78AD3673D.opf
Is there an ATRAC3plus decoder available for GNU/Linux or MacOSX?
No, but work on a decoder has been initiated by the ffmpeg project. This decoder is expected to be released in mid-2011 and will certainly also be part of VideoLAN's VLC. There are already ffmpeg decoders for ATRAC1 (ATRAC-SP) and ATRAC3. Both types can also be played back in VideoLAN's VLC.
Why can't ffmpeg identify all of my ATRAC1 (ATRAC-SP) tracks that I have uploaded with the netmd python script?
\$ ffmpeg -i 01.aea
FFmpeg version SVN-r22323, Copyright (c) 2000-2010 the FFmpeg developers
built on Mar 8 2010 11:34:33 with gcc 4.4.1 [gcc-4_4-branch revision 150839]
configuration:
libavutil 50.11. 0 / 50.11. 0
libavcodec 52.57. 0 / 52.57. 0
libavformat 52.55. 0 / 52.55. 0
libavdevice 52. 2. 0 / 52. 2. 0
libswscale 0.10. 0 / 0.10. 0
[NULL @ 0x117d3c0]Format detected only with low score of 24, misdetection possible!
Last message repeated 1707 times
[mp3 @ 0x117d3c0]Estimating duration from bitrate, this may be inaccurate
Input #0, mp3, from '02 - IR 2.aea':
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0.0: Audio: mp3, 0 channels, s16
At least one output file must be specified
The ffmpeg developer is working on this problem - hopefully it will soon be fixed.
Why does development of the software take place at such a slow pace?
Since Sony won't give us any specifications on how the USB protocol and the encryption of the tracks on MiniDisc work, we have to find out ourselves by means of reverse engineering. This is a very tedious job, especially since Sony has developed a highly sophisticated software for digital rights management (DRM) called OpenMG. It's the heart of all Windows software that allows communication with your MiniDisc over USB, and just everything going from and to the MiniDisc has to go through the OpenMG software layer. There are very few parts, like MP3 support, which do not really need OpenMG, and that's why we supported those first in our software. But anything beyond MP3 requires a complete understanding of the inner workings of OpenMG.
I tried formatting a HiMD in qhimdtransfer but that doesn't work, why?
This feature has not been implemented yet in libhimd. On Linux, you can use himdformat to format a HiMD. If you installed qhimdtransfer, just run himdformat from a terminal. For other Linux systems, get the source from git (see compilingonlinux) and build the basictools. Formatting on MacOS and Windows is not yet supported by qhimdtransfer. You have to use SonicStage or HiMD MusicTransfer for Mac instead.
I tried renaming a track in qhimdtransfer but that doesn't work either, what's wrong?
Renaming tracks is a feature which has not been implemented yet, but we have it on the Tasks list.
I tried transfering a track to my HiMD walkman in qhimdtransfer but it won't work. Why?
This is still in development, like for many other features, please see the Tasks list.
Why can I transfer some tracks from my HiMD and some not?
Every track on a HiMD is encrypted. While MP3 tracks use a rather simple and obscure encryption (XORing the data with a constant key) which we have fully reverse-engineered, ATRAC3/3+ and PCM recordings are encrypted using 3DES. The mechanism on how the encryption key for 3DES is generated is yet to be understood. However, all recordings made with a MZ-RH1 Walkman use a constant key which we have obtained through reverse-engineering. Therefore, we can upload and decrypt tracks from the MZ-RH1 and know how to download ATRAC3/3+/PCM tracks to an MZ-RH1 (the latter is not implemented yet).
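The MP3 scheme described above (XORing the data with a constant key) can be illustrated with a toy Python sketch; the key bytes below are invented placeholders, not the real key obtained by the project:

```python
def xor_with_key(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; since XOR is its own inverse,
    # applying the same function twice recovers the original data
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

KEY = b"\x5a\xa5\x3c\xc3"  # placeholder key, for illustration only
plain = b"ID3\x03\x00 fake mp3 frame data"
scrambled = xor_with_key(plain, KEY)
assert xor_with_key(scrambled, KEY) == plain
```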
What is the difference between MP3 and LPCM/ATRAC3/3+ tracks?
Why are some of the pages in the wiki non-public? I thought all this stuff is free software.
Why is it so difficult to write a free transfer software for MiniDisc as a replacement for SonicStage?
What is essentially the difference between NetMD and HiMD devices, especially from the way my computer treats them?
Will it ever be possible to transfer tracks off NetMD device with this software?
To the current knowledge it is NOT possible to upload tracks from pure NetMD devices since the hardware simply does not have the capabilities. You can buy and use a Sony MZ-RH1 or MZ-M200 Walkman to upload tracks. However, tracks transferred with SonicStage over NetMD cannot be transferred back to your PC, while tracks transferred with Sony MD Simple Burner can be uploaded again. We assume that SonicStage sets a certain protection bit for each track which prevents the upload of these tracks, while MD Simple Burner does not.
Does this software also support NetMD devices? My NetMD is not recognized by qhimdtransfer.
NetMD devices are supported by the Python scripts so far only. Please check this section for more.
I downloaded and installed both versions for MacOS but neither of them work for me, why?
Your version of MacOS is obviously older than the one we used to build qhimdtransfer for MacOS. You may try to compile the software yourself on your version of MacOS. Please see the appropriate section in this wiki: CompilingOnMac
Does this software also work on other operating systems other than Linux, Windows and MacOS?
Yes, the software generally works on all operating systems which provide the necessary libraries. *BSD and Haiku should work, as well as OS/2 (eComStation). We haven't tested it on operating systems other than Linux, MacOS and Windows yet, however. If you need assistance installing the software on other operating systems, please do not hesitate to contact us.
What is the NetMDPython stuff for and what is Python at all?
I have loads of old standard MDs with recordings I want transfer to my PC, what's the best way to do that?
The most convenient method is to use Sony's MZ-RH1 or MZ-M200 Walkman. These devices allow direct upload over USB to your computer. Depending on your operating system/computer, you will either use the official software or our software for this job:
• if you have Windows, please install and use SonicStage
• if you have MacOS or Linux, please see this section | 2023-02-08 16:45:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39415159821510315, "perplexity": 5486.947056316496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500837.65/warc/CC-MAIN-20230208155417-20230208185417-00624.warc.gz"} |
https://worldbuilding.stackexchange.com/questions/140028/if-you-open-a-black-hole-on-earth-is-it-escapable/140031 | # If you open a black hole on Earth, is it escapable? [closed]
What if the Relativistic Heavy Ion Collider at Brookhaven created an unstable black hole; could you survive this and how?
## closed as off-topic by Renan, Alex2006, Ender Look, Frostfyre, sphenningsFeb 25 at 20:46
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question does not appear to be about worldbuilding, within the scope defined in the help center." – Renan, Alex2006, Ender Look, Frostfyre, sphennings
• Welcome to Worldbuilding. If you have a minute, please take the tour and have a look at the help center. Please observe our [Code of Conduct]. Writing in capital letters is the equivalent of screaming and considered rude. – Elmy Feb 25 at 17:38
• A: I’m not entirely sure that ‘unstable black hole’ is a real concept. B: What mass black hole are we talking here? C: The ‘and how’ part of the question is pretty broad, and very much depends on a host of secondary properties of the black hole, the planet, how the black hole forms and the preparedness of the person doing the escaping. – Joe Bloggs Feb 25 at 17:49
• (1) That's the Relativistic HIC, not "Realistic". (2) What's an "unstable" black hole? (3) Since $m = E / c^2$, one can easily compute the maximum possible mass for whatever kind of hole of whatever color the Brookhaven machine could have produced. Hint: it's not large. RHIC draws about 77 MW; one full day of operation consumes 6.7 TJ, corresponding to a mass of 74 milligrams (0.04 avoirdupois drams in Ancient American). Such a tiny black hole will have a very very very short life... – AlexP Feb 25 at 18:01
• If the energy and speed of the collision are high enough (I suspect that the RHIC at BNL would have to be substantially upgraded for this to happen) it is theoretically possible for tiny black holes to be produced. However, their existence would be fleeting. In the sense that they are fleeting (their Hawking Radiation is sufficient to 'deplete' them before they can gather additional mass) perhaps that could classify them as 'unstable'. – Justin Thyme the Second Feb 25 at 18:21
• If you do some googling, you will find some quite well thought out arguments regarding what would happen if a particle accelerator created a black hole. The general consensus is that there is no threat at all, unless some "new" physics is discovered at such high energies, but we can barely even speculate on what would happen if new physics came around because it's...well.. not known. – Cort Ammon Feb 26 at 22:42
The government website says it can't, but we say it can! Unless scientists can somehow bring an incredible amount of mass to earth, the answer is yes, easily. Configuring particles to form an event horizon does not somehow increase their mass or gravity. You could conceivably pack lots of mass close enough to form an event horizon, but it would be incredibly small and incredibly brief. At any appreciable distance, that black hole's gravity would be exactly the same as what those particles had in their previous configuration.
What if the Realistic Heavy Ion Collider at Brookhaven created an unstable black hole; could you survive this and how?
That is a bit strange; I am not sure what an unstable black hole is, so I will rephrase it to:
What if suddenly a black hole appeared on Earth; could you survive this (gravity)?
Also, I will only take into account gravity and not other effects such as time dilation or Hawking radiation.
# It depends on the mass of the black hole. But for a particle collider, of course.
A black hole doesn't differ from other objects of the same mass as far as gravity is concerned. You may need to know about the Surface Gravity. On Earth, that is around 9.8 m/s², obviously easy to escape.
Given the following formula:
$$g = \frac{GM}{r^2}$$
Where $$M$$ is the mass of the black hole in kilograms, $$r$$ the radius (or distance between you and the body mass) in meters, and $$G$$ is the gravitational constant, (i.e: $$6.67408(31)\times10^{-11}\text{m}^3\text{kg}^{-1}\text{s}^{-2}$$) you can calculate the gravity.
With this value, you can calculate the gravitational pull of an object (even a black hole) of any given mass at any given distance from it. It is your choice to determine if you are able to escape or not with your current technology level (remember that if the escape velocity $$v_e = \sqrt{2GM/r}$$ reaches the speed of light $$c$$ ($$299,792,458 \text{ m/s}$$), you can never escape, no matter how much you try).
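Plugging Earth's mass and radius into these formulas shows how tame they are; a small Python sketch (the mass and radius values are standard reference figures):

```python
G = 6.67408e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0     # speed of light, m/s

M_earth = 5.972e24    # kg
r_earth = 6.371e6     # m

def surface_gravity(M, r):
    # g = G*M / r^2
    return G * M / r ** 2

def escape_velocity(M, r):
    # v_e = sqrt(2*G*M / r); nothing escapes once this reaches c
    return (2 * G * M / r) ** 0.5

print(round(surface_gravity(M_earth, r_earth), 2))  # ~9.82 m/s^2
print(round(escape_velocity(M_earth, r_earth)))     # ~11186 m/s, far below c
```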
Remember, a black hole of a few kilograms won't produce any relevant gravitational well beyond a few millimeters from it (instead, you should be careful about its Hawking radiation, not gravity).
A black hole produced in a particle collider would only have the mass of a few atoms; it would be harmless since its Schwarzschild radius (and gravity) will be incredibly small. It will literally die of hunger since particles won't fit in its "mouth" nor in its gravity well!
Another formula that may be useful for you is the Schwarzschild radius mentioned above, also known as the gravitational radius: the distance from a black hole within which nothing can escape. That is:
$$r_s = \frac{2GM}{c^2}$$
If you want to know the mass of a black hole strong enough to pull in anything within a given range, use:
$$M = \frac{c^2}{2G}R_s$$
If we resolve it with the radius of Earth, which is $$6{,}371\text{ km}$$, we get:
$$M = \frac{c^2}{2G} \times 6{,}371{,}000\text{ m}$$ $$M = 4.28 \times 10^{33} \text{ kg} = 4,289,704,786.36 \text{ Yg}$$
Or $$2,157.28 \text{ M}_\odot$$, in solar masses.
## Other Facts
As said in a comment, I ignored the Hawking radiation. Well, not anymore.
Hawking radiation is a process whereby the black hole emits energy, thus lowering its mass and shrinking. Due to this radiation, black holes have a limited lifespan and tend to evaporate. However, the evaporation rate is inversely proportional to the mass of the black hole, so a normal one will only die after the thermal death of the universe.
The lifetime of a black hole is calculated given the following formula:
$$t = M^3\frac{5120 \pi G^2}{\hbar c^4}$$
Where $$\hbar$$ is the reduced Planck constant which is $$\hbar = \frac{h}{2\pi}$$, where $$h$$ is the Plack constant equal to $$6.62607015 \times 10^{-34} \text{Js}$$.
A black hole with the mass of two hydrogen atoms (since you want to make it in the collider) would be very small. A hydrogen atom has an atomic weight of 1.008 u, where $$u = 1.66053904020 \times 10^{-27} \text{ kg}$$. So its mass will be $$3.3476467050432 \times 10^{-27} \text{ kg}$$.
$$t = (3.3476467050432 \times 10^{-27})^3\frac{5120 \pi G^2}{\hbar c^4}$$ $$t = 3.155\times 10^{-96} \text{ seconds}$$
As you can see, a very short time. The black hole will disappear in less time than photons take to reach the nearest air atom in its surroundings.
But let's do the opposite and try the big black hole from above:
$$t = (4.28 \times 10^{33})^3\frac{5120 \pi G^2}{\hbar c^4}$$ $$t = 6.639 \times 10^{84} \text{ s}$$
Or, $$2.105 \times 10^{77} \text{ years}$$
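The same lifetime formula, evaluated for both black holes (a sketch with rounded constants, so the results match the figures above only approximately):

```python
import math

G, c = 6.674e-11, 2.998e8
hbar = 1.055e-34   # reduced Planck constant, J s

def evaporation_time(M):
    """Hawking evaporation time (s) for a black hole of mass M (kg)."""
    return M**3 * 5120 * math.pi * G**2 / (hbar * c**4)

t_small = evaporation_time(3.3476467050432e-27)  # two hydrogen atoms
t_big   = evaporation_time(4.28e33)              # the Earth-radius hole
print(t_small)   # ~3e-96 s
print(t_big)     # ~6.6e84 s
```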
### Heat
The emission of Hawking radiation gives the black hole a temperature. Like everything related to Hawking radiation, it is inversely proportional to the mass of the black hole. The formula is:
$$T = \frac{1}{M}\frac{\hbar c^3}{8 \pi k_b G}$$
Where $$T$$ is the temperature in kelvin and $$k_b$$ is the Boltzmann constant, equal to $$k_b = 1.3806485279 \times 10^{-23} \text{ J K}^{-1}$$
$$T = \frac{1}{3.3476467050432 \times 10^{-27}}\frac{\hbar c^3}{8 \pi k_b G}$$ $$T = 3.365 \times 10^{49} \text{ K}$$
That is very hot! Hotter than the hottest stars!
And with the big black hole:
$$T = \frac{1}{4.28 \times 10^{33}}\frac{\hbar c^3}{8 \pi k_b G}$$ $$T = 2.87 \times 10^{-11} \text{ K}$$
... very little. Almost nothing. That is why big black holes last longer: they emit very little energy.
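The temperature formula for both holes (a sketch; with these rounded constants the small hole comes out near $3.7 \times 10^{49}$ K):

```python
import math

G, c = 6.674e-11, 2.998e8
hbar = 1.055e-34       # reduced Planck constant, J s
k_B  = 1.380649e-23    # Boltzmann constant, J/K

def hawking_temperature(M):
    """Black-body temperature (K) of a black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * k_B * G * M)

T_small = hawking_temperature(3.3476467050432e-27)   # ~3.7e49 K
T_big   = hawking_temperature(4.28e33)               # ~2.9e-11 K
```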
### Luminosity
The emission of Hawking radiation produces light. Like everything related to Hawking radiation, the luminosity shrinks as the mass of the black hole grows (here with the square of the mass). The formula is:
$$L = \frac{1}{M^2}\frac{\hbar c^6}{15360 \pi G^2}$$
Small black hole:
$$L = \frac{1}{(3.3476467050432 \times 10^{-27})^2}\frac{\hbar c^6}{15360 \pi G^2}$$ $$L \approx 3.2 \times 10^{85} \text{ W}$$
Or $$3.2 \times 10^{61} \text{ YW}$$, more than the brightest stars or quasars, but just for an instant.
As you can see, both the temperature and the luminosity are incredibly big, but since the lifetime of the black hole is so short, it won't cause any damage. To be exact, the total energy emitted during its lifetime is given by the mass-energy equivalence:
$$E = mc^2$$
So: $$E = 3.3476467050432 \times 10^{-27}c^2$$ $$E \approx 3.008 \times 10^{-10} \text{ J}$$
Like I said, harmless.
Big black hole: $$L = \frac{1}{(4.28 \times 10^{33})^2}\frac{\hbar c^6}{15360 \pi G^2}$$ $$L \approx 1.9 \times 10^{-35} \text{ W}$$
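Luminosity, lifetime, and energy can be checked together (a sketch with rounded constants; note the algebraic identity that the *initial* luminosity times the lifetime equals exactly $\tfrac{1}{3}mc^2$, because the luminosity grows as the hole shrinks):

```python
import math

G, c = 6.674e-11, 2.998e8
hbar = 1.055e-34

m = 3.3476467050432e-27                              # two hydrogen atoms, kg
L = hbar * c**6 / (15360 * math.pi * G**2 * m**2)    # initial luminosity, W
t = m**3 * 5120 * math.pi * G**2 / (hbar * c**4)     # lifetime, s
E = m * c**2                                         # rest energy, J

print(L)   # ~3.2e85 W
print(E)   # ~3.0e-10 J
# Initial luminosity times lifetime is exactly one third of m*c^2:
assert abs(3 * L * t - E) / E < 1e-9
```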
P.S.: The math in "Other Facts" is taken from a Hawking Radiation Calculator.
• A reasonable answer, but you're ignoring black hole evaporation, which becomes entirely applicable for holes this small. – WhatRoughBeast Feb 26 at 19:10
• @WhatRoughBeast done – Ender Look Feb 26 at 22:30
https://math.stackexchange.com/questions/1357328/how-to-compute-a-ib2-if-given-a-ib3

# How to compute $|a-ib|^2$ if given $(a-ib)^3$
If I know that, for example, $(a-ib)^3=5+4i$, how can I compute the value of $|a-ib|^2$?
I can take the modulus of $5+4i$, which is $\sqrt{5^2+4^2}$, but I don't know what I'm getting here.
I don't know how to take the third root of a complex number here.
Any help?
Thanks.
We know that $|z\cdot w|=|z|\cdot|w|$, and so in general $|z^n|=|z|^n$.
Therefore, since you know $z^3$, you can take its modulus to get $|z^3|=|z|^3$, and raising that to the $2/3$ power gives $|z|^2$.
• so i'll get: $(\sqrt{5^2+4^2})^{2/3}$ – bob Jul 11 '15 at 11:10
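A quick numeric sanity check of the accepted answer, using plain Python complex numbers (`mod_z_sq` is my name for $|z|^2$):

```python
z3 = 5 + 4j                          # the given value of z^3
mod_z_sq = abs(z3) ** (2 / 3)        # |z|^2 = |z^3|^(2/3) = 41^(1/3)

# Cross-check against an explicit (principal) cube root of 5 + 4i:
z = z3 ** (1 / 3)
assert abs(abs(z) ** 2 - mod_z_sq) < 1e-12
print(mod_z_sq)                      # ~3.448
```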
https://math.stackexchange.com/questions/958636/prove-that-there-exist-two-integers-such-that-i-j-is-divisible-by-n

# Prove that there exist two integers such that $x_i - x_j$ is divisible by $n$.
Here's the full question:
Prove that, for any $n + 1$ integers, $\{x_0, x_1, x_2, . . . , x_n\}$, there exist two integers $x_i$ and $x_j$ with $i \neq j$ such that $x_i − x_j$ is divisible by $n$.
Now, the integers aren't necessarily consecutive, positive, or without repeats. I've tried breaking this into cases, such as "$x_i = x_j, \dfrac{0}{n} = 0$" etc. But I don't think it's possible to exhaust the cases.
I feel like there should be an easy way to contradict this one, by saying that $x_i - x_j$ isn't divisible by $n$. But I just wasn't getting anywhere with that one.
I thought about using induction, but I would have just as hard a time proving this for $n$ integers before $n+1$ integers, so I don't think it would be possible for me to do it that way.
If anyone could help out, that would be wonderful. I'm pretty stuck here.
• Hint: For each $i$, let $r_i$ be the number with $0\leq r_i<n$ and $x_i-r_i$ is divisible by $n$. – Thomas Andrews Oct 5 '14 at 0:31
This is an application of the Pigeonhole Principle. The idea is that since there are $n$ possible remainders when an integer is divided by $n$ that at least two of the $n + 1$ integers in the set $\{x_0, x_1, \ldots, x_n\}$ must have the same remainder when divided by $n$. If they have the same remainder when divided by $n$, their difference is divisible by $n$.
The possible remainders when an integer is divided by $n$ are $0, 1, 2, \ldots, n - 1$. If you have $n$ integers, then it is possible for each of them to have a different remainder when divided by $n$. However, if you have $n + 1$ integers, at least two of them must have the same remainder when divided by $n$. Hence, in the set $\{x_0, x_1, \ldots, x_n\}$, there are two numbers $x_i$ and $x_j$, with $i \neq j$, that have the same remainder when they are divided by $n$. Thus, there exist $k, m, r \in \mathbb{N}$, with $0 \leq r \leq n - 1$, such that
\begin{align*} x_i & = kn + r\\ x_j & = mn + r \end{align*}
If we take their difference, we obtain
$$x_i - x_j = (kn + r) - (mn + r) = kn - mn = (k - m)n$$
Therefore, $x_i - x_j$ is divisible by $n$.
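The argument above is constructive, so it is easy to demonstrate in code (a small sketch; the list and the modulus are arbitrary examples of mine):

```python
def same_remainder_pair(nums, n):
    """Pigeonhole, constructively: bucket numbers by remainder mod n and
    return the first pair that collides (guaranteed if len(nums) > n)."""
    seen = {}                       # remainder -> first number with it
    for x in nums:
        r = x % n
        if r in seen:
            return seen[r], x       # same remainder => difference divisible by n
        seen[r] = x
    return None

xi, xj = same_remainder_pair([3, 17, 8, 25, 40, 6], 5)   # six numbers, n = 5
print(xi, xj)                # 3 8
assert (xi - xj) % 5 == 0
```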
• Thanks so much again. You really helped me see that the key came down to realizing that taking the difference strips away the remainder. – user176049 Oct 5 '14 at 2:22
Using pigeonhole and modular arithmetic: Two integers are in the same equivalence class (mod $n$) if their difference is a multiple of $n$. There are $n$ distinct equivalence classes (mod $n$).
By the pigeonhole principle, if there are $n+1$ elements in your set, you'll run out of distinct bins, and have to put at least one number in the same bin as another number.
These two numbers, being in the same bin, have the same equivalence class, and thus their difference is a multiple of (and thus divisible by) $n$.
• I'm familiar with pigeonhole, but I don't understand what you mean by equivalence class. Say the integers are 7 and 2. 7 - 2 = 5, divisible by 5. How does this necessarily put 7 and 2 in the same class? – user176049 Oct 5 '14 at 0:59
• In modular arithmetic, two numbers are in the same equivalence when they have the same remainder when divided by $n$. If $n = 5$, the possible remainders are $0, 1, 2, 3, 4$. Thus, $2$ and $7$ would be in the same equivalence class $\mod 5$ since both $2$ and $7$ have remainder $2$ when divided by $5$. We write $7 \equiv 2 \pmod 5$. – N. F. Taussig Oct 5 '14 at 2:06
https://www.physicsforums.com/threads/lin-alg-eigenvectors.570429/

# Homework Help: [Lin Alg - Eigenvectors]
1. Jan 24, 2012
### wown
I am a little confused about the choice of eigenvectors chosen by my book. I am wondering if an eigenvalue can have multiple eigenvectors and if all are equally correct. Case in point, the example below:
1. The problem statement, all variables and given/known data
find a fundamental matrix for the system x'(t) = Ax(t) for the given matrix A:
A = Row1 = {3 1 -1}
Row2 = {1 3 -1}
Row3 = {3 3 -1}
The attempt at a solution:
So I go through the motions and find that the eigenvalues are 1 and 2. The eigenvector for eValue 1 = {1, 1, 3}
When I try to solve for the eVector for eValue = 2, there are 2 possibilities because:
(A-rI), where r = 2:
Row1 = {1 1 -1}
Row2 = {1 1 -1}
Row3 = {3 3 -3}
which basically leads to the equation U1 + U2 - U3 = 0
i.e. U3 = U1 + U2
from here, I can say
U1 = s, U2 =t, and U3 = s + t, leading to the eVectors:
{s, t, s+t}
or s * {1,0,1} + t * {0,1,1}
OR I can say U1 = -U2 + U3, U2 = t, U3 = s, and my eVectors are:
{s - t, t, s} or t*{-1, 1, 0} + s*{1, 0, 1}
Which is correct? The book says the 2nd one, but I don't see why to pick one over the other.
2. Jan 24, 2012
### lanedance
when you have degenerate eigenvectors (algebraic multiplicity of eigenvalue is greater than 1 in the characteristic equation), usually there is more than one eigenvector for the given eigenvalue (the number of them is called the geometric multiplicity of the eigenvalue)
And whenever you have more than one eigenvector for a given eigenvalue, the eigenvectors you solve for will not be unique; however, the ones you solve for will span the same "eigenspace", which is unique. In this case a 2D plane
Note that even when you have only a single eigenvector it is only unique up to a given multiplicative constant and actually represents a 1D line of valid eigenvectors
3. Jan 24, 2012
### wown
:) I have not taken linear algebra yet (this is for diff eq class) and so most of what you said went over my head. but in layman's terms, I think this is what you are saying:
because the eigenvalue 2 is a repeated root of the characteristic equation det(A - rI) = 0, the eigenvectors will not be unique and so it is possible to have two different eigenvectors for the same eigenvalue. Therefore, the choice of the eigenvectors does not matter and I can use either pair for my fundamental matrix?
is that correct?
4. Jan 24, 2012
### lanedance
not quite, but close
As the eigenvalue 2 is a repeated root, it can have either 1 or 2 eigenvectors. In this case it has 2.
Now say you found eigenvectors u & v, and the book found p & q. As they "span" the same space, each set can be written as a linear combination of the other. This means you can find real numbers a,b,c,d such that
p = au+bv
q = cu+dv
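A quick plain-Python check of all of this: both choices of vectors really are eigenvectors for eigenvalue 2, and each basis is a linear combination of the other (the specific vectors below follow from the two parameterizations discussed above):

```python
A = [[3, 1, -1],
     [1, 3, -1],
     [3, 3, -1]]

def matvec(M, w):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, w)) for row in M]

# Two different bases for the eigenvalue-2 eigenspace:
u, v = [1, 0, 1], [0, 1, 1]       # from choosing U1 = s, U2 = t
p, q = [-1, 1, 0], [1, 0, 1]      # from choosing U2 = t, U3 = s

for w in (u, v, p, q):
    assert matvec(A, w) == [2 * x for x in w]   # all satisfy A w = 2 w

# ... and the eigenvector for eigenvalue 1 checks out too:
assert matvec(A, [1, 1, 3]) == [1, 1, 3]

# Each basis is a linear combination of the other: p = v - u, q = u.
assert p == [b - a for a, b in zip(u, v)] and q == u
print("both bases span the same eigenspace")
```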
5. Jan 24, 2012
### wown
Gotcha. Thanks a lot!
http://www.gradesaver.com/textbooks/science/physics/physics-principles-with-applications-7th-edition/chapter-6-work-and-energy-problems-page-164/2

## Physics: Principles with Applications (7th Edition)
The maximum amount of work is the work done by gravity, assuming the nail is driven a very short distance. The force and the displacement are both straight downward, so the angle between them is 0 degrees. Use equation 6–1. $$W=(1.2\text{ kg})(9.80 \tfrac{\text{m}}{\text{s}^2})(0.50\text{ m})(\cos 0^{\circ}) \approx 5.9\text{ J}$$
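The arithmetic can be reproduced in a few lines (a sketch; the variable names are mine):

```python
import math

m, g, d = 1.2, 9.80, 0.50        # kg, m/s^2, m
theta = 0.0                      # force and displacement both point down
W = m * g * d * math.cos(theta)  # equation 6-1: W = F d cos(theta)
print(round(W, 1))               # 5.9
```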
https://community.boredofstudies.org/threads/combinations-ext-math-question.394729/

# combinations ext math question
qwerla
New Member
Twelve people arrive at a restaurant. There is one table for six, one table for four and one table for two. In how many ways can they be assigned to a table?
thank you
notme123
Active Member
What's the answer? I haven't done perms and combs in a while but my guess is (12C6)5!*(6C4)3!*(2C2)1! = 9979200.
Last edited:
qwerla
New Member
What's the answer? I haven't done perms and combs in a while but my guess is (12C6)5!*(6C4)3!*(2C2)1! = 9979200.
the answer is 13860 (i can't seem to wrap my head around it bc i've tried everything and i still don't get the answer). thank u for trying anyway! perm and comb is so hardd
imaiyuki
Member
this is in my y11 notes haha
in this case we would have
$\frac{12!}{6!\,4!\,2!}=13860$
as seen in the last formula
hope this helps!
Trebla
Do it sequentially:
- There are $\binom{12}{6}$ ways to choose which 6 of the 12 people sit at the table of six
- Once those 6 are chosen, 6 people remain, so there are $\binom{6}{4}$ ways to choose the 4 for the table of four
- The remaining 2 fill the table of two in $\binom{2}{2} = 1$ way
Multiplying: $\binom{12}{6}\binom{6}{4}\binom{2}{2} = 924 \times 15 \times 1 = 13860$.
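Trebla's sequential count can be checked in a couple of lines with Python's standard library:

```python
from math import comb, factorial

ways = comb(12, 6) * comb(6, 4) * comb(2, 2)   # 924 * 15 * 1
print(ways)                                    # 13860

# Same thing as the single multinomial coefficient 12! / (6! 4! 2!):
assert ways == factorial(12) // (factorial(6) * factorial(4) * factorial(2))
```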
CM_Tutor
Moderator
Moderator
@qwerla, Trebla's answer is correct (obviously). @notme123's answer is for a different question... in how many ways can the people be seated around tables of 6, 4, and 2, whereas the question asked about assigning people to tables.
notme123
Active Member
@qwerla, Trebla's answer is correct (obviously). @notme123's answer is for a different question... in how many ways can the people be seated around tables of 6, 4, and 2, whereas the question asked about assigning people to tables.
Love it when I misread the question lol.
qwerla
New Member
this is in my y11 notes haha
in this case we would have
$\frac{12!}{6!\,4!\,2!}=13860$
as seen in the last formula
hope this helps!
hey!!! thank you SO much! this makes sm sense to me now, and thank you so much for the picture above. it really clarified it for me. hope u have a great rest of ur day
https://hal-cea.archives-ouvertes.fr/cea-01575623

# Imprint of primordial inflation on the dark energy equation of state in non-local gravity
Abstract : In cosmological models where dark energy has a dynamical origin one would expect that a primordial inflationary epoch leaves no imprint on the behavior of dark energy near the present epoch. We show that a notable exception to this behavior is provided by a nonlocal infrared modification of General Relativity, the so-called RT model. It has been previously shown that this model fits the cosmological data with an accuracy comparable to $\Lambda$CDM, with the same number of free parameters. Here we show that in this model the dark energy equation of state (EOS) near the present epoch is significantly affected by the existence of an epoch of primordial inflation. A smoking-gun signature of the model is a well-defined prediction for the dark energy EOS, $w_{\rm DE}(z)$, evolving with redshift from a non-phantom to a phantom behavior, with deviations from $-1$ already very close to the limits excluded by the Planck 2015 data. Future missions such as Euclid should be able to clearly confirm or disprove this prediction.
arXiv: 1610.05664
Giulia Cusin, Stefano Foffa, Michele Maggiore, Michele Mancarella. Imprint of primordial inflation on the dark energy equation of state in non-local gravity. 2016. ⟨cea-01575623⟩
https://eduzip.com/ask/question/a-set-of-principles-and-expectations-that-are-considered-binding-271270

# A set of principles and expectations that are considered binding on any person who is a member of a particular group is known as ________.
code of ethics
##### SOLUTION
The code of ethics outlines a set of fundamental principles which could be used as the basis for operational requirements (things one must do) and operational prohibitions (things one must not do). It is based on a set of core principles and values and is by no means designed for convenience. The employees subject to the code are required to understand, internalize, and apply it to situations which the code does not specifically address. Organizations expect that the principles, once communicated and illustrated, will be applied in every case, and that failure to do so may lead to disciplinary action.
#### Related Questions
Q1 Single Correct Medium
________ refers to the code of conduct.
• A. Code of practice
• B. Code of ethics
• D. All of the above
1 Verified Answer | Published on 18th 08, 2020
Q2 Single Correct Medium
Which of the following is a feature of business ethics?
(1) Business ethics has a universal application.
(2) It is a relative norm. It differs from business to business.
(3) Business ethics is based on well accepted moral and social values.
*Select the correct answer from the options given below:*
• A. $(1)$
• B. $(2)$
• C. $(3)$
• D. All of the above.
Q3 Single Correct Medium
Ethics is a study of ________.
• A. the kind of action a person should do
• B. rules that define the course of actions
• C. discussion of the right and wrong codes of conduct
• D. all of the above
Q4 Single Correct Medium
______ made it important for businesses to have an ethics code, something in writing about what one ought to do, and what to strive for.
• A. The Ethics & Code Conduct Act, $2000$
• B. The Sarbanes-Ethics of Code Conduct Act, $2001$
• C. The Sarbanes-Oxley Act, $2002$
• D. None of above
Q5 Single Correct Medium
According to sacredness of means and ends principle of business ethics _________.
• A. for a good end a man should not be treated as a factor of production
• B. it is unethical to do major evil to another or oneself
• C. business should not co-operate with any one for doing any evil acts
• D. a good end cannot be attained with wrong means
https://www.jiskha.com/questions/1519282/a-student-reduces-the-temperature-from-a-300-cm3-balloon-from-60-c-to-20-c-what-will | Chemistry help??!
A student reduces the temperature of a 300 cm3 balloon from 60°C to 20°C. What will the new volume of the balloon be?
100 cm3
300 cm3
341 cm3
1. v = 300*(273+20)/(273+60) ≈ 264 cm3
You don't need to do the math: cooling the gas shrinks the balloon, and the only listed volume below 300 is answer (a).
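bobpursley's one-liner, spelled out as code (a sketch; I use the standard 273.15 kelvin offset, though his 273 gives the same rounded answer):

```python
def charles_law_volume(V1, T1_celsius, T2_celsius):
    """Charles's law at constant pressure: V1/T1 = V2/T2, with T in kelvin."""
    return V1 * (T2_celsius + 273.15) / (T1_celsius + 273.15)

V2 = charles_law_volume(300, 60, 20)
print(round(V2))   # 264
```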
bobpursley
http://libros.duhnnae.com/2017/jun9/149886080865-Existence-of-an-infinite-particle-limit-of-stochastic-ranking-process-Mathematics-Probability.php

# Existence of an infinite particle limit of stochastic ranking process - Mathematics > Probability
Abstract: We study a stochastic particle system which models the time evolution of the ranking of books by online bookstores (e.g., Amazon). In this system, particles are lined up in a queue. Each particle jumps at random jump times to the top of the queue, and otherwise stays in the queue, being pushed toward the tail every time another particle jumps to the top. In an infinite particle limit, the random motion of each particle between its jumps converges to a deterministic trajectory. This trajectory is actually observed in the ranking data on web sites. We prove that the random empirical distribution of this particle system converges to a deterministic space-time dependent distribution. A core of the proof is the law of large numbers for *dependent* random variables.
Authors: Kumiko Hattori, Tetsuya Hattori
Source: https://arxiv.org/
https://byjus.com/maths/square-numbers/ | # Square Numbers: Multiplying With The Same Number
What are square numbers?
When a number is multiplied by itself, the product is called a 'Square Number'.
Why Square Numbers:
Just as a square has all its sides equal, a square number is the product of two equal factors (i.e., the number multiplied by itself).
Area of a square = Side $\times$ Side
Square number = $a \times a = a^{2}$
| Side of square (in cm) | Area of square (in cm²) |
| --- | --- |
| 3 | 3 × 3 = $3^{2}$ = 9 |
| 5 | 5 × 5 = $5^{2}$ = 25 |
| 7 | 7 × 7 = $7^{2}$ = 49 |
| 8 | 8 × 8 = $8^{2}$ = 64 |
Numbers such as 1, 4, 9, 16, 25, 36, 49, 64, etc. are special numbers as these are the product of a number by itself.
If we can express a number $x$ as the square of a natural number $a$, i.e. $x = a^{2}$, then $x$ is a square number. For example, 100 can be expressed as 10 × 10 = $10^{2}$, where 10 is a natural number, therefore 100 is a square number. In contrast, the number 45 is not a square number, because it is the product of 9 and 5, not of a number with itself. Square numbers are also called perfect square numbers.
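The test described in the paragraph above (is there a natural number whose square gives back $x$?) can be sketched directly with integer square roots; the helper name is ours:

```python
import math

def is_square_number(x):
    """x is a square number iff some natural number a satisfies a * a == x."""
    if x < 0:
        return False
    a = math.isqrt(x)        # exact integer square root
    return a * a == x

print(is_square_number(100))  # True:  100 = 10 x 10
print(is_square_number(45))   # False: 45 = 9 x 5, not a number times itself
```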
List of squares of all numbers less than 60 ($60^{2}$ = 3600)
| 0–9 | 10–19 | 20–29 | 30–39 | 40–49 | 50–59 |
| --- | --- | --- | --- | --- | --- |
| 0² = 0 | 10² = 100 | 20² = 400 | 30² = 900 | 40² = 1600 | 50² = 2500 |
| 1² = 1 | 11² = 121 | 21² = 441 | 31² = 961 | 41² = 1681 | 51² = 2601 |
| 2² = 4 | 12² = 144 | 22² = 484 | 32² = 1024 | 42² = 1764 | 52² = 2704 |
| 3² = 9 | 13² = 169 | 23² = 529 | 33² = 1089 | 43² = 1849 | 53² = 2809 |
| 4² = 16 | 14² = 196 | 24² = 576 | 34² = 1156 | 44² = 1936 | 54² = 2916 |
| 5² = 25 | 15² = 225 | 25² = 625 | 35² = 1225 | 45² = 2025 | 55² = 3025 |
| 6² = 36 | 16² = 256 | 26² = 676 | 36² = 1296 | 46² = 2116 | 56² = 3136 |
| 7² = 49 | 17² = 289 | 27² = 729 | 37² = 1369 | 47² = 2209 | 57² = 3249 |
| 8² = 64 | 18² = 324 | 28² = 784 | 38² = 1444 | 48² = 2304 | 58² = 3364 |
| 9² = 81 | 19² = 361 | 29² = 841 | 39² = 1521 | 49² = 2401 | 59² = 3481 |
## Odd and Even square numbers
• Squares of even numbers are even, i.e., (2n)² = 4n².
• Squares of odd numbers are odd, i.e., (2n + 1)² = 4(n² + n) + 1.
• Since every odd square is of the form 4n + 1, the odd numbers that are of the form 4n + 3 are not square numbers.
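The three bullet points above can be verified mechanically over a range of n; a brief sketch:

```python
for n in range(500):
    assert (2 * n) ** 2 == 4 * n ** 2                 # even squares are even
    assert (2 * n + 1) ** 2 == 4 * (n ** 2 + n) + 1   # odd squares are 4n + 1
    # every square leaves remainder 0 or 1 mod 4, so a number
    # of the form 4n + 3 can never be a square
    assert (n * n) % 4 in (0, 1)
print("all identities hold")
```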
## Special Properties of square numbers
Here we have covered square numbers and their properties; to go further, you can refer to the completing the square method by following the link provided.
#### Practise This Question
Find $x$ in $x^{2} - 9 = 72$. | 2019-02-20 08:59:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.528981626033783, "perplexity": 153.1530581401832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494694.1/warc/CC-MAIN-20190220085318-20190220111318-00630.warc.gz"}
https://www.physicsforums.com/threads/basic-circuit-capacitor-resistor-question.384378/ | # Basic Circuit Capacitor/Resistor Question
1. Mar 7, 2010
### jlv11203
1. The problem statement, all variables and given/known data
A 1.0 micro Farads capacitor is connected in series with a 16 Ohms resistor as shown in the figure. If the voltage across the capacitor is Vc(t) = 18 cos(1000t) V, what is the voltage across the resistor, VR, at t = 1 s?
Note: The voltage frequency is in radians per second.
----Capacitor--------Resistor-----black box----
the ends are connected in the picture given, eg: black box connected back to capacitor
2. Relevant equations
V_c = 18*cos(1000t)
i_c = C dv/dt
3. The attempt at a solution
My professor was absent from class the day we were supposed to learn this, but the homework is still due in a couple days. Honestly, I'm not really sure where to start. I would think the current in the branch with the capacitor and resistor would need to be found, in order to find the voltage drop across the resistor. Any nudges in the right direction or help would be greatly appreciated!
2. Mar 7, 2010
### MATLABdude
For a question like this, you first need to determine whether the response is steady state or transient. HINT: Does $\tau=RC$ ring a bell? If not, see the Wikipedia article on the RC time constant: http://en.wikipedia.org/wiki/RC_time_constant. Now, how does that compare to the frequency of the voltage source?
EDIT: ...And welcome to PhysicsForums!
Last edited by a moderator: May 4, 2017
3. Mar 7, 2010
### jlv11203
We have not talked about anything regarding capacitors in class yet, let alone time constants. I vaguely remember them from a physics class I took last year, but I was able to figure out the solution to this problem after re-reading the chapter in the book... it turns out I was over-complicating it.
This is what I figured out:
Use i = C dv/dt to find the current through the capacitor at t = 1 s: i = 1*10^-6 * d/dt (18*cos(1000t)). Then apply Ohm's law to the resistor, since the current in that branch is the same throughout: v = iR, so v = i * 16.
Thanks for the help and the welcome! :)
ps: that link is coming in very useful for the other questions on the assignment!
Last edited: Mar 7, 2010 | 2017-08-19 12:14:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5403549671173096, "perplexity": 670.4379681111536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105341.69/warc/CC-MAIN-20170819105009-20170819125009-00196.warc.gz"} |
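The approach worked out in the thread (differentiate V_C analytically, then apply Ohm's law to the series resistor) can be written out numerically; the helper name is ours, and the printed value is a computed check, not a number quoted in the thread.

```python
import math

C = 1.0e-6    # capacitance, farads
R = 16.0      # resistance, ohms
A = 18.0      # voltage amplitude, volts
w = 1000.0    # angular frequency, rad/s

def v_resistor(t):
    # i_C = C * dV_C/dt = C * d/dt[A cos(w t)] = -C * A * w * sin(w t);
    # the series connection forces the same current through the resistor
    i = -C * A * w * math.sin(w * t)
    return i * R   # Ohm's law: V_R = i * R

print(v_resistor(1.0))  # bounded in magnitude by C*A*w*R = 0.288 V
```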
https://study.com/academy/answer/jill-starts-at-a-salary-of-40-000-per-year-and-receives-benefits-that-cost-the-company-50-of-her-salary-she-gets-12-weeks-of-training-2-weeks-of-vacation-and-10-paid-holidays-using-52-weeks-per-year-40-hours-per-week-8-hours-per-day-and-not-count.html | ## Working Conditions:
The term working conditions means the terms and circumstances affecting the employment of an employee. It usually includes policies and programs in favor of the employees.
Safe Working Conditions: Purpose & Concept
from Chapter 17 / Lesson 23
To ensure the welfare and safety of workers, having safe working conditions is essential. Learn about OSHA safety standards, workplace inspections, workers' compensation, and social responsibility, as well as their impact on workers. | 2021-09-19 04:17:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21594320237636566, "perplexity": 8986.058534469274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056711.62/warc/CC-MAIN-20210919035453-20210919065453-00528.warc.gz"} |
http://math.stackexchange.com/questions/160674/similar-matrices-and-change-of-basis | Similar Matrices and Change of Basis
I'm trying to understand a little better change of basis matrices and how they relate to determining if two matrices are similar.
Suppose we are given finite-dimensional vector spaces $V,W$ such that $\textrm{dim} V=\text{dim} W$, a linear transformation $T:W\rightarrow V$, and ordered bases $V_B$ and $W_B$.
Now my book only covers the case where $W=V$, and defines matrices to be similar when there exists an invertible change of basis matrix $M$ such that:
$$[M]_{W_B}^{V_B}[T]_{W_B}[M]^{W_B}_{V_B}=[T]_{V_B}$$
Now how does this work when $W \neq V$? Are the columns of the change of basis matrix still just the basis vectors of $W$ according to their coordinates in $V$? Or does some type of mapping of $W_B$ to $V_B$ have to be done first? I'm asking because I was looking at a change of basis matrix as a special case where the transformation is the identity transformation.
I hope this question makes sense.
For distinct vector spaces there is no such thing as a change-of-basis matrix. Recall that (depending on how it is taught) either (1) the columns of the change-of-basis matrix express the vectors of one basis in coordinates on the other basis, or (2) a change-of-basis matrix describes the identity transformation using one of the bases at departure and the other at arrival. Neither description makes sense if you have a pair of basis of different vector spaces (in (2) because there is no identity transformation between them). – Marc van Leeuwen Jun 20 '12 at 8:11
1 Answer
When $W = V$ you generally choose the same basis for $W$ as for $V$ when you change bases. (The point of doing this is, for example, so that you can sensibly use the corresponding matrix representation to take powers or exponentials of the corresponding linear transformation and to find eigenvalues and eigenvectors, etc.) When $W \neq V$ you are free (if you want) to change bases both in $V$ and in $W$, but I don't think people generally call the corresponding equivalence relation similarity. Similarity is very much a $W = V$ kind of phenomenon.
(Exercise: show that up to a change of basis in $V$ and in $W$, any linear transformation is uniquely determined by the dimension of its range.)
I didn't quite understand the exercise. What do you mean by "up to a change of basis"? – Robert S. Barnes Jun 21 '12 at 6:30
@Robert: I mean that there is an equivalence relation on linear transformations $V \to W$ where two transformations are equivalent if there is a matrix that represents both of them using appropriate choices of bases in $V$ and $W$. The equivalence classes of this equivalence relation are precisely the linear transformations having range a fixed dimension. – Qiaochu Yuan Jun 21 '12 at 6:53 | 2016-06-28 22:20:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9361220002174377, "perplexity": 142.13126320445366}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00125-ip-10-164-35-72.ec2.internal.warc.gz"} |
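The $W = V$ case from the accepted answer can be checked numerically. In this sketch (the 2×2 matrices are our own examples, not from the thread) the columns of $M$ happen to be eigenvectors of $T$, so the similar matrix $M^{-1}[T]M$ comes out diagonal; exact arithmetic via fractions avoids rounding noise.

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

T = [[F(2), F(1)], [F(0), F(3)]]   # matrix of T in the standard basis
M = [[F(1), F(1)], [F(0), F(1)]]   # columns: new basis vectors in old coordinates
T_new = matmul(inv2(M), matmul(T, M))
print(T_new)   # similar to T: same trace, determinant, and eigenvalues
```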
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=141&t=42788 | ## gibbs free energy
$\Delta G^{\circ} = -nFE_{cell}^{\circ}$
Zenita Leang 2K
Posts: 67
Joined: Fri Sep 28, 2018 12:28 am
### gibbs free energy
What does Gibbs free energy tell us about a reaction?
Ibrahim Malik 1H
Posts: 65
Joined: Fri Sep 28, 2018 12:27 am
Been upvoted: 1 time
### Re: gibbs free energy
Gibbs free energy is a state function that tells us about the relationship between a reaction's enthalpy (negative if exothermic, positive if endothermic) and its entropy, along with temperature. Gibbs free energy is defined in terms of thermodynamic properties that are state functions as well. If the change in G is negative, the reaction is spontaneous and will proceed in the forward direction. If the change in G is positive, the reaction is non-spontaneous and will instead proceed in the reverse direction.
Andrea Zheng 1H
Posts: 61
Joined: Fri Sep 28, 2018 12:26 am
### Re: gibbs free energy
The Gibbs Free Energy is the energy associated with a reaction that can be used to do work. This is a function of the enthalpy of the reaction minus the temperature multiplied by the entropy. It can be used to see the spontaneity of a reaction, with a negative Gibbs Free energy representing a spontaneous reaction.
shaunajava2e
Posts: 66
Joined: Fri Sep 28, 2018 12:26 am
### Re: gibbs free energy
In thermodynamics, the Gibbs free energy is a thermodynamic potential that can be used to calculate the maximum reversible work that may be performed by a thermodynamic system at a constant temperature and pressure.
Kriti Goyal 4K 19
Posts: 32
Joined: Fri Sep 28, 2018 12:21 am
### Re: gibbs free energy
Most of the time, we use Gibbs free energy to tell us about the spontaneity of a reaction. It is a state function and gives us a relationship between enthalpy and entropy. We say a reaction is spontaneous if its change in G is negative.
yuetao4k
Posts: 60
Joined: Fri Sep 28, 2018 12:27 am
Been upvoted: 1 time
### Re: gibbs free energy
Gibbs free energy will tell you if a reaction is spontaneous or not. | 2019-12-05 23:34:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42381253838539124, "perplexity": 1177.939500998034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540482284.9/warc/CC-MAIN-20191205213531-20191206001531-00456.warc.gz"} |
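The two relations in this thread — the header's $\Delta G^{\circ} = -nFE_{cell}^{\circ}$ and the definition $\Delta G = \Delta H - T\Delta S$ — can be sketched numerically. The function names and the numeric inputs below are illustrative, not from the thread.

```python
FARADAY = 96485.0  # Faraday constant, C/mol

def delta_g(dH, T, dS):
    """dH in J/mol, T in kelvin, dS in J/(mol*K).
    Negative result: spontaneous; positive: non-spontaneous."""
    return dH - T * dS

def delta_g_standard_cell(n, E_cell):
    """Standard free energy change from a standard cell potential (volts)."""
    return -n * FARADAY * E_cell

# exothermic reaction (dH < 0) with increasing entropy (dS > 0):
# spontaneous at any positive temperature
print(delta_g(-100_000.0, 298.15, 50.0) < 0)   # True
# a positive standard cell potential implies a negative (spontaneous) dG
print(delta_g_standard_cell(2, 1.10) < 0)      # True
```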
http://web.emn.fr/x-info/sdemasse/gccat/Kborder.html | ### 3.7.35. Border
A constraint that can be related to the notion of border, which we define now. Given a sequence $s=urv$, $r$ is a prefix of $s$ when $u$ is empty, $r$ is a suffix of $s$ when $v$ is empty, $r$ is a proper factor of $s$ when $r\ne s$. A border of a non -empty sequence $s$ is a proper factor of $s$, which is both a prefix and a suffix of $s$. We have that the smallest period of a sequence $s$ is equal to the size of $s$ minus the length of the longest border of $s$. | 2017-09-23 00:11:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 24, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9059525728225708, "perplexity": 68.66190535226221}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689411.82/warc/CC-MAIN-20170922235700-20170923015700-00461.warc.gz"} |
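The last sentence above — the smallest period of $s$ equals $|s|$ minus the length of the longest border of $s$ — is exactly what the classic failure function of string matching computes; a sketch (function names ours):

```python
def longest_border(s):
    """Length of the longest proper factor of non-empty s that is both
    a prefix and a suffix of s (the KMP failure value at the last position)."""
    fail = [0] * len(s)
    k = 0
    for i in range(1, len(s)):
        while k and s[i] != s[k]:
            k = fail[k - 1]      # fall back to the next shorter border
        if s[i] == s[k]:
            k += 1
        fail[i] = k
    return fail[-1]

def smallest_period(s):
    return len(s) - longest_border(s)

print(longest_border("abcabcab"))   # "abcab" is the longest border: 5
print(smallest_period("abcabcab"))  # 8 - 5 = 3
```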
http://googology.wikia.com/wiki/Hexonation | # Hexonation
Hexonation refers to the function $$a\ \{\{\{\{\{\{1\}\}\}\}\}\}\ b = \{a,b,1,6\}$$, using BEAF.[1]
In the fast-growing hierarchy, $$f_{\omega \times 5+1}(n)$$ corresponds to hexonational growth rate.
### Sources
1. 4.1.2 - Extended Operators by Sbiis Saibian | 2017-06-28 13:56:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9523833394050598, "perplexity": 12758.473658836929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323682.21/warc/CC-MAIN-20170628134734-20170628154734-00292.warc.gz"} |
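For the three-entry case, BEAF's $\{a,b,c\} = a\{c\}b$ reduces to the usual hyperoperation recursion; a sketch for tiny inputs, assuming the standard expansion rules (full hexonation, $\{a,b,1,6\}$, grows far too quickly to evaluate directly):

```python
def hyper(a, c, b):
    """a{c}b for the three-entry array {a, b, c}: a{1}b = a**b,
    a{c}1 = a, and a{c}b = a{c-1}(a{c}(b-1)) otherwise.
    Only very small arguments terminate in reasonable time."""
    if c == 1:
        return a ** b
    if b == 1:
        return a
    return hyper(a, c - 1, hyper(a, c, b - 1))

print(hyper(2, 2, 3))  # 2^^3 = 2**(2**2) = 16
print(hyper(3, 2, 2))  # 3^^2 = 3**3 = 27
```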
https://www.nature.com/articles/s41467-018-06159-4 |
# Functional equivalence of genome sequencing analysis pipelines enables harmonized variant calling across human genetics projects
## Abstract
Hundreds of thousands of human whole genome sequencing (WGS) datasets will be generated over the next few years. These data are more valuable in aggregate: joint analysis of genomes from many sources increases sample size and statistical power. A central challenge for joint analysis is that different WGS data processing pipelines cause substantial differences in variant calling in combined datasets, necessitating computationally expensive reprocessing. This approach is no longer tenable given the scale of current studies and data volumes. Here, we define WGS data processing standards that allow different groups to produce functionally equivalent (FE) results, yet still innovate on data processing pipelines. We present initial FE pipelines developed at five genome centers and show that they yield similar variant calling results and produce significantly less variability than sequencing replicates. This work alleviates a key technical bottleneck for genome aggregation and helps lay the foundation for community-wide human genetics studies.
## Introduction
Over the past few years, a wave of large-scale WGS-based human genetics studies have been launched by various institutes and funding programs worldwide1,2,3,4 aimed at elucidating the genetic basis of a variety of human traits. These projects will generate hundreds of thousands of publicly available deep (>20×) WGS datasets from diverse human populations. Indeed, at the time of writing, >150,000 human genomes have already been sequenced by three NIH programs: NHGRI Centers for Common Disease Genomics5 (CCDG), NHLBI Trans-Omics for Precision Medicine (TOPMed), and NIMH Whole Genome Sequencing in Psychiatric Disorders6 (WGSPD). Systematic aggregation and co-analysis of these (and other) genomic datasets will enable increasingly well-powered studies of human traits, population history and genome evolution, and will provide population-scale reference databases that expand upon the groundbreaking efforts of the 1000 Genomes Project7,8, Haplotype Reference Consortium9, ExAC10, and GnomAD11.
Our ability as a field to harness these collective data to their full analytic potential depends on the availability of high quality variant calls from large populations of individuals. Accurate population-scale variant calling in turn requires joint analysis of all constituent raw data, where different batches have been aligned and processed systematically using compatible methods. Genome aggregation efforts are stymied by the distributed nature of human genetics research, where different groups routinely employ different alignment, data processing, and variant calling methods. Prior exome/genome aggregation efforts have been forced to obtain raw sequence data and re-perform upstream read alignment and data processing steps prior to joint variant calling due to concerns about batch effects introduced by trivial incompatibilities in processing pipelines10,11. These upstream steps are computationally expensive—representing as much as ~70% of the cost of basic per-sample WGS data analysis—and having to rerun them is inefficient (Supplementary Fig. 1). This computational burden will be increasingly difficult to bear as data volumes grow over coming years.
To help alleviate this burden and enable future genome aggregation efforts, we have forged a collaboration of major U.S. genome sequencing centers and NIH programs, and collaboratively defined data processing and file format standards to guide ongoing and future sequencing studies. Our approach focuses on the harmonization of upstream steps prior to variant calling, thus reducing trivial variability in core pipeline components while promoting the application of diverse and complementary variant calling methods—an area of much ongoing innovation. The guiding principle is the concept of functional equivalence (FE). We define FE to be a shared property of two pipelines that can be run independently on the same raw WGS data to produce two output files that, upon analysis by the same variant caller(s), produce virtually indistinguishable genome variation maps. A key question, of course, is where to draw the FE threshold. There is no one answer; at minimum, we advise that data processing pipelines should introduce much less variability in a single DNA sample than independent WGS replicates of DNA from the same individual.
Here, we present initial FE pipelines developed at five genome centers and show that they yield similar variant calling results—including single nucleotide (SNV), insertion/deletion (indel) and structural variation (SV)—and produce significantly less variability than sequencing replicates. Residual inter-pipeline variability is concentrated at low quality sites and repetitive genomic regions prone to stochastic effects. This work will enable data sharing and genome aggregation at an unprecedented scale.
## Results
### Functional equivalence standard
Towards this goal, we defined a set of required and optional data processing steps and file format standards (Fig. 1; see GitHub page12 for details). We focus here on WGS data analysis, but these guidelines are equally suitable for exome sequencing. These standards are founded in extensive prior work in the area of read alignment13, sequence data analysis8,14,15,16,17,18 and compression14,19, and more broadly in WGS analysis best practices employed at our collective institutes, and worldwide. Notable features of the data processing standard include alignment with BWA-MEM13, adoption of a standard GRCh38 reference genome with alternate loci7,20, and improved duplicate marking. File format standards include a 4-bin base quality scheme, CRAM compression19 and restricted tag usage, which in combination reduced file size >3-fold (from 54 to 17 Gb for a 30× WGS and from 38 to 12 Gb for a 20× WGS). This in turn reduces data storage costs and increases transfer speeds, facilitating data access and sharing.
### FE pipelines show less variability than data replicates
We implemented initial versions of these pipelines at each of the five participating centers, including the four CCDGs as well as the TOPMed Informatics Resource Core, and serially tested and modified them based on alignment statistics (Supplementary Table 1) and variant calling results from a 14-genome test set, with data contributed from each center (see Methods). In order to isolate the effects of alignment and read processing on variant calling, we used fixed variant calling software and parameters: GATK21 for single nucleotide variants (SNVs) and small insertion/deletion (indel) variants, and LUMPY22 for structural variants (SVs). These 14 datasets have diverse ancestry and are composed of well-studied samples from the 1000 Genomes Project7, including four independently-sequenced replicates of NA12878 (CEPH) and two replicates of NA19238 (Yoruban). We tested pairwise variability in SNV, indel and SV callsets generated separately from each of the five pipelines, before and after harmonization, as compared to variability between WGS data replicates (Fig. 2). As expected, the pipelines used by the centers prior to the harmonization effort exhibit strong levels of variability, especially among SV callsets. Much of the variability can be attributed to the use of different incarnations of the GRCh37 reference sequence pre-harmonization, underscoring the importance of including a single reference as part of the standard. Most importantly, variability between harmonized pipelines (mean 0.4%, 1.8%, and 1.1% discordant for SNVs, indels, and SVs, respectively) is an order of magnitude lower than between replicate WGS datasets (mean 7.1%, 24.0%, and 39.9% discordant). Note that absolute levels of discordance are somewhat high in this analysis because we performed per-sample variant calling and included all genomic regions, with minimal variant filtering.
All pipelines show similar levels of sensitivity and accuracy based on Genome in a Bottle (GiaB) calls for NA12878 (ref. 23), although one center is systematically slightly more sensitive and less precise, likely due to a slightly different base quality score recalibration (BQSR) model (Supplementary Fig. 2). The working group decided that this difference was within acceptable limits for applications of the combined data.
### Pipeline validation with Mendelian inheritance
We next applied the final pipeline versions to an independent set of 100 genomes comprising 8 trios from the 1000 Genomes Project7,8 and 19 quads from the Simons Simplex Collection24, and generated separate 100-genome GATK and LUMPY callsets using data from each of the five pipelines. Considering all five callsets in aggregate, the vast majority of GATK variants (97.2%) are identified in data from all five pipelines, with only 1.74% unique to a single pipeline and 1.02% in various minor subsets. Mean pairwise SNV concordance rates are in the range of 99.0–99.9% over all sites and comparisons, and Mendelian error rates are ~0.3% at concordant sites and ~22–24% at discordant sites (Fig. 3). Indel and SV concordance rates are lower, as expected given that these variants are more difficult to map and genotype precisely. Pairwise SNV concordance rates are substantially higher in GiaB high confidence genomic regions comprised predominantly of unique sequence (SNV concordance: 99.7–99.9%; 72% of genome) than in difficult-to-assess regions laden with segmental duplications and high copy repeats (SNV concordance: 92–99%; 8.5% of genome; see Methods). Indeed, 58% of discordant SNV calls are found in the 8.5% most difficult to analyze subset of the genome. Furthermore, the mean quality score of discordant SNV sites is only 0.5% as high as the mean score of concordant SNV sites (16.4% for indels and 90.0% for SVs) (Supplementary Fig. 3). This suggests that many discordant sites are either false positive calls or represent sites that are difficult to measure robustly with current methods. Differences between pipelines are roughly symmetric, with all pipelines achieving similarly low levels of performance at discordant sites, based on pairwise discordance rates and Mendelian error rates (Supplementary Fig. 4), further suggesting that most discordant calls are due to stochastic effects at sites with borderline levels of evidence.
We note that there are some center-specific sources of variability due to residual differences in BQSR models and alignment filtering methods, but that these affect only a trivial fraction of variant calls.
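The pairwise discordance figures quoted in this section are, in essence, a symmetric-difference-over-union calculation on variant call sets. The sketch below illustrates the idea only; it is not the consortium's exact metric, and the site keys and pipeline names are hypothetical.

```python
def pairwise_discordance(calls_a, calls_b):
    """Fraction of the union of two call sets that is seen in only one
    of them. Calls are hashable site keys such as (chrom, pos, ref, alt)."""
    a, b = set(calls_a), set(calls_b)
    union = a | b
    return len(a ^ b) / len(union) if union else 0.0

pipeline_1 = {("chr1", 100, "A", "G"), ("chr1", 200, "C", "T"),
              ("chr2", 500, "G", "A")}
pipeline_2 = {("chr1", 100, "A", "G"), ("chr1", 200, "C", "T")}
print(pairwise_discordance(pipeline_1, pipeline_2))  # one of three sites differs
```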
## Discussion
Here, we have described a simple yet effective approach for harmonizing data processing pipelines through the concept of functional equivalence. This work resolves a key source of batch effects in sequencing data from different genome centers, and thus alleviates a bottleneck for data sharing and collaborative analysis within and among large-scale human genetics studies. Our approach also facilitates accurate comparison to variant databases; researchers that want to analyze their sample(s) against major datasets such as gnomAD, TOPMed, or CCDG should adopt these standards in order to avoid artifacts caused by non-FE sample processing. The standard is intended to be a living document, and maintaining it in a source control repository provides a natural mechanism for versioning. The standard should be updated to include new data types (e.g., long-reads), file formats and tools, as they become widely adopted in the genomics field and deserving of best-practices status. Additionally, our framework for evaluating FE can be directly used to validate improvements (e.g., new alignment software) and quantify backwards-compatibility with older data. Of course, other challenges remain, such as batch effects from library preparation and sequencing25, and persistent regulatory hurdles. Nevertheless, we envision that it will be possible to robustly generate increasingly large genome variation maps and shared annotation resources from these and other programs over the next few years, from diverse groups and analysis methods. Ultimately, we hope that international efforts such as Global Alliance for Genomics & Health (GA4GH)26 will adopt and extend these guidelines to help integrate research and medical genomes worldwide.
## Methods
### Dataset selection
For initial testing, we selected 14 whole genome sequencing datasets based on the following criteria: (1) they include samples of diverse ancestry, including CEPH (NA12878, NA12891, NA12892), Yoruban (NA19238), Luhya (NA19431), and Mexican (NA19648); (2) they were sequenced at multiple different genome centers to deep coverage (>20×) using Illumina HiSeq X technology; (3) they include replicates of multiple samples, including 2 of NA19238 (Yoruban) and 4 of NA12878 (CEPH); (4) they include the extremely well-studied NA12878 genome, for which much ancillary data exists, and (5) they were open access, readily accessible and shareable among the consortium sites. For subsequent characterization of the finalized pipelines, we selected an independent set of 100 samples composed of 8 open-access trios of diverse ancestry from the 1000 Genomes project—including CEPH (NA12878, NA12891, and NA12892), Yoruban (NA19238, NA19239, and NA19240), Southern Han Chinese (HG00512, HG00513, and HG00514), Puerto Rican (HG00731, HG00732, and HG00733), Colombian (HG01350, HG01351, HG01352), Vietnamese (HG02059, HG02060, and HG02061), Gambian (HG02816, HG02817, and HG02818), and Caucasian (NA24143, NA24149, and NA24385)—and 19 quads from the Simons Simplex Collection24. The SSC samples were approved for sequencing by the local institutional review board (IRB) at the New York Genome Center (Biomedical Research Alliance of New York [BRANY] IRB File # 17-08-26-385). All relevant ethical regulations were followed.
### Downsampling data replicates
To eliminate coverage differences as a contributor to variation between sequencing replicates of the same sample (four replicates of NA12878 and two replicates of NA19238), the data replicates were downsampled to match the lowest coverage sample. To obtain initial coverage, all replicates were aligned to a build 37 reference using speedseq16 (v 0.1.0). Mean coverage for each BAM file was calculated using the Picard CollectWgsMetrics tool (v2.4.1). For each sample, a downsampling ratio was calculated using the lowest coverage as the numerator and the sample’s coverage as the denominator. This ratio was used as the PROBABILITY parameter for the Picard DownsampleSam tool, along with RANDOM_SEED = 1 and STRATEGY = ConstantMemory. The resulting BAM was converted to FASTQ using the script bamtofastq.py from the speedseq repository.
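The per-sample downsampling ratio described above is simple arithmetic; a minimal sketch of the calculation (the coverage values below are hypothetical, for illustration only, not measurements from this study):

```python
# Mean coverage per replicate, e.g. as reported by Picard
# CollectWgsMetrics (hypothetical values).
coverages = {"rep1": 34.2, "rep2": 29.8, "rep3": 31.5, "rep4": 30.6}

# Downsample every replicate to match the lowest coverage: the ratio
# lowest / own coverage is what is passed as the PROBABILITY parameter
# of Picard DownsampleSam.
lowest = min(coverages.values())
ratios = {name: lowest / cov for name, cov in coverages.items()}

# The lowest-coverage replicate keeps all of its reads (ratio 1.0).
```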
### Alignment and data processing pipelines
The pre-harmonization pipeline from the McDonnell Genome Institute at the Washington University School of Medicine aligns reads to the GRCh37-lite reference using speedseq (v0.1.0)16. This includes alignment using bwa (v0.7.10-r789)13, duplicate marking using samblaster (v0.1.22)15, and sorting using sambamba (v0.5.4)18.
The post-harmonization pipeline from the McDonnell Genome Institute at the Washington University School of Medicine aligns each read group separately to the GRCh38 reference using bwa-mem (v0.7.15-r1140) with the parameters “-K 100000000 -p -Y”. MC and MQ tags are added using samblaster (v0.1.24) with the parameters “-a --addMateTags”. Read group BAM files are merged together with “samtools merge” (v1.3.1-2). The resulting file is name-sorted with “sambamba sort -n” (v0.6.4). Duplicates are marked using Picard MarkDuplicates (v2.4.1) with the parameter “ASSUME_SORT_ORDER = queryname”, then the results are coordinate sorted using “sambamba sort”. A base quality recalibration table is generated using GATK BaseRecalibrator (v3.6) with knownSites files (dbSNP138, Mills and 1 kg indels, and known indels) from the GATK resource bundle (https://console.cloud.google.com/storage/browser/genomics-public-data/resources/broad/hg38/v0) and parameters “--preserve_qscores_less_than 6 -dfrac .1 -nct 4 -L chr1 -L chr2 -L chr3 -L chr4 -L chr5 -L chr6 -L chr7 -L chr8 -L chr9 -L chr10 -L chr11 -L chr12 -L chr13 -L chr14 -L chr15 -L chr16 -L chr17 -L chr18 -L chr19 -L chr20 -L chr21 -L chr22”. The base recalibration table is applied using GATK PrintReads with the parameters “-preserveQ 6 -BQSR “{bqsrt}” -SQQ 10 -SQQ 20 -SQQ 30 --disable_indel_quals”. Finally, the output is converted to CRAM using “samtools view”. 
The pre-harmonization pipeline from the Broad Institute of MIT and Harvard contains the following steps:
- Align with bwa-mem v0.7.7-r441: bwa mem -M -t 10 -p GRCh37.fasta
- Merge the aligned BAM with the original unaligned BAM and sort with Picard 2.8.3: MergeBamAlignment ADD_MATE_CIGAR = true ALIGNER_PROPER_PAIR = false UNMAP_CONTAMINANT_READS = false SORT_ORDER = coordinate
- Mark duplicates with Picard 2.8.3: MarkDuplicates
- Find target indels to fix with GATK 3.4-g3c929b0: RealignerTargetCreator -known dbSnp.138.vcf -known mills.vcf -known 1000genome.vcf
- Fix indel alignments with GATK 3.4-g3c929b0: IndelRealigner -known dbSnp.138.vcf -known mills.vcf -known 1000genome.vcf
- Create the recalibration table using GATK 3.4-g3c929b0: BaseRecalibrator -knownSites dbSnp.138.vcf -knownSites mills.vcf -knownSites 1000genome.vcf
- Apply base recalibration using GATK 3.4-g3c929b0: PrintReads -disable_indel_quals -emit_original_quals

The post-harmonization pipeline from the Broad Institute of MIT and Harvard contains the following steps:
- Align with bwa-mem 0.7.15-r1140: bwa mem -K 100000000 -p -v 3 -t 16 -Y GRCh38.fasta
- Merge the aligned BAM with the original unaligned BAM with Picard 2.16.0: MergeBamAlignment EXPECTED_ORIENTATIONS = FR ATTRIBUTES_TO_RETAIN = X0 ATTRIBUTES_TO_REMOVE = NM ATTRIBUTES_TO_REMOVE = MD REFERENCE_SEQUENCE = {ref_fasta} PAIRED_RUN = true SORT_ORDER = unsorted CLIP_ADAPTERS = false MAX_INSERTIONS_OR_DELETIONS = -1 PRIMARY_ALIGNMENT_STRATEGY = MostDistant UNMAPPED_READ_STRATEGY = COPY_TO_TAG ALIGNER_PROPER_PAIR_FLAGS = true UNMAP_CONTAMINANT_READS = true ADD_PG_TAG_TO_READS = false
- Mark duplicates with Picard 2.16.0: MarkDuplicates ASSUME_SORT_ORDER = “queryname”
- Sort with Picard 2.16.0: SortSam SortOrder = coordinate
- Create BQSR table using GATK 4.beta.5: BaseRecalibrator
--knownSites dbSnp.138.vcf --knownSites mills.vcf --knownSites 1000genome.vcf
- Apply recalibration using GATK 4.beta.5:
ApplyBQSR -SQQ 10 -SQQ 20 -SQQ 30
- Convert output to cram with SamTools v 1.3.1: samtools view -C -T GRCh38.fasta
In the HGSC pre-harmonized WGS protocol (https://github.com/HGSC-NGSI/HgV_Protocol_Descriptions/blob/master/hgv_resequencing.md), reads are mapped to the GRCh37d reference with bwa-mem (v0.7.12), samtools (v1.3) fixmate, sorting and duplicate marking with sambamba (v0.5.9), base recalibration and realignment with GATK (v3.4.0), and the quality scores are binned and tags removed with bamUtil squeeze (v1.0.13). Multiplexed samples follow the same steps up through sorting and duplicate marking, resulting in sequencing-event BAMs. The BAMs are merged and duplicates marked using sambamba (v0.5.9), followed by the recalibration, realignment and binning described above.
The HGSC harmonized WGS protocol (https://github.com/HGSC-NGSI/HgV_Protocol_Descriptions/blob/master/hgv_ccdg_resequencing.md) aligns each read group to the GRCh38 reference using bwa-mem (0.7.15) with the parameters “-K 100000000 -Y”. MC and MQ tags are added using samblaster (v0.1.24) with the parameters “-a --addMateTags”. The resulting file is name-sorted with “sambamba sort -n” (v0.6.4). Duplicates are marked using Picard MarkDuplicates (v2.4.1) with the parameter “ASSUME_SORT_ORDER = queryname”, then the results are coordinate-sorted using “sambamba sort”. For multiplexed samples, these sequence-event BAMs are then merged with sambamba (v0.6.4) merge, name-sorted, duplicate-marked and coordinate-sorted with the same tools as above. A base quality recalibration table is generated using GATK BaseRecalibrator (v3.6) with knownSites files (dbSNP138, Mills and 1 kg indels, and known indels) from the GATK resource bundle (https://console.cloud.google.com/storage/browser/genomics-public-data/resources/broad/hg38/v0) and parameters “--preserve_qscores_less_than 6 -dfrac .1 -nct 4 -L chr1 -L chr2 -L chr3 -L chr4 -L chr5 -L chr6 -L chr7 -L chr8 -L chr9 -L chr10 -L chr11 -L chr12 -L chr13 -L chr14 -L chr15 -L chr16 -L chr17 -L chr18 -L chr19 -L chr20 -L chr21 -L chr22”. The base recalibration table is applied using GATK PrintReads with the parameters “-preserveQ 6 -BQSR “${bqsrt}” -SQQ 10 -SQQ 20 -SQQ 30 --disable_indel_quals”. Finally, the output is converted to CRAM using “samtools view”.
The pre-harmonization pipeline from the New York Genome Center aligns each read group separately to the Thousand Genomes version of build 37 reference sequence using bwa mem –M (v0.7.8). The aligned files are merged using Picard MergeSamFiles (v1.83), and duplicates are marked using Picard MarkDuplicates (v1.83). Indel realignment and base quality recalibration are both performed using the GATK (v3.4-0) commands RealignerTargetCreator, IndelRealigner, BaseRecalibrator, and PrintReads.
The post-harmonization pipeline from the New York Genome Center aligns each read group separately to the GRCh38 reference using bwa-mem (v0.7.15) with the parameters “-Y -K 100000000”. Picard (v2.4.1) FixMateInformation is run with the parameter “FixMateInformation = TRUE”. Read group BAM files are merged together with Picard MergeSamFiles (v2.4.1) and the parameter “SORT_ORDER = queryname”. Duplicates are marked using Picard MarkDuplicates (v2.4.1), then the results are coordinate sorted using Picard SortSam (v2.4.1) with the parameter “SORT_ORDER = coordinate”. A base quality recalibration table is generated using GATK BaseRecalibrator (v3.5) with knownSites files (dbSNP138, Mills and 1 kg indels, and known indels) from the GATK resource bundle (https://console.cloud.google.com/storage/browser/genomics-public-data/resources/broad/hg38/v0) and parameters “--preserve_qscores_less_than 6 -L grch38.autosomes.intervals”. The base recalibration table is applied using GATK PrintReads with the parameters “-preserveQ 6 -SQQ 10 -SQQ 20 -SQQ 30”. Finally, the output is converted to CRAM using “samtools view -C” (v1.3.1).
The pre-harmonization pipeline from the TOPMed Informatics Resource Center at the University of Michigan aligns reads using default options in the GotCloud alignment pipeline17, available at https://github.com/statgen/gotcloud. It aligns the sequence reads to the GRCh37 reference with the decoy sequences used in the 1000 Genomes Project. The raw sequence was aligned using bwa mem (v0.7.13-r1126)13 and sorted by samtools (v1.3.1). Duplicate marking and base quality recalibration were performed jointly using bamUtil dedup (v1.0.14), as in GotCloud17.
The post-harmonization pipeline from the TOPMed Informatics Resource Center at the University of Michigan (described in https://github.com/statgen/docker-alignment) first aligns each read group to the GRCh38 reference using bwa-mem (v0.7.15-r1140) with the parameters “-K 100000000 -Y -R [read_group_id]”. To add MC and MQ tags, samblaster (v0.1.24) is used with the parameters “-a --addMateTags”. Each BAM file corresponding to a read group is sorted by genomic coordinate using “samtools sort” (v1.3.1) and merged together using “samtools merge” (v1.3.1). Duplicate marking and base quality recalibration are performed jointly using bamUtil dedup_lowmem (v1.0.14) with parameters “--allReadNames --binCustom --binQualS 0:2,3:3,4:4,5:5,6:6,7:10,13:20,23:30,33:40 --recab --refFile [reference_fasta_file] --dbsnp [dbsnp_b142_vcf_file] --in [input_bam] --out -.ubam”, and the piped output (in uncompressed BAM format) is converted into a CRAM file using samtools view.
### Calculation of alignment statistics
A total of 184 alignment statistics were generated for all standardized CRAM files from each center with AlignStats software. Results include metrics for both the entire CRAM file and for the subset of read-pairs with at least one read mapping to the autosome or sex chromosomes. We examined all metrics across the five CRAMs for each of the 15 samples to ensure that any differences were consistent with the various options allowed in the functional equivalence specification. Supplementary Table 1 provides examples of these metrics, and full description of all metrics can be found online (https://github.com/jfarek/alignstats).
### Variant calling for the 14-sample analysis
SNPs and indels were called for each center’s CRAM/BAM files using GATK21 version 3.5-0-g36282e4 HaplotypeCaller with the following parameters:
--genotyping_mode DISCOVERY
--standard_min_confidence_threshold_for_calling 30
--standard_min_confidence_threshold_for_emitting 0
For the pre-standardization files, the 1000 genomes phase 3 reference sequence from the GATK reference bundle ftp://ftp.broadinstitute.org/pub/svtoolkit/reference_metadata_bundles/1000G_phase3_25Jan2015.tar.gz was used. For the post-standardization files, the 1000 Genomes Project version of GRCh38DH (http://ftp.1000genomes.ebi.ac.uk/vol1/ftp/technical/reference/GRCh38_reference_genome/) was used.
Structural variants (SVs) were called for each center’s CRAM/BAM files using lumpy22 and svtools (https://github.com/hall-lab/svtools). First, split reads and reads with discordant insert sizes or orientations were extracted from the CRAM/BAM files using extract-sv-reads in the docker image halllab/extract-sv-reads@sha256:192090f72afaeaaafa104d50890b2fc23935c8dc98988a9b5c80ddf4ec50f70c using the following parameters:
Next, SV calls were made using lumpyexpress (https://github.com/arq5x/lumpy-sv) from the docker image halllab/lumpy@sha256:59ce7551307a54087e57d5cec89b17511d910d1fe9fa3651c12357f0594dcb07 with the -P parameter, as well as -x to exclude regions contained in the BED file exclude.cnvnator_100bp.GRCh38.20170403.bed (exclude.cnvnator_100bp.112015.bed for pre-standardization samples). Both exclude files are available at https://github.com/hall-lab/speedseq/tree/master/annotations.
Finally, the SV calls were genotyped using svtyper from the docker image halllab/svtyper@sha256:21d757e77dfc52fddeab94acd66b09a561771a7803f9581b8cca3467ab7ff94a.
### Defining genomic regions
The reference genome sequence is not uniformly amenable to analysis—some regions with high amounts of repetitive sequence are difficult to align and prone to misleading analyses, while other regions comprised of mostly unique sequence can be more confidently interpreted. To gain a better understanding of how pipeline concordance differs by region, we divided the reference sequence into three broad categories. The easy genomic regions consist of the GiaB gold standard high confidence regions, lifted over to build 38. The hard regions consist of centromeres (https://www.ncbi.nlm.nih.gov/projects/genome/assembly/grc/human/data/38/Modeled_regions_for_GRCh38.tsv), microsatellite repeats (satellite entries from http://hgdownload.soe.ucsc.edu/goldenPath/hg38/bigZips/hg38.fa.out.gz), low complexity regions (https://github.com/lh3/varcmp/raw/master/scripts/LCR-hs38.bed.gz), and windows determined to have high copy number (more than 12 copies per genome across 409 samples). Any regions overlapping GiaB high confidence regions are removed from the set of hard regions. All remaining regions are classified as medium.
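The three-way split above amounts to a precedence rule; here is a toy single-chromosome sketch in Python (the interval coordinates are invented, and the real classification operates on BED files with bedtools rather than in Python):

```python
def classify_region(interval, giab_regions, hard_annotations):
    """Classify a half-open (start, end) interval: overlap with a GiaB
    high-confidence region wins (easy); otherwise overlap with a hard
    annotation (centromere, satellite repeat, low-complexity region,
    high copy number window) makes it hard; everything else is medium."""
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]

    if any(overlaps(interval, g) for g in giab_regions):
        return "easy"
    if any(overlaps(interval, h) for h in hard_annotations):
        return "hard"
    return "medium"
```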
### Cross-center variant comparisons for the 14-sample analysis
The VCF files produced by GATK for both the pre- and post-standardization experiments were compared using hap.py from the docker image pkrusche/hap.py:v0.3.9 using the --preprocess-truth parameter.
The four data replicates of NA12878 were compared to the NA12878 gold standards in the GiaB high-confidence regions to obtain sensitivity and precision measurements. The post-standardization VCFs were first lifted over to GRCh37 using the Picard LiftoverVcf tool (v2.9.0) and the chain files hg38ToHg19.over.chain.gz and hg19ToGRCh37.over.chain.gz downloaded from http://crossmap.sourceforge.net/#chain-file. To reduce artifacts from the liftover that negatively impacted sensitivity, the gold standard files were lifted over to the build 38 reference and back to build 37, excluding any variants that didn’t lift over in both directions.
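The round-trip exclusion step can be sketched as a filter; `lift_fwd` and `lift_back` below are hypothetical stand-ins for Picard LiftoverVcf with the chain files named above, returning None when a position fails to lift:

```python
def roundtrip_filter(positions, lift_fwd, lift_back):
    """Keep only variant positions that lift over successfully in both
    directions (build 37 -> build 38 -> build 37)."""
    kept = []
    for pos in positions:
        fwd = lift_fwd(pos)
        if fwd is not None and lift_back(fwd) is not None:
            kept.append(pos)
    return kept
```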
Values for sensitivity (METRIC.Recall) and precision (METRIC.Precision) were parsed out of the *.summary.csv file produced by hap.py for each comparison, using only variants with the PASS filter value set.
The downsampled data replicates of NA12878 and NA19238 aligned by the same center were compared to each other in a pairwise fashion. Pairwise comparisons between centers were performed for each non-downsampled aligned file. The variant discordance rates between pairs were calculated using the true positive, false negative, and false positive counts from the *.extended.csv output file from hap.py: (TRUTH.FN + QUERY.FP)/(TRUTH.TP + TRUTH.FN + QUERY.FP). The rates reported are only for PASS variants but across the whole genome.
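The discordance formula maps directly onto the hap.py counts; a minimal sketch:

```python
def discordance_rate(truth_fn, query_fp, truth_tp):
    """Pairwise variant discordance rate from the hap.py extended.csv
    counts, as defined in the text:
    (TRUTH.FN + QUERY.FP) / (TRUTH.TP + TRUTH.FN + QUERY.FP)."""
    return (truth_fn + query_fp) / (truth_tp + truth_fn + query_fp)
```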
The VCF files of SVs produced by lumpy and svtyper were converted to BEDPE using the command “svtools vcftobedpe” from the docker container halllab/svtools@sha256:f2f3f9c788beb613bc26c858f897694cd6eaab450880c370bf0ef81d85bf8d45. The coordinates are padded with 1 bp on each side to be compatible with bedtools pairtopair. The pairwise comparisons are performed using the bedtools pairtopair command (version 2.23.0), then summarized using a python script (compare_single_sample_based_on_strand.py in https://github.com/CCDG/Pipeline-Standardization). The variant discordance rates between pairs are calculated with the following formula: (discordant + 0-only + 1-only + discordant_discordant_type)/(match + discordant + match_discordant_type + discordant_discordant_type + 0-only + 1-only).
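As a sketch, the SV discordance formula can be computed directly from the comparison category counts (category names follow the text; in a real run the counts come from the pairtopair summary script):

```python
def sv_discordance_rate(c):
    """SV discordance rate between a pair of pipelines; `c` maps each
    comparison category name to its count."""
    numerator = (c["discordant"] + c["0-only"] + c["1-only"]
                 + c["discordant_discordant_type"])
    denominator = (c["match"] + c["discordant"]
                   + c["match_discordant_type"]
                   + c["discordant_discordant_type"]
                   + c["0-only"] + c["1-only"])
    return numerator / denominator
```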
### Variant calling for 100-sample analysis
SNPs and indels were called using the GATK best practices pipeline, including per-sample variant discovery using HaplotypeCaller with the following parameters:
“-ERC GVCF -GQB 5 -GQB 20 -GQB 60 -variant_index_type LINEAR -variant_index_parameter 128000”. Next, GVCFs from all 100 samples were merged with GATK CombineGVCFs. Genotypes were refined with GATK GenotypeGVCFs with the following parameters: “-stand_call_conf 30 -stand_emit_conf 0”. Variants with no genotyped allele in any sample are removed with the GATK command SelectVariants and the parameter “--removeUnusedAlternates”, and variant lines where the only remaining allele is a symbolic deletion (*:DEL) are also removed using grep.
SVs were called using the svtools best practices pipeline (https://github.com/hall-lab/svtools/blob/master/Tutorial.md). First, per-sample SV calls were generated with extract-sv-reads, lumpyexpress, and svtyper using the same versions and parameters as the 14-sample analysis. Next, the calls were merged into 100-sample callsets for each pipeline using the following sequence of commands and parameters from the docker container halllab/svtools@sha256:f2f3f9c788beb613bc26c858f897694cd6eaab450880c370bf0ef81d85bf8d45:
svtools lsort
svtools lmerge -f 20
create_coordinates
The merged calls were then re-genotyped for each sample using the previous svtyper command. Copy number histograms were generated for each sample using the command cnvnator_wrapper.py with window size 100 (-w 100) in the docker container halllab/cnvnator@sha256:c41e9ce51183fc388ef39484cbb218f7ec2351876e5eda18b709d82b7e8af3a2. Each SV call was annotated with its copy number from the histogram file using the command “svtools copynumber” in that same docker container with the parameters “-w 100 -c coordinates”. Finally, the per-sample genotyped and annotated VCFs were merged back together and refined with the following sequence of commands in the svtools docker container:
svtools vcfpaste
svtools afreq
svtools vcftobedpe
svtools bedpesort
svtools prune -s -d 100 -e "AF"
svtools bedpetovcf
svtools classify -a repeatMasker.recent.lt200millidiv.LINE_SINE_SVA.GRCh38.sorted.bed.gz -m large_sample
### Cross-center variant comparisons for the 100-sample analysis
The VCF of SNPs and indels was split into per-sample VCFs using the command “bcftools view” with the following parameters: “-a -c 1:nref”. Additionally, any remaining variant lines with only the symbolic allele (*) remaining were removed. Pairwise comparisons between the same sample processed by different pipelines were performed using hap.py with the same commands as the 14-sample analysis. Variant concordance rates per sample were calculated using results from the extended.csv output file produced by hap.py with the following formula: TRUTH.TP/(TRUTH.TP + TRUTH.FN + QUERY.FP). The reported statistics were calculated using all variants genome-wide except those that were marked LowQual by GATK. No VQSR-based filtering was used. Figure 3a reports the mean rates across all 100 samples for each pairwise comparison of pipelines.
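A minimal sketch of the per-sample concordance calculation and the per-pair mean reported in Fig. 3a (the counts below are hypothetical, not from the study):

```python
def concordance_rate(truth_tp, truth_fn, query_fp):
    """Per-sample variant concordance between two pipelines, from the
    hap.py extended.csv counts: TRUTH.TP / (TRUTH.TP + TRUTH.FN + QUERY.FP)."""
    return truth_tp / (truth_tp + truth_fn + query_fp)

# Figure 3a reports the mean of this per-sample rate across all 100
# samples for each pipeline pair (two hypothetical samples shown).
rates = [concordance_rate(980, 10, 10), concordance_rate(970, 20, 10)]
mean_rate = sum(rates) / len(rates)
```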
The per-pipeline SV VCFs were converted to BEDPE using the command “svtools vcftobedpe” in the docker container halllab/svtools@sha256:f2f3f9c788beb613bc26c858f897694cd6eaab450880c370bf0ef81d85bf8d45. The variants were compared using bedtools pairtopair as in the 14-sample analysis. Next they were classified into hard, medium, and easy genomic regions by intersecting each breakpoint with BED files describing the regions using “bedtools pairtobed”. Variants were classified by the most difficult region that either of their breakpoints overlapped (see compare_round3_by_region.sh in https://github.com/CCDG/Pipeline-Standardization). Then, the variants were extracted and annotated in per-sample BEDPE files with the script compare_based_on_strand_output_bedpe.py (in https://github.com/CCDG/Pipeline-Standardization). The BEDPE files were converted to VCF using “svtools bedpetovcf” and sorted using “svtools vcfsort”. The number of shared and pipeline-unique variants were counted using “bcftools query” (version 1.6) to extract the genomic region and concordance status of each variant, then summarized with “bedtools groupby” (v2.23.0). The rates of shared variants per sample were calculated using the output of this file with the following formula: match/(match + 0-only + 1-only).
### Mendelian error (ME) rate calculation
SNPs and indels that were classified by hap.py into categories (shared between pipelines, or unique to one pipeline) were further characterized by looking at the ME rate for each of the offspring in the trios/quads. For each offspring in the sample set, the parents and offspring sample VCFs output by hap.py were merged together using “bcftools merge --force-samples” (v1.3), and the genotypes from the first pipeline in the pair were extracted. Any variants with missing genotypes or uniformly homozygous genotypes were excluded using “bcftools view -g ^miss” and “bcftools view -g het”. A custom python script (classify_mie.py in https://github.com/CCDG/Pipeline-Standardization) was used to classify each variant as uninformative, informative with no Mendelian error, or informative with Mendelian error. Total informative error and non-error sites in each genomic region were counted for shared sites and unique sites separately, and ME rate was calculated by dividing the number of ME sites by the total number of informative sites. A similar calculation was performed for the per-sample SV VCFs produced by the SV concordance calculations. Fig. 3b and Supplementary Fig. 4 report the mean ME rate across 44 offspring-parent trios for each pairwise pipeline comparison.
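The core Mendelian-consistency check behind the ME rate can be sketched as follows; this simplified version omits the informative/uninformative distinction made by the classify_mie.py script referenced above:

```python
def mendelian_consistent(child, mother, father):
    """True when the child genotype can be formed by drawing one allele
    from each parent. Genotypes are unordered pairs of allele indices,
    e.g. (0, 1) for a heterozygote."""
    return any(
        tuple(sorted((m, f))) == tuple(sorted(child))
        for m in mother
        for f in father
    )

def me_rate(n_me_sites, n_informative_sites):
    """ME rate = Mendelian-error sites / total informative sites."""
    return n_me_sites / n_informative_sites
```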
### Variant quality evaluation
To evaluate possible causes of remaining differences between pipelines, we extracted variant quality scores for each variant type and summarized them by concordance status in each pairwise pipeline comparison across 100 samples. For SNPs and indels, the QUAL field was extracted along with the concordance annotation from the per-sample hap.py comparison VCFs using “bcftools query” (version 1.6). The median QUAL score for each category was reported using “bedtools groupby”. For SVs, MSQ (mean sample quality) is a more informative measure of variant quality, so this field was extracted and summarized in a similar way.
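The median-by-category summary can be sketched in a few lines (the QUAL values below are hypothetical; in the actual analysis the pairs come from “bcftools query” and the grouping from “bedtools groupby”):

```python
from collections import defaultdict
from statistics import median

# (concordance status, QUAL) pairs per variant -- hypothetical values.
records = [("shared", 900.0), ("shared", 750.0), ("shared", 810.0),
           ("unique", 60.0), ("unique", 35.0), ("unique", 50.0)]

by_status = defaultdict(list)
for status, qual in records:
    by_status[status].append(qual)

# Median QUAL per concordance category.
median_qual = {status: median(quals) for status, quals in by_status.items()}
```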
### Cost calculations
To calculate the fraction of per-sample pipeline cost attributed to upstream steps, the Broad Institute production tables were queried for total workflow cost and HaplotypeCaller cost. The upstream cost was calculated as the difference between the two. All successful pipeline runs that didn’t use call caching from October 31, 2017 to May 9, 2018 were included, totaling 13,704 pipeline runs on 13,295 distinct samples.
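The cost calculation is a difference and a ratio; a sketch with hypothetical dollar amounts (the study's actual figures come from the Broad Institute production tables):

```python
# Per-run costs in USD (hypothetical values for illustration).
total_workflow_cost = 5.00
haplotypecaller_cost = 1.25

# Upstream (pre-variant-calling) cost is the difference between the
# total workflow cost and the HaplotypeCaller cost.
upstream_cost = total_workflow_cost - haplotypecaller_cost
upstream_fraction = upstream_cost / total_workflow_cost
```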
### Code availability
All custom scripts used for the analysis are available under an MIT license at https://github.com/CCDG/Pipeline-Standardization/tree/master/scripts.
## Data availability
The 14 input WGS data sets (10 original data sets and 4 downsampled data sets) used in the initial development of the pipeline are available in the SRA under the BioProject PRJNA393319. Files in unaligned BAM format, as well as CRAMs as aligned by all five centers, are available via the Download tab on the RunBrowser pages (for testing additional pipelines for functional equivalence). The WGS data from 19 Simons Simplex Collection quad families (accession SFARI_SSC_WGS_P, family codes 11026, 11063, 11069, 11505, 11671, 12083, 12121, 12202, 12261, 12405, 12480, 13226, 13540, 13556, 13567, 13888, 13996, 14497, 14509) are available upon approved application from SFARI Base. The WGS data from the 8 trios are available in the SRA under the BioProject PRJNA477862.
## References
1. The UK10K Consortium et al. The UK10K project identifies rare variants in health and disease. Nature 526, 82–90 (2015).
2. Collins, F. S. & Varmus, H. A new initiative on precision medicine. N. Engl. J. Med. 372, 793–795 (2015).
3. Caulfield, M. et al. The 100,000 Genomes Project Protocol. figshare https://doi.org/10.6084/m9.figshare.4530893.v2 (2017).
4. Alliance Aviesan. Genomic Medicine France 2025 (Aviesan, 2017).
5. Felsenfeld, A. Centers for Common Disease Genomics. National Human Genome Research Institute https://www.genome.gov/27563570 (2016).
6. Sanders, S. J. et al. Whole genome sequencing in psychiatric disorders: the WGSPD consortium. Nat. Neurosci. 20, 1661–1668 (2017).
7. 1000 Genomes Project Consortium et al. A global reference for human genetic variation. Nature 526, 68–74 (2015).
8. 1000 Genomes Project Consortium et al. An integrated map of genetic variation from 1092 human genomes. Nature 491, 56–65 (2012).
9. McCarthy, S. et al. A reference panel of 64,976 haplotypes for genotype imputation. Nat. Genet. 48, 1279–1283 (2016).
10. Lek, M. et al. Analysis of protein-coding genetic variation in 60,706 humans. Nature 536, 285–291 (2016).
11. Karczewski, K. J. & Francioli, L. The Genome Aggregation Database (gnomAD). MacArthur Lab https://macarthurlab.org/2017/02/27/the-genome-aggregation-database-gnomad/ (2017).
12. Regier, A. P. Pipeline-Standardization. GitHub https://github.com/CCDG/Pipeline-Standardization/blob/master/PipelineStandard.md (2017).
13. Li, H. Aligning sequence reads, clone sequences and assembly contigs with BWA-MEM. Preprint at http://arxiv.org/abs/1303.3997 (2013).
14. Li, H. et al. The Sequence Alignment/Map format and SAMtools. Bioinformatics 25, 2078–2079 (2009).
15. Faust, G. G. & Hall, I. M. SAMBLASTER: fast duplicate marking and structural variant read extraction. Bioinformatics 30, 2503–2505 (2014).
16. Chiang, C. et al. SpeedSeq: ultra-fast personal genome analysis and interpretation. Nat. Methods 12, 966–968 (2015).
17. Jun, G., Wing, M. K., Abecasis, G. R. & Kang, H. M. An efficient and scalable analysis framework for variant extraction and refinement from population-scale DNA sequence data. Genome Res. 25, 918–925 (2015).
18. Tarasov, A. et al. Sambamba: fast processing of NGS alignment formats. Bioinformatics 31, 2032–2034 (2015).
19. Hsi-Yang Fritz, M., Leinonen, R., Cochrane, G. & Birney, E. Efficient storage of high throughput DNA sequencing data using reference-based compression. Genome Res. 21, 734–740 (2011).
20. Church, D. M. et al. Extending reference assembly models. Genome Biol. 16, 13 (2015).
21. DePristo, M. A. et al. A framework for variation discovery and genotyping using next-generation DNA sequencing data. Nat. Genet. 43, 491–498 (2011).
22. Layer, R. M., Chiang, C., Quinlan, A. R. & Hall, I. M. LUMPY: a probabilistic framework for structural variant discovery. Genome Biol. 15, R84 (2014).
23. Zook, J. M. et al. Integrating human sequence data sets provides a resource of benchmark SNP and indel genotype calls. Nat. Biotechnol. 32, 246–251 (2014).
24. Turner, T. N. et al. Genomic patterns of de novo mutation in simplex autism. Cell 171, 710–722 (2017).
25. Leek, J. T. et al. Tackling the widespread and critical impact of batch effects in high-throughput data. Nat. Rev. Genet. 11, 733–739 (2010).
26. Global Alliance for Genomics and Health. A federated ecosystem for sharing genomic, clinical data. Science 352, 1278–1280 (2016).
## Acknowledgements
We thank NHGRI and NHLBI program staff for supporting this effort, and Jose M. Soto for calculating pipeline costs. This work was funded by NHGRI CCDG awards to Washington University in St. Louis (UM1 HG008853), Broad Institute of MIT and Harvard (UM1 HG008895), Baylor College of Medicine (UM1 HG008898), and the New York Genome Center (UM1 HG008901), the NHGRI GSP coordinating center (U24 HG008956), and an NHLBI TOPMed Informatics Research Center award to the University of Michigan (3R01HL-117626-02S1) as well as grants to B.N. (U01 HG00908, R01 MH107649), H.K (1 R21 HL133758-01, 1 U01 HL137182-01) and G.A. (4 R01 HL117626-04). The following DNA samples were obtained from the NHGRI Sample Repository for Human Genetic Research at the Coriell Institute for Medical Research: NA12878, NA12891, NA12892, NA19238, NA19431, NA19648, HG00512, HG00513, HG00514, HG00731, HG00732, HG00733, NA19239, NA19240, HG01350, HG01351, HG01352, HG02059, HG02060, HG02061, HG02816, HG02817, HG02818, NA24143, NA24149, NA24385.
## Author information
### Contributions
I.H., B.N. and G.A. conceived the approach and designed the study with M.Z. and W.S. Y.F., D.L., H.K., A.R., D.H., T.M., W.S., M.Z. and I.H. developed and tested the FE standard. Y.F., D.L., A.R., O.K., H.K., B.C., M.K., E.B., D.A., A.E., G.A., W.S., M.Z. and I.H. developed the center-specific pipelines. A.R., Y.F., D.L., O.K., H.K., D.H., B.C., M.K., E.B., J.X. and Y.Z. performed the data analysis. J.X., Y.Z. and T.M. reviewed and improved the standards document and supported the logistics of data sharing. H.L. provided important updates to software and intellectual guidance. I.H. and A.R. led manuscript preparation with contributions from E.B., B.N., W.S., O.K., D.H., M.Z., H.K., Y.F., G.A. and D.L.
### Corresponding author
Correspondence to Ira M. Hall.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Regier, A.A., Farjoun, Y., Larson, D.E. et al. Functional equivalence of genome sequencing analysis pipelines enables harmonized variant calling across human genetics projects. Nat Commun 9, 4038 (2018). https://doi.org/10.1038/s41467-018-06159-4