https://socratic.org/questions/58af474311ef6b049ea50cc1
# Question #50cc1
Feb 23, 2017
See below.
#### Explanation:
First of all, almost exactly the same amount of time would "go by" on each planet, i.e. 24 hours. Even though the planets are moving at terrifying speeds, and Jupiter's surface gravity is more than double Earth's, I don't think there are any relativistic effects worth noting here.
Also, a simple pendulum has approximate period:
$T = 2 \pi \sqrt{\frac{l}{g}}$
On Earth that means the pendulum has, as your post suggests, a period of:
${T}_{e} = 2 \pi \sqrt{\frac{0.25}{9.81}} \approx 1 s$
On Jupiter, that becomes:
${T}_{j} = 2 \pi \sqrt{\frac{0.25}{24}} \approx 0.641 s$
So it seems that if you counted oscillations on Jupiter and assumed each one still took 1 second, you would be fooled into thinking that 24 hours had passed after 86,400 swings, when in fact only $0.641 \cdot 24 \approx 15.4$ hours had passed.
I hope I haven't misunderstood you :(
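The arithmetic above is easy to check numerically. Here is a quick sketch (the helper name `pendulum_period` is made up for illustration; the values of $g$ are the ones used in this answer):

```python
from math import pi, sqrt

def pendulum_period(length_m, g):
    # Small-angle simple-pendulum approximation: T = 2*pi*sqrt(l/g)
    return 2 * pi * sqrt(length_m / g)

L = 0.25                          # pendulum length from the question, in metres
g_earth, g_jupiter = 9.81, 24.0   # surface gravities used above, in m/s^2

T_e = pendulum_period(L, g_earth)    # ~1.00 s
T_j = pendulum_period(L, g_jupiter)  # ~0.64 s

# Counting 86400 swings on Jupiter while believing each takes 1 s:
actual_hours = 86400 * T_j / 3600
print(round(T_e, 3), round(T_j, 3), round(actual_hours, 1))
```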
http://mathonline.wikidot.com/the-binomial-theorem
The Binomial Theorem
The Binomial Theorem
Consider the expansion of the binomial $(1 + x)^n$ for $n \in \{ 0, 1, 2, ... \}$. When $n = 0$ we have that:
(1)
\begin{align} \quad (1 + x)^0 = 1 \end{align}
When $n = 1$, $n = 2$, and $n = 3$ we get:
(2)
\begin{align} \quad (1 + x)^1 = 1 + x \end{align}
(3)
\begin{align} \quad (1 + x)^2 = 1 + 2x + x^2 \end{align}
(4)
\begin{align} \quad (1 + x)^3 = 1 + 3x + 3x^2 + x^3 \end{align}
Notice that if we list the terms of the expansion of $(1 + x)^n$ in ascending order then the coefficients of these terms match the numbers in each row $n$ of Pascal's Triangle. More generally, this property is also apparent in the expansion of the binomial $(x + y)^n$. For $n = 0$, $n = 1$, $n = 2$, and $n = 3$ we have that:
(5)
\begin{align} \quad (x + y)^0 = 1 \end{align}
(6)
\begin{align} \quad (x + y)^1 = x + y \end{align}
(7)
\begin{align} \quad (x + y)^2 = x^2 + 2xy + y^2 \end{align}
(8)
\begin{align} \quad (x+y)^3 = x^3 + 3x^2 y + 3xy^2 + y^3 \end{align}
The following theorem, known as the Binomial Theorem, states this result in general.
Theorem 1 (The Binomial Theorem): For all $x, y \in \mathbb{R}$ and $n \in \{ 0, 1, 2, ... \}$ we have that $\displaystyle{(x + y)^n = \binom{n}{0} x^ny^0 + \binom{n}{1} x^{n-1}y^1 + ... + \binom{n}{n-1} x^1y^{n-1} + \binom{n}{n} x^0y^n = \sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^k}$.
• Proof: Let $n \in \{ 0, 1, 2, ... \}$ and consider the simplified expanded product of:
(9)
\begin{align} \quad (x + y)^n = \underbrace{(x + y)(x + y) ... (x + y)}_{n \: \mathrm{factors}} \end{align}
• Each term of the expansion is obtained by choosing either the $x$ or the $y$ from every factor $(x + y)$ and multiplying the choices together. There are $\binom{n}{0} = 1$ ways to get the term $x^ny^0$ (choose the $x$ in every factor), and there are $\binom{n}{1} = n$ ways to get the term $x^{n-1}y^1$ (choose the $x$ in any $n - 1$ of the $n$ factors and the $y$ in the remaining one). In general, the term $x^{n-k}y^k$ arises in $\binom{n}{k}$ ways, one for each choice of the $k$ factors that contribute a $y$. Continuing in this fashion, we obtain all terms of the expansion of $(x + y)^n$ and get:
(10)
\begin{align} \quad (x + y)^n = \binom{n}{0} x^ny^0 + \binom{n}{1} x^{n-1}y^1 + ... + \binom{n}{n-1} x^1y^{n-1} + \binom{n}{n} x^0y^n = \sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^k \quad \blacksquare \end{align}
The argument made in the proof of Theorem 1 is a bit subtle but nevertheless important. For a simpler example, consider the following expansion:
(11)
\begin{align} \quad (x + y)^3 = (x+y)(x+y)(x+y) \end{align}
Let $A = \{ x, y \}$ be the set of terms in each factor $(x + y)$.
Each term of the expansion comes from choosing one element of $A$ from each of the $3$ factors and multiplying the choices together; that is, each term corresponds to a sequence $(a, b, c)$ where $a, b, c \in A$ and $abc$ is a term in the expansion of $(x + y)^3$. We choose to multiply either $0$, $1$, $2$, or $3$ of the $y$'s, and correspondingly, either $3$, $2$, $1$, or $0$ of the $x$'s. Thus the term $x^{3-k}y^k$ appears precisely $\binom{3}{k}$ times, and the full expansion of $(x + y)^3$ is:
(12)
\begin{align} \quad (x + y)^3 = x^3 + 3x^2y + 3xy^2 + y^3 = \sum_{k=0}^{3} \binom{3}{k} x^{3-k}y^k \end{align}
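The coefficient pattern above is easy to verify mechanically. The following sketch (the helper name `expansion_coefficients` is introduced here for illustration) expands $(x + y)^n$ by repeated multiplication and compares the result against the binomial coefficients:

```python
from math import comb

def expansion_coefficients(n):
    # coeffs[k] holds the coefficient of x^(n-k) y^k in (x + y)^n
    coeffs = [1]
    for _ in range(n):
        # Multiplying by (x + y) shifts-and-adds the coefficient list,
        # which is exactly the Pascal's Triangle recurrence.
        coeffs = [a + b for a, b in zip([0] + coeffs, coeffs + [0])]
    return coeffs

for n in range(8):
    assert expansion_coefficients(n) == [comb(n, k) for k in range(n + 1)]
print(expansion_coefficients(3))  # [1, 3, 3, 1]
```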
Before we end this page, recall that the binomial coefficients and Pascal's triangle are very much related. We have already noted and proven that the sequence of consecutive binomial coefficients in any row of Pascal's triangle is unimodal and symmetric. We will now further show that for any row $n \geq 1$, the sum of all binomial coefficients $\binom{n}{k}$ where $k$ is odd is equal to the sum of all binomial coefficients where $k$ is even.
Corollary 1: If $n$ is a positive integer and $k$ ranges over the integers $0 \leq k \leq n$ then $\displaystyle{\sum_{k \: \mathrm{is \: odd}} \binom{n}{k} = \sum_{k \: \mathrm{is \: even}} \binom{n}{k}}$.
• Proof: Consider the binomial $(x + y)^n$. By Theorem 1 we have that the expansion of $(x + y)^n$ is given by:
(13)
\begin{align} \quad (x + y)^n = \sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^k \end{align}
• Setting $x = 1$ and $y = -1$ gives us:
(14)
\begin{align} \quad (1 - 1)^n = \sum_{k=0}^{n} \binom{n}{k} 1^{n-k} (-1)^k \\ \quad 0 = \sum_{k=0}^{n} \binom{n}{k} (-1)^k \\ \quad 0 = \sum_{k \: \mathrm{is \: even}} \binom{n}{k} - \sum_{k \: \mathrm{is \: odd}} \binom{n}{k} \\ \quad \sum_{k \: \mathrm{is \: odd}} \binom{n}{k} = \sum_{k \: \mathrm{is \: even}} \binom{n}{k} \quad \blacksquare \end{align}
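Corollary 1 is also easy to check numerically (a sketch using Python's `math.comb`; the observation that both sums equal $2^{n-1}$ is not stated above but follows from Theorem 1 with $x = y = 1$ together with the corollary):

```python
from math import comb

# For each row n >= 1 of Pascal's triangle, compare the sum of the
# odd-index binomial coefficients with the sum of the even-index ones.
for n in range(1, 13):
    odd = sum(comb(n, k) for k in range(n + 1) if k % 2 == 1)
    even = sum(comb(n, k) for k in range(n + 1) if k % 2 == 0)
    assert odd == even == 2 ** (n - 1)
print("odd and even sums agree for n = 1..12")
```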
https://electronics.stackexchange.com/questions/138114/how-to-know-how-many-bits-a-microcontroller-has
How to know how many bits a microcontroller has?
How can I tell what kind of architecture a microcontroller has? For example, where in the datasheet of the PIC 16F873 can I read whether its architecture is 8, 16, or 32 bits?
Next question: what does "Up to 256 x 8 bytes of EEPROM Data Memory" mean? Are there 256 slots of 8 bytes each? Or (256 x 8) bits, i.e. 256 slots of 8 bits each? I don't think it's explained really well :S. Thanks for the answers :)
• Read the datasheet. That's how you learn the internal workings of a particular microcontroller (and any other IC for that matter). – Nick Alexeev Nov 11 '14 at 20:59
• It's perfectly well explained, you just need to understand the technical jargon. – markt Nov 11 '14 at 21:19
• "256 x 8 bytes" ? sure you don't mean 8 bits? – Chris Stratton Nov 11 '14 at 22:48
With PICs in particular, sometimes you find what you need in the Family Reference Manual as opposed to the datasheet. In this case, the mid-range family reference manual has a chapter on architecture that you might find of use, and it's written in a different tone than the chip's datasheet.
For PICs, datasheets tend to have a ton of chip-specific info, but the Family Reference Manuals tend to show a more global picture.
• Sorry, -1 because I don't like answers that say "there is PDF with 688 pages and you can find answer there". – Kamil Nov 11 '14 at 22:49
• @Kamil People keep pointing to the data sheet, and that's not necessarily the best resource for this type of question. In learning PICs it took me a while to realize that the family refs existed, and that sometimes they're valuable, so I thought I'd provide the shortcut for a new user. There was no need to repeat the info in the answers already provided. I was a bit divided on commenting vs answering, but I thought there was enough archival value to make this an answer – Scott Seidman Nov 11 '14 at 22:57
• OK, you convinced me. Please edit something so I can rollback my downvote. – Kamil Nov 11 '14 at 23:16
• +1 for pointing people at the right documentation - especially one many might not realize exists! – Grant Nov 12 '14 at 0:57
The clues are at the top of page 1, and then further down in the architecture section:
256 x 8 usually means 256 bytes each byte is 8 bits wide.
• The 'up to' part is caused by the fact that the datasheet describes multiple devices: the 16F873 that has 128 bytes EPROM, and the 16F876 that has 256 bytes. – Wouter van Ooijen Nov 11 '14 at 21:10
The datasheet states categorically the core CPU size on the title page:
28/40-Pin 8-Bit CMOS FLASH
The EEPROM is arranged in 8-bit bytes, and there are 256 of them.
It's a Harvard architecture 8-bit processor with a 14-bit instruction width.
That's why a 4096-word program memory is listed as 7.2K (7168) bytes.
The 16F873 is an 8-bit microcontroller. Its internal registers are 8 bits wide. The data sheet should say so, but maybe it assumes you already know that -- the person writing it may have thought it was too obvious to mention. The internal RAM and EEPROM are also 8 bits wide.
As for the EEPROM, 256 x 8 in this context means 256 8-bit locations in the memory. 256 slots of 8 bits each. It's not well explained, by the sound of it.
There are many Microchip microcontroller families.
8-bit: PIC10, PIC12, PIC16, PIC18
16-bit: PIC24, dsPIC30, dsPIC33
32-bit: PIC32
If you go to Microchip website and enter microcontrollers (link) you will see these families on the left side.
Regarding "256 x 8 bytes": I agree that this may be misleading.
256 x 8 means: 256 directly addressable 8-bit registers of memory (256*8 bits).
address data
0x0000 xxxxxxxx (byte 0)
0x0001 xxxxxxxx (byte 1)
0x0002 xxxxxxxx (byte 2)
0x0003 xxxxxxxx (byte 3)
If it were
256 x 16 - that would be 512 bytes, but with only 256 directly addressable locations (you have to store 16 bits at once), and you could not directly access each byte.
address data
0x0000 xxxxxxxx xxxxxxxx (byte 0, byte 1)
0x0001 xxxxxxxx xxxxxxxx (byte 2, byte 3)
On a 32-bit x86 processor, most general-purpose registers are 32 bits wide; to get at an arbitrary byte of a register you typically read the whole 32-bit value and then mask or shift out the byte you want.
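To make the shift-and-mask idea concrete, here is a small sketch (the helper name `byte_at` is made up for illustration):

```python
def byte_at(word, index):
    # index 0 is the least significant byte of a 32-bit word
    return (word >> (8 * index)) & 0xFF

# Read a whole 32-bit value, then pick out each byte in turn:
word = 0xDEADBEEF
print([hex(byte_at(word, i)) for i in range(4)])  # ['0xef', '0xbe', '0xad', '0xde']
```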
https://i.publiclab.org/tag/comments/with:ayana14_ac?page=1
# with:ayana14_ac
Author Comment Last activity Moderation
warren: "Noting as a reference, this interesting video on making maps of conductivity data! @mimiss" (almost 4 years ago)
https://people.maths.bris.ac.uk/~matyd/GroupNames/193/C9xDic6.html
## G = C9×Dic6, order 216 = 2^3·3^3
### Direct product of C9 and Dic6
Series: Derived Chief Lower central Upper central
Derived series C1 — C6 — C9×Dic6
Chief series C1 — C3 — C32 — C3×C6 — C3×C18 — C9×Dic3 — C9×Dic6
Lower central C3 — C6 — C9×Dic6
Upper central C1 — C18 — C36
Generators and relations for C9×Dic6
G = < a,b,c | a^9=b^12=1, c^2=b^6, ab=ba, ac=ca, cbc^-1=b^-1 >
Smallest permutation representation of C9×Dic6
On 72 points
Generators in S72
(1 44 25 9 40 33 5 48 29)(2 45 26 10 41 34 6 37 30)(3 46 27 11 42 35 7 38 31)(4 47 28 12 43 36 8 39 32)(13 59 61 17 51 65 21 55 69)(14 60 62 18 52 66 22 56 70)(15 49 63 19 53 67 23 57 71)(16 50 64 20 54 68 24 58 72)
(1 2 3 4 5 6 7 8 9 10 11 12)(13 14 15 16 17 18 19 20 21 22 23 24)(25 26 27 28 29 30 31 32 33 34 35 36)(37 38 39 40 41 42 43 44 45 46 47 48)(49 50 51 52 53 54 55 56 57 58 59 60)(61 62 63 64 65 66 67 68 69 70 71 72)
(1 67 7 61)(2 66 8 72)(3 65 9 71)(4 64 10 70)(5 63 11 69)(6 62 12 68)(13 48 19 42)(14 47 20 41)(15 46 21 40)(16 45 22 39)(17 44 23 38)(18 43 24 37)(25 57 31 51)(26 56 32 50)(27 55 33 49)(28 54 34 60)(29 53 35 59)(30 52 36 58)
G:=sub<Sym(72)| (1,44,25,9,40,33,5,48,29)(2,45,26,10,41,34,6,37,30)(3,46,27,11,42,35,7,38,31)(4,47,28,12,43,36,8,39,32)(13,59,61,17,51,65,21,55,69)(14,60,62,18,52,66,22,56,70)(15,49,63,19,53,67,23,57,71)(16,50,64,20,54,68,24,58,72), (1,2,3,4,5,6,7,8,9,10,11,12)(13,14,15,16,17,18,19,20,21,22,23,24)(25,26,27,28,29,30,31,32,33,34,35,36)(37,38,39,40,41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70,71,72), (1,67,7,61)(2,66,8,72)(3,65,9,71)(4,64,10,70)(5,63,11,69)(6,62,12,68)(13,48,19,42)(14,47,20,41)(15,46,21,40)(16,45,22,39)(17,44,23,38)(18,43,24,37)(25,57,31,51)(26,56,32,50)(27,55,33,49)(28,54,34,60)(29,53,35,59)(30,52,36,58)>;
G:=Group( (1,44,25,9,40,33,5,48,29)(2,45,26,10,41,34,6,37,30)(3,46,27,11,42,35,7,38,31)(4,47,28,12,43,36,8,39,32)(13,59,61,17,51,65,21,55,69)(14,60,62,18,52,66,22,56,70)(15,49,63,19,53,67,23,57,71)(16,50,64,20,54,68,24,58,72), (1,2,3,4,5,6,7,8,9,10,11,12)(13,14,15,16,17,18,19,20,21,22,23,24)(25,26,27,28,29,30,31,32,33,34,35,36)(37,38,39,40,41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70,71,72), (1,67,7,61)(2,66,8,72)(3,65,9,71)(4,64,10,70)(5,63,11,69)(6,62,12,68)(13,48,19,42)(14,47,20,41)(15,46,21,40)(16,45,22,39)(17,44,23,38)(18,43,24,37)(25,57,31,51)(26,56,32,50)(27,55,33,49)(28,54,34,60)(29,53,35,59)(30,52,36,58) );
G=PermutationGroup([(1,44,25,9,40,33,5,48,29),(2,45,26,10,41,34,6,37,30),(3,46,27,11,42,35,7,38,31),(4,47,28,12,43,36,8,39,32),(13,59,61,17,51,65,21,55,69),(14,60,62,18,52,66,22,56,70),(15,49,63,19,53,67,23,57,71),(16,50,64,20,54,68,24,58,72)], [(1,2,3,4,5,6,7,8,9,10,11,12),(13,14,15,16,17,18,19,20,21,22,23,24),(25,26,27,28,29,30,31,32,33,34,35,36),(37,38,39,40,41,42,43,44,45,46,47,48),(49,50,51,52,53,54,55,56,57,58,59,60),(61,62,63,64,65,66,67,68,69,70,71,72)], [(1,67,7,61),(2,66,8,72),(3,65,9,71),(4,64,10,70),(5,63,11,69),(6,62,12,68),(13,48,19,42),(14,47,20,41),(15,46,21,40),(16,45,22,39),(17,44,23,38),(18,43,24,37),(25,57,31,51),(26,56,32,50),(27,55,33,49),(28,54,34,60),(29,53,35,59),(30,52,36,58)])
C9×Dic6 is a maximal subgroup of
Dic6⋊D9 C18.D12 C12.D18 C9⋊Dic12 D18.D6 Dic6⋊5D9 Dic18⋊S3 S3×Q8×C9
81 conjugacy classes

class: 1 2 3A 3B 3C 3D 3E 4A 4B 4C 6A 6B 6C 6D 6E 9A ··· 9F 9G ··· 9L 12A ··· 12H 12I 12J 12K 12L 18A ··· 18F 18G ··· 18L 36A ··· 36R 36S ··· 36AD
order: 1 2 3 3 3 3 3 4 4 4 6 6 6 6 6 9 ··· 9 9 ··· 9 12 ··· 12 12 12 12 12 18 ··· 18 18 ··· 18 36 ··· 36 36 ··· 36
size: 1 1 1 1 2 2 2 2 6 6 1 1 2 2 2 1 ··· 1 2 ··· 2 2 ··· 2 6 6 6 6 1 ··· 1 2 ··· 2 2 ··· 2 6 ··· 6
81 irreducible representations

dim: 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2
type: + + + + - + -
image: C1 C2 C2 C3 C6 C6 C9 C18 C18 S3 Q8 D6 C3×S3 Dic6 C3×Q8 S3×C6 S3×C9 Q8×C9 C3×Dic6 S3×C18 C9×Dic6
kernel: C9×Dic6 C9×Dic3 C3×C36 C3×Dic6 C3×Dic3 C3×C12 Dic6 Dic3 C12 C36 C3×C9 C18 C12 C9 C3^2 C6 C4 C3 C3 C2 C1
# reps: 1 2 1 2 4 2 6 12 6 1 1 1 2 2 2 2 6 6 4 6 12
Matrix representation of C9×Dic6 in GL2(𝔽37), generated by

[16  0]
[ 0 16]
,
[14  0]
[19  8]
,
[27  9]
[34 10]
G:=sub<GL(2,GF(37))| [16,0,0,16],[14,19,0,8],[27,34,9,10] >;
C9×Dic6 in GAP, Magma, Sage, TeX
C_9\times {\rm Dic}_6
% in TeX
G:=Group("C9xDic6");
// GroupNames label
G:=SmallGroup(216,44);
// by ID
G=gap.SmallGroup(216,44);
# by ID
G:=PCGroup([6,-2,-2,-3,-2,-3,-3,72,169,79,122,5189]);
// Polycyclic
G:=Group<a,b,c|a^9=b^12=1,c^2=b^6,a*b=b*a,a*c=c*a,c*b*c^-1=b^-1>;
// generators/relations
https://www.linstitute.net/archives/74639
# 2010 AIME I Problems and Answer Explanations
## Problem 1
Maya lists all the positive divisors of $2010^2$. She then randomly selects two distinct divisors from this list. Let $p$ be the probability that exactly one of the selected divisors is a perfect square. The probability $p$ can be expressed in the form $\frac {m}{n}$, where $m$ and $n$ are relatively prime positive integers. Find $m + n$.
## Problem 2
Find the remainder when $9 \times 99 \times 999 \times \cdots \times \underbrace{99\cdots9}_{\text{999 9's}}$ is divided by $1000$.
## Problem 3
Suppose that $y = \frac34x$ and $x^y = y^x$. The quantity $x + y$ can be expressed as a rational number $\frac {r}{s}$, where $r$ and $s$ are relatively prime positive integers. Find $r + s$.
## Problem 4
Jackie and Phil have two fair coins and a third coin that comes up heads with probability $\frac47$. Jackie flips the three coins, and then Phil flips the three coins. Let $\frac {m}{n}$ be the probability that Jackie gets the same number of heads as Phil, where $m$ and $n$ are relatively prime positive integers. Find $m + n$.
## Problem 5
Positive integers $a$, $b$, $c$, and $d$ satisfy $a > b > c > d$, $a + b + c + d = 2010$, and $a^2 - b^2 + c^2 - d^2 = 2010$. Find the number of possible values of $a$.
## Problem 6
Let $P(x)$ be a quadratic polynomial with real coefficients satisfying $x^2 - 2x + 2 \le P(x) \le 2x^2 - 4x + 3$ for all real numbers $x$, and suppose $P(11) = 181$. Find $P(16)$.
## Problem 7
Define an ordered triple $(A, B, C)$ of sets to be $\textit{minimally intersecting}$ if $|A \cap B| = |B \cap C| = |C \cap A| = 1$ and $A \cap B \cap C = \emptyset$. For example, $(\{1,2\},\{2,3\},\{1,3,4\})$ is a minimally intersecting triple. Let $N$ be the number of minimally intersecting ordered triples of sets for which each set is a subset of $\{1,2,3,4,5,6,7\}$. Find the remainder when $N$ is divided by $1000$.
Note: $|S|$ represents the number of elements in the set $S$.
## Problem 8
For a real number $a$, let $\lfloor a \rfloor$ denote the greatest integer less than or equal to $a$. Let $\mathcal{R}$ denote the region in the coordinate plane consisting of points $(x,y)$ such that $\lfloor x \rfloor ^2 + \lfloor y \rfloor ^2 = 25$. The region $\mathcal{R}$ is completely contained in a disk of radius $r$ (a disk is the union of a circle and its interior). The minimum value of $r$ can be written as $\frac {\sqrt {m}}{n}$, where $m$ and $n$ are integers and $m$ is not divisible by the square of any prime. Find $m + n$.
## Problem 9
Let $(a,b,c)$ be a real solution of the system of equations $x^3 - xyz = 2$, $y^3 - xyz = 6$, $z^3 - xyz = 20$. The greatest possible value of $a^3 + b^3 + c^3$ can be written in the form $\frac {m}{n}$, where $m$ and $n$ are relatively prime positive integers. Find $m + n$.
## Problem 10
Let $N$ be the number of ways to write $2010$ in the form $2010 = a_3 \cdot 10^3 + a_2 \cdot 10^2 + a_1 \cdot 10 + a_0$, where the $a_i$'s are integers, and $0 \le a_i \le 99$. An example of such a representation is $1\cdot 10^3 + 3\cdot 10^2 + 67\cdot 10^1 + 40\cdot 10^0$. Find $N$.
## Problem 11
Let $\mathcal{R}$ be the region consisting of the set of points in the coordinate plane that satisfy both $|8 - x| + y \le 10$ and $3y - x \ge 15$. When $\mathcal{R}$ is revolved around the line whose equation is $3y - x = 15$, the volume of the resulting solid is $\frac {m\pi}{n\sqrt {p}}$, where $m$, $n$, and $p$ are positive integers, $m$ and $n$ are relatively prime, and $p$ is not divisible by the square of any prime. Find $m + n + p$.
## Problem 12
Let $m \ge 3$ be an integer and let $S = \{3,4,5,\ldots,m\}$. Find the smallest value of $m$ such that for every partition of $S$ into two subsets, at least one of the subsets contains integers $a$, $b$, and $c$ (not necessarily distinct) such that $ab = c$.
Note: a partition of $S$ is a pair of sets $A$, $B$ such that $A \cap B = \emptyset$, $A \cup B = S$.
## Problem 13
Rectangle $ABCD$ and a semicircle with diameter $AB$ are coplanar and have nonoverlapping interiors. Let $\mathcal{R}$ denote the region enclosed by the semicircle and the rectangle. Line $\ell$ meets the semicircle, segment $AB$, and segment $CD$ at distinct points $N$, $U$, and $T$, respectively. Line $\ell$ divides region $\mathcal{R}$ into two regions with areas in the ratio $1: 2$. Suppose that $AU = 84$, $AN = 126$, and $UB = 168$. Then $DA$ can be represented as $m\sqrt {n}$, where $m$ and $n$ are positive integers and $n$ is not divisible by the square of any prime. Find $m + n$.
## Problem 14
For each positive integer $n$, let $f(n) = \sum_{k = 1}^{100} \lfloor \log_{10} (kn) \rfloor$. Find the largest value of $n$ for which $f(n) \le 300$.
Note: $\lfloor x \rfloor$ is the greatest integer less than or equal to $x$.
## Problem 15
In $\triangle{ABC}$ with $AB = 12$, $BC = 13$, and $AC = 15$, let $M$ be a point on $\overline{AC}$ such that the incircles of $\triangle{ABM}$ and $\triangle{BCM}$ have equal radii. Let $p$ and $q$ be positive relatively prime integers such that $\frac {AM}{CM} = \frac {p}{q}$. Find $p + q$.
https://cms.math.ca/cjm/msc/47?page=4
Canadian Mathematical Society www.cms.math.ca
Search results
Search: MSC category 47 ( Operator theory )
Results 76 - 86 of 86
76. CJM 2000 (vol 52 pp. 197)
Sublinearity and Other Spectral Conditions on a Semigroup Subadditivity, sublinearity, submultiplicativity, and other conditions are considered for spectra of pairs of operators on a Hilbert space. Sublinearity, for example, is a weakening of the well-known property~$L$ and means $\sigma(A+\lambda B) \subseteq \sigma(A) + \lambda \sigma(B)$ for all scalars $\lambda$. The effect of these conditions is examined on commutativity, reducibility, and triangularizability of multiplicative semigroups of operators. A sample result is that sublinearity of spectra implies simultaneous triangularizability for a semigroup of compact operators. Categories:47A15, 47D03, 15A30, 20A20, 47A10, 47B10
77. CJM 1999 (vol 51 pp. 850)
Muhly, Paul S.; Solel, Baruch
Tensor Algebras, Induced Representations, and the Wold Decomposition Our objective in this sequel to \cite{MSp96a} is to develop extensions, to representations of tensor algebras over $C^{*}$-correspondences, of two fundamental facts about isometries on Hilbert space: the Wold decomposition theorem and Beurling's theorem, and to apply these to the analysis of the invariant subspace structure of certain subalgebras of Cuntz-Krieger algebras. Keywords: tensor algebras, correspondence, induced representation, Wold decomposition, Beurling's theorem. Categories:46L05, 46L40, 46L89, 47D15, 47D25, 46M10, 46M99, 47A20, 47A45, 47B35
78. CJM 1999 (vol 51 pp. 566)
Ferenczi, V.
Quotient Hereditarily Indecomposable Banach Spaces A Banach space $X$ is said to be {\it quotient hereditarily indecomposable\/} if no infinite dimensional quotient of a subspace of $X$ is decomposable. We provide an example of a quotient hereditarily indecomposable space, namely the space $X_{\GM}$ constructed by W.~T.~Gowers and B.~Maurey in \cite{GM}. Then we provide an example of a reflexive hereditarily indecomposable space $\hat{X}$ whose dual is not hereditarily indecomposable; so $\hat{X}$ is not quotient hereditarily indecomposable. We also show that every operator on $\hat{X}^*$ is a strictly singular perturbation of a homothetic map. Categories:46B20, 47B99
79. CJM 1998 (vol 50 pp. 673)
Carey, Alan; Phillips, John
Fredholm modules and spectral flow An {\it odd unbounded\/} (respectively, $p$-{\it summable}) {\it Fredholm module\/} for a unital Banach $\ast$-algebra, $A$, is a pair $(H,D)$ where $A$ is represented on the Hilbert space, $H$, and $D$ is an unbounded self-adjoint operator on $H$ satisfying: \item{(1)} $(1+D^2)^{-1}$ is compact (respectively, $\Trace\bigl((1+D^2)^{-(p/2)}\bigr) <\infty$), and \item{(2)} $\{a\in A\mid [D,a]$ is bounded$\}$ is a dense $\ast$-subalgebra of $A$. If $u$ is a unitary in the dense $\ast$-subalgebra mentioned in (2) then $$uDu^\ast=D+u[D,u^{\ast}]=D+B$$ where $B$ is a bounded self-adjoint operator. The path $$D_t^u:=(1-t) D+tuDu^\ast=D+tB$$ is a "continuous" path of unbounded self-adjoint "Fredholm" operators. More precisely, we show that $$F_t^u:=D_t^u \bigl(1+(D_t^u)^2\bigr)^{-{1\over 2}}$$ is a norm-continuous path of (bounded) self-adjoint Fredholm operators. The {\it spectral flow\/} of this path $\{F_t^u\}$ (or $\{ D_t^u\}$) is roughly speaking the net number of eigenvalues that pass through $0$ in the positive direction as $t$ runs from $0$ to $1$. This integer, $$\sf(\{D_t^u\}):=\sf(\{F_t^u\}),$$ recovers the pairing of the $K$-homology class $[D]$ with the $K$-theory class $[u]$. We use I.~M.~Singer's idea (as did E.~Getzler in the $\theta$-summable case) to consider the operator $B$ as a parameter in the Banach manifold, $B_{\sa}(H)$, so that spectral flow can be exhibited as the integral of a closed $1$-form on this manifold. Now, for $B$ in our manifold, any $X\in T_B(B_{\sa}(H))$ is given by an $X$ in $B_{\sa}(H)$ as the derivative at $B$ along the curve $t\mapsto B+tX$ in the manifold. Then we show that for $m$ a sufficiently large half-integer: $$\alpha (X)={1\over {\tilde {C}_m}}\Tr \Bigl(X\bigl(1+(D+B)^2\bigr)^{-m}\Bigr)$$ is a closed $1$-form.
For any piecewise smooth path $\{D_t=D+B_t\}$ with $D_0$ and $D_1$ unitarily equivalent we show that $$\sf(\{D_t\})={1\over {\tilde {C}_m}} \int_0^1\Tr \Bigl({d\over {dt}} (D_t)(1+D_t^2)^{-m}\Bigr)\,dt$$ the integral of the $1$-form $\alpha$. If $D_0$ and $D_1$ are not unitarily equivalent, we must add a pair of correction terms to the right-hand side. We also prove a bounded finitely summable version of the form: $$\sf(\{F_t\})={1\over C_n}\int_0^1\Tr\Bigl({d\over dt}(F_t)(1-F_t^2)^n\Bigr)\,dt$$ for $n\geq{{p-1}\over 2}$ an integer. The unbounded case is proved by reducing to the bounded case via the map $D\mapsto F=D(1+D^2 )^{-{1\over 2}}$. We prove simultaneously a type II version of our results. Categories:46L80, 19K33, 47A30, 47A55
80. CJM 1998 (vol 50 pp. 538)
Froese, Richard
Upper bounds for the resonance counting function of Schrödinger operators in odd dimensions The purpose of this note is to provide a simple proof of the sharp polynomial upper bound for the resonance counting function of a Schr\"odinger operator in odd dimensions. At the same time we generalize the result to the class of super-exponentially decreasing potentials. Categories:47A10, 47A40, 81U05
81. CJM 1998 (vol 50 pp. 658)
Symesak, Frédéric
Hankel operators on pseudoconvex domains of finite type in ${\Bbb C}^2$ The aim of this paper is to study small Hankel operators $h$ on the Hardy space or on weighted Bergman spaces, where $\Omega$ is a finite type domain in ${\Bbb C}^2$ or a strictly pseudoconvex domain in ${\Bbb C}^n$. We give a sufficient condition on the symbol $f$ so that $h$ belongs to the Schatten class ${\cal S}_p$, $1\le p<+\infty$. Categories:32A37, 47B35, 47B10, 46E22
82. CJM 1998 (vol 50 pp. 290)
Davidson, Kenneth R.; Popescu, Gelu
Noncommutative disc algebras for semigroups We study noncommutative disc algebras associated to the free product of discrete subsemigroups of $\bbR^+$. These algebras are associated to generalized Cuntz algebras, which are shown to be simple and purely infinite. The nonself-adjoint subalgebras determine the semigroup up to isomorphism. Moreover, we establish a dilation theorem for contractive representations of these semigroups which yields a variant of the von Neumann inequality. These methods are applied to establish a solution to the truncated moment problem in this context. Category:47D25
83. CJM 1998 (vol 50 pp. 99)
Izuchi, Keiji; Matsugu, Yasuo
$A_\phi$-invariant subspaces on the torus Generalizing the notion of invariant subspaces on the 2-dimensional torus $T^2$, we study the structure of $A_\phi$-invariant subspaces of $L^2(T^2)$. A complete description is given of $A_\phi$-invariant subspaces that satisfy conditions similar to those studied by Mandrekar, Nakazi, and Takahashi. Categories:32A35, 47A15
84. CJM 1997 (vol 49 pp. 1117)
Hu, Zhiguo
The von Neumann algebra $\VN(G)$ of a locally compact group and quotients of its subspaces Let $\VN(G)$ be the von Neumann algebra of a locally compact group $G$. We denote by $\mu$ the initial ordinal with $\abs{\mu}$ equal to the smallest cardinality of an open basis at the unit of $G$ and $X= \{\alpha; \alpha < \mu \}$. We show that if $G$ is nondiscrete then there exist an isometric $*$-isomorphism $\kappa$ of $l^{\infty}(X)$ into $\VN(G)$ and a positive linear mapping $\pi$ of $\VN(G)$ onto $l^{\infty}(X)$ such that $\pi\circ\kappa = \id_{l^{\infty}(X)}$ and $\kappa$ and $\pi$ have certain additional properties. Let $\UCB (\hat{G})$ be the $C^{*}$-algebra generated by operators in $\VN(G)$ with compact support and $F(\hat{G})$ the space of all $T \in \VN(G)$ such that all topologically invariant means on $\VN(G)$ attain the same value at $T$. The construction of the mapping $\pi$ leads to the conclusion that the quotient space $\UCB (\hat{G})/F(\hat{G})\cap \UCB(\hat{G})$ has $l^{\infty}(X)$ as a continuous linear image if $G$ is nondiscrete. When $G$ is further assumed to be non-metrizable, it is shown that $\UCB(\hat{G})/F (\hat{G})\cap \UCB(\hat{G})$ contains a linear isomorphic copy of $l^{\infty}(X)$. Similar results are also obtained for other quotient spaces. Categories:22D25, 43A22, 43A30, 22D15, 43A07, 47D35
85. CJM 1997 (vol 49 pp. 736)
Fendler, Gero
Dilations of one parameter Semigroups of positive Contractions on $L^{\lowercase {p}}$ spaces It is proved in this note that a strongly continuous semigroup of (sub)positive contractions acting on an $L^p$-space, for $1 < p < \infty$, … Categories:47D03, 22D12, 43A22
86. CJM 1997 (vol 49 pp. 100)
Lance, T. L.; Stessin, M. I.
Multiplication Invariant Subspaces of Hardy Spaces This paper studies closed subspaces $L$ of the Hardy spaces $H^p$ which are $g$-invariant ({\it i.e.}, $g\cdot L \subseteq L$) where $g$ is inner, $g\neq 1$. If $p=2$, the Wold decomposition theorem implies that there is a countable ``$g$-basis'' $f_1, f_2,\ldots$ of $L$ in the sense that $L$ is a direct sum of spaces $f_j\cdot H^2[g]$ where $H^2[g] = \{f\circ g \mid f\in H^2\}$. The basis elements $f_j$ satisfy the additional property that $\int_T |f_j|^2 g^k=0$, $k=1,2,\ldots\,.$ We call such functions $g$-$2$-inner. It also follows that any $f\in H^2$ can be factored $f=h_{f,2}\cdot (F_2\circ g)$ where $h_{f,2}$ is $g$-$2$-inner and $F_2$ is outer, generalizing the classical Riesz factorization. Using $L^p$ estimates for the canonical decomposition of $H^2$, we find a factorization $f=h_{f,p} \cdot (F_p \circ g)$ for $f\in H^p$. If $p\geq 1$ and $g$ is a finite Blaschke product we obtain, for any $g$-invariant $L\subseteq H^p$, a finite $g$-basis of $g$-$p$-inner functions. Categories:30H05, 46E15, 47B38
© Canadian Mathematical Society, 2018 : https://cms.math.ca/
http://mathematica.stackexchange.com/questions/4832/creating-word-histograms-from-lists-of-strings
# Creating word histograms from lists of strings
Consider the following:
StringData1={"Dog","Dog","Dog","Dog","Duck","Duck","Duck","Rabbit","House"};
StringData2={"A dog is in the house.","A dog is outside.",
"Why is everybody talking about a dog?", "Dog", "The duck died in the house.",
"The duck is actually a rabbit.", "Duck", "And the rabbit in fact is a dog",
"I have a dog in my house."};
Using the following function, it is quite easy to plot a histogram for StringData1. However, the options remain fixed.
StringHistogram[list_] :=
Module[{counter, strings, numbers},
counter = {First@#, Length@#} & /@ (GatherBy@list);
strings = First /@ counter;
numbers = Last /@ counter;
BarChart[numbers, ChartLabels -> strings]
]
In the case of StringData2 I was wondering whether it would be possible to chop up each sentence into single words (e.g. {Sentence1,...} -> {{Word1, Word2,...},...}) and then to run StringHistogram on Flatten@StringData2New, where StringData2New is of the form {{Word1, Word2,...},...}.
I have a slightly different strategy for splitting the string than rcollyer, because you can stick the pattern into StringSplit directly instead of needing to do a StringReplace first to prepare the string for splitting. For instance:
In[141]:= StringSplit["A dog is in the house.", Except[WordCharacter]]
Out[141]= {"A", "dog", "is", "in", "the", "house"}
Also, you can make StringHistogram simpler by using the built-in Tally function, which you've reimplemented (pretty darn well, I must say):
StringHistogram[list_, opts : OptionsPattern[]] :=
With[{tally = Tally@list},
BarChart[tally[[All, 2]],
BarOrigin -> Left,
ChartLabels -> tally[[All, 1]],
FilterRules[{opts}, Options[BarChart]]
]]
I use the BarOrigin option to make the bars come from the side, which makes everything much easier to read when you have many strings, and I pass options to BarChart in order to make tweaking things easier:
StringHistogram[
StringSplit[StringData2, Except[WordCharacter]] // Flatten,
ImageSize -> 500, BaseStyle -> "Label"]
Except[WordCharacter] is a very good alternative, and I'm ashamed to admit I never think of Except, +1. – rcollyer Apr 27 '12 at 21:51
You're looking for StringSplit. In your case, I would do this
StringSplit[{"A dog is in the house.","A dog is outside."}]
which returns
(* {{"A", "dog", "is", "in", "the", "house."},
{"A", "dog", "is", "outside."}} *)
Unfortunately, that leaves in the punctuation. To rid yourself of the punctuation, I would use StringReplace first, as follows
StringSplit @ StringReplace[{"A dog is in the house.", "A dog is outside."},
 RegularExpression["[[:punct:]]"] -> ""]
which gives
(* {{"A", "dog", "is", "in", "the", "house"},
{"A", "dog", "is", "outside"}} *)
Note the use of the character class in the RegularExpression; it makes specifying all punctuation easier.
Regarding the first part of your question, I would rewrite StringHistogram like this:
StringHistogram[list_] := BarChart[#2, ChartLabels -> #1] & @@ Transpose[Tally@list]
If you want to supply additional options, you can extend this as:
StringHistogram[list_, opts : OptionsPattern[]] :=
BarChart[#2, ChartLabels -> #1, opts] & @@ Transpose[Tally@list]
I think Pillsy's suggestion to split the strings is very clean. Here's another equally clean approach using StringCases:
StringCases[StringData2, x : WordCharacter .. :> x]
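As an aside not in the original thread, the same split-flatten-tally pipeline translates to a few lines of Python, with `re.split` on non-word characters playing the role of Except[WordCharacter] and `collections.Counter` the role of Tally:

```python
import re
from collections import Counter

string_data2 = [
    "A dog is in the house.", "A dog is outside.",
    "Why is everybody talking about a dog?", "Dog",
]

# Split each sentence on runs of non-word characters, flatten,
# and drop the empty strings left by trailing punctuation.
words = [w for s in string_data2 for w in re.split(r"\W+", s) if w]
tally = Counter(words)
# tally["dog"] == 3 and tally["Dog"] == 1 (the tally is case-sensitive,
# just like the Mathematica version).
```

From here, `matplotlib`'s `barh` would stand in for BarChart with BarOrigin -> Left.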
https://kr.mathworks.com/help/predmaint/ref/extractspectralfeatures.html
# Extract Spectral Features
Interactively extract spectral fault band metrics in the Live Editor
## Description
To add the Extract Spectral Features task to a live script in the MATLAB Editor:
• On the Live Editor tab, select Task > Extract Spectral Features.
• In a code block in your script, type a relevant keyword, such as `fault bands` or `metrics`. Select `Extract Spectral Features` from the suggested command completions.
## Parameters
Select Power Spectrum Data
Select a vector of frequencies from the MATLAB workspace that correspond to your power spectrum data.
Select a vector containing the power spectrum magnitudes from the MATLAB workspace.
Configure Components
Choose between adding a bearing, gear mesh, or custom component. You can name your component and then click the button. You can set the physical characteristics of these components using their corresponding parameters. The Extract Spectral Features Live Editor task plots fault frequency bands at the characteristic frequencies of the components.
Bearing Component Parameters
You can toggle this option to enable or disable the component from being included in the spectral metrics computation. Disabling the component also removes its fault bands from the plot. Use the button to permanently remove a component.
Specify the number of rolling elements in the bearing as a positive integer.
Specify the pitch diameter of the bearing as a positive scalar. The pitch diameter is the diameter of the circle that the center of the ball or roller travels during the bearing rotation.
Specify the rotational speed of the shaft or inner race of the bearing as a positive scalar. The rotational speed is the fundamental frequency around which the Extract Spectral Features live task generates the fault frequency bands. The units must be consistent with the unit of the frequency vector.
Specify the contact angle in degrees between a plane perpendicular to the ball or roller axis and the line joining the two raceways.
Specify the diameter of the ball or roller in the bearing as a positive scalar.
Specify the harmonics of the fundamental frequency to be included in the plot and in the spectral metrics computation.
Specify the sidebands around the fundamental frequency and its harmonics to be included in the plot and in the spectral metrics computation.
Specify the units of the fault band frequencies as either `'frequency'` or `'order'`. Select:
• `'frequency'` if you have the fault bands in the same units as the Rotational speed.
• `'order'` if you have the fault bands as a number of rotations relative to the inner race rotation Rotational speed.
Specify the width of the frequency bands centered at the nominal fault frequencies as a positive scalar. Uncheck the Auto option to specify the width value manually.
Gear Mesh Component Parameters
You can toggle this option to enable or disable the component from being included in the spectral metrics computation. Disabling the component also removes its fault bands from the plot. Use the button to permanently remove a component.
Specify the number of teeth on the input gear as a positive integer.
Specify the number of teeth on the output gear as a positive integer.
Specify the rotational speed of the input gear as a positive scalar. The rotational speed is the fundamental frequency around which the Extract Spectral Features live task generates the fault frequency bands. The units must be consistent with the unit of the frequency vector.
Specify the harmonics of the fundamental frequency to be included in the plot and in the spectral metrics computation.
Specify the sidebands around the fundamental frequency and its harmonics to be included in the plot and in the spectral metrics computation.
Specify the units of the fault band frequencies as either `'frequency'` or `'order'`. Select:
• `'frequency'` if you have the fault bands in the same units as the Rotational speed.
• `'order'` if you have the fault bands as a number of rotations relative to the Rotational speed.
Specify the width of the frequency bands centered at the nominal fault frequencies as a positive scalar. Uncheck the Auto option to specify the width value manually.
Custom Component Parameters
You can toggle this option to enable or disable the component from being included in the spectral metrics computation. Disabling the component also removes its fault bands from the plot. Use the button to permanently remove a component.
Specify the fundamental frequency of interest as a positive scalar. The Extract Spectral Features live task constructs the fault frequency bands around the fundamental frequency. For instance, to construct fault bands for a faulty induction motor, the mains frequency of 60 Hz is the fundamental frequency of interest. Similarly, to generate fault bands for a faulty gear train, the input shaft frequency is the fundamental frequency.
Specify the harmonics of the fundamental frequency to be included in the plot and in the spectral metrics computation.
Specify the sidebands around the fundamental frequency and its harmonics to be included in the plot and in the spectral metrics computation.
Specify the type of separation between successive sidebands as either `'additive'` or `'multiplicative'`. Select:
• `'additive'`, to set the separation between successive sidebands to a value of 0.1 times the `F1` frequency value, where `F1` is the distance of the first sideband from the fundamental frequency.
• `'multiplicative'`, to set the separation between successive sidebands proportional to both the harmonic order and the sideband value.
Specify the separation value between successive sidebands as a positive scalar. Uncheck the Auto option to specify the separation value manually.
Specify the width of the frequency bands centered at the nominal fault frequencies as a positive scalar. Uncheck the Auto option to specify the width value manually.
Toggle this option to specify whether negative nominal fault frequencies are folded about the frequency origin. If you turn Folding `on`, then the Extract Spectral Features live task folds the negative nominal fault frequencies about the frequency origin by taking their absolute values such that the folded fault bands always fall in the positive frequency intervals. The folded fault bands are computed as [|F| - W/2, |F| + W/2], where W is the Band width and F is the Frequency.
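The folding rule can be sketched in a few lines of Python (an illustration only; the band form |F| ± W/2 is inferred from the prose, not taken from MATLAB source):

```python
def fold_bands(centers, width):
    """Fold nominal fault frequencies about the origin: each (possibly
    negative) center F becomes the band [|F| - W/2, |F| + W/2]."""
    return [(abs(F) - width / 2, abs(F) + width / 2) for F in centers]
```

For example, a nominal band centered at -60 with width 10 folds onto the positive-frequency band (55.0, 65.0), coinciding with the band at +60.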
Display Results
Toggle this option to enable or disable the display of spectral metrics. When the option is checked, the Extract Spectral Features live task displays the metrics as a `1`-by-`N` table, where `N = 3*size((F+S),1)+1`. That is, it displays three metrics per frequency range and the total band power over all frequency ranges.
The live task returns the following spectral metrics:
• `Peak Amplitude` — Peak amplitude value for each specified frequency range.
• `Peak Frequency` — Peak frequency value for each specified frequency range.
• `Band Power` — Average power of each specified frequency range. For more information on band power, see `bandpower`.
• `Total Band Power` — Sum of individual band powers for the set of specified frequency ranges.
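Concretely, the per-band metrics are simple reductions over the spectrum samples that fall inside each band. A hedged numpy sketch (illustrative only; MATLAB's `bandpower` integrates the PSD properly, approximated here by the trapezoidal rule):

```python
import numpy as np

def band_metrics(f, pxx, bands):
    """For each (f_lo, f_hi) band return (peak amplitude, peak frequency,
    band power); also return the total band power over all bands."""
    rows = []
    for f_lo, f_hi in bands:
        m = (f >= f_lo) & (f <= f_hi)
        fb, pb = f[m], pxx[m]
        i = int(np.argmax(pb))
        # Trapezoidal integral of the spectrum magnitude over the band.
        power = float(np.sum((pb[1:] + pb[:-1]) / 2 * np.diff(fb)))
        rows.append((pb[i], fb[i], power))
    total = sum(r[2] for r in rows)
    return rows, total
```

For a spectrum with a single spike at 50 Hz, a (45, 55) band reports that spike as both the peak amplitude and the peak frequency.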
## Version History
Introduced in R2021a
https://www.physicsforums.com/threads/how-do-you-explkain-the-difference-of-labels-and-dynamical-variable.509472/
# How do you explain the difference between labels and dynamical variables?
1. Jun 24, 2011
### kof9595995
I think I understand this, but when I tried to explain it to a friend I couldn't phrase it in a nice and clear way. How would you guys explain it to others?
2. Jun 24, 2011
### Pengwuino
I don't understand what you even mean. You need more context I think.
3. Jun 24, 2011
### kof9595995
Like in fluid theory, Eulerian coordinates are (sort of?) labels and Lagrangian coordinates are dynamical variables?
Or in old quantum mechanics, when treating the EM interaction with a particle, we write the field as $A(\hat {x})$, where $\hat {x}$ denotes the position observable of the particle (so it's a dynamical variable), whereas in QFT we write the field as $\hat{A}(x)$ where x is just a spatial coordinate (a label), as said in http://press.princeton.edu/chapters/s7573.pdf : In quantum field theory, x is a label, not a dynamical variable. The x appearing in $\phi(t, x)$ corresponds to the label $a$ in $q_a(t)$ in quantum mechanics. ...
4. Jun 24, 2011
### kof9595995
Emm, it's a bit hard to even express my question clearly, but I'll give it a try: say position x; in order to specify a particle's position, which is a dynamical variable (DM for short), we must have a coordinate system, which has all possible values of x, and this coordinate system works like labels. So is it correct to say that in order to define position as a DM it's inevitable to define position labels first? Or is a DM a "subconcept" of coordinate labels?...
I realize my question is ambiguous, plz bear with me.....
https://www.physicsforums.com/threads/cylindrically-symmetric-current-distribution-magnetic-field-in-all-space.162195/
# Homework Help: Cylindrically symmetric current distribution: Magnetic field in all space
1. Mar 23, 2007
### JamesTheBond
1. The problem statement, all variables and given/known data
a. An infinite cylindrically symmetric current distribution has the form
$$\vec J (r, \phi, z) = J_0 r^2/R^2 \ \ \ \hat \phi$$ for $$R<r<2R$$. Outside the interval, the current is 0. What is the field everywhere in space?
b. An infinite cylindrically symmetric current distribution has the form
$$\vec J (r, \phi, z) = J_0 r^2/R^2 \ \ \ \hat z$$ for $$R<r<2R$$. Outside the interval, the current is 0. What is the field everywhere in space?
2. Relevant equations
Ampere's Law
Biot Savart?
3. The attempt at a solution
r<R B = 0
r>2R B= 0
???
I don't know where to start.
Last edited: Mar 23, 2007
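A hedged starting sketch, not an actual reply from the thread: for part (b), where $\vec J$ points along $\hat z$, symmetry forces $\vec B = B_\phi(r)\,\hat \phi$, so Ampère's law on a circle of radius $r$ gives, for $R<r<2R$,
$$I_{\text{enc}}(r)=\int_R^r \frac{J_0 r'^2}{R^2}\,2\pi r'\,dr'=\frac{\pi J_0}{2R^2}\left(r^4-R^4\right), \qquad B_\phi(r)=\frac{\mu_0 I_{\text{enc}}(r)}{2\pi r}=\frac{\mu_0 J_0}{4R^2}\,\frac{r^4-R^4}{r}.$$
For $r>2R$ the full current is enclosed, so $B_\phi(r)=15\mu_0 J_0 R^2/(4r)$ rather than $0$; only $r<R$ gives $B=0$. Part (a), with $\vec J$ along $\hat\phi$, is solenoid-like: there a rectangular Amperian loop shows the field is axial and vanishes outside $r=2R$.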
https://wikimili.com/en/Finite-state_machine
# Finite-state machine
Last updated
Classes of automata
A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to some external inputs; the change from one state to another is called a transition. An FSM is defined by a list of its states, its initial state, and the conditions for each transition. Finite state machines are of two types – deterministic finite state machines and non-deterministic finite state machines. [1] A deterministic finite-state machine can be constructed equivalent to any non-deterministic one.
In computer science, and more specifically in computability theory and computational complexity theory, a model of computation is a model which describes how a set of outputs are computed given a set of inputs. This model describes how units of computations, memories, and communications are organized. The computational complexity of an algorithm can be measured given a model of computation. Using a model allows studying the performance of algorithms independently of the variations that are specific to particular implementations and specific technology.
An abstract machine, also called an abstract computer, is a theoretical model of a computer hardware or software system used in automata theory. Abstraction of computing processes is used in both the computer science and computer engineering disciplines and usually assumes a discrete time paradigm.
In information technology and computer science, a program is described as stateful if it is designed to remember preceding events or user interactions; the remembered information is called the state of the system.
The behavior of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented. Simple examples are vending machines, which dispense products when the proper combination of coins is deposited, elevators, whose sequence of stops is determined by the floors requested by riders, traffic lights, which change sequence when cars are waiting, and combination locks, which require the input of combination numbers in the proper order.
A vending machine is an automated machine that provides items such as snacks, beverages, cigarettes and lottery tickets to consumers after money, a credit card, or specially designed card is inserted into the machine. The first modern vending machines were developed in England in the early 1880s and dispensed postcards. Vending machines exist in many countries, and in more recent times, specialized vending machines that provide less common products compared to traditional vending machine items have been created.
An elevator or lift is a type of vertical transportation that moves people or goods between floors of a building, vessel, or other structure. Elevators are typically powered by electric motors that either drive traction cables and counterweight systems like a hoist, or pump hydraulic fluid to raise a cylindrical piston like a jack.
Traffic lights, also known as traffic signals, traffic lamps, traffic semaphore, signal lights, stop lights, robots, and traffic control signals, are signalling devices positioned at road intersections, pedestrian crossings, and other locations to control flows of traffic.
The finite state machine has less computational power than some other models of computation such as the Turing machine. [2] The computational power distinction means there are computational tasks that a Turing machine can do but an FSM cannot. This is because an FSM's memory is limited by the number of states it has. FSMs are studied in the more general field of automata theory.
A Turing machine is a mathematical model of computation that defines an abstract machine, which manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, given any computer algorithm, a Turing machine capable of simulating that algorithm's logic can be constructed.
In computing, memory refers to the computer hardware integrated circuits that store information for immediate use in a computer; it is synonymous with the term "primary storage". Computer memory operates at a high speed, for example random-access memory (RAM), as a distinction from storage that provides slow-to-access information but offers higher capacities. If needed, contents of the computer memory can be transferred to secondary storage; a very common way of doing this is through a memory management technique called "virtual memory". An archaic synonym for memory is store.
Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science and discrete mathematics. The word automata comes from the Greek word αὐτόματα, which means "self-acting".
## Example: coin-operated turnstile
An example of a simple mechanism that can be modeled by a state machine is a turnstile. [3] [4] A turnstile, used to control access to subways and amusement park rides, is a gate with three rotating arms at waist height, one across the entryway. Initially the arms are locked, blocking the entry, preventing patrons from passing through. Depositing a coin or token in a slot on the turnstile unlocks the arms, allowing a single customer to push through. After the customer passes through, the arms are locked again until another coin is inserted.
A turnstile, also called a baffle gate or turnstyle, is a form of gate which allows one person to pass at a time. It can also be made so as to enforce one-way traffic of people, and in addition, it can restrict passage only to people who insert a coin, a ticket, a pass, or similar. Thus a turnstile can be used in the case of paid access, for example to access public transport, a pay toilet, or to restrict access to authorized people, for example in the lobby of an office building.
In numismatics, token coins or trade tokens are coin-like objects used instead of coins. The field of token coins is part of exonumia and token coins are token money. Tokens have a denomination either shown or implied by size, color or shape. "Tokens" are often made of cheaper metals: copper, pewter, aluminium, brass and tin were commonly used, while bakelite, leather, porcelain, and other less durable materials are also known.
Considered as a state machine, the turnstile has two possible states: Locked and Unlocked. [3] There are two possible inputs that affect its state: putting a coin in the slot (coin) and pushing the arm (push). In the locked state, pushing on the arm has no effect; no matter how many times the input push is given, it stays in the locked state. Putting a coin in – that is, giving the machine a coin input – shifts the state from Locked to Unlocked. In the unlocked state, putting additional coins in has no effect; that is, giving additional coin inputs does not change the state. However, a customer pushing through the arms, giving a push input, shifts the state back to Locked.
The turnstile state machine can be represented by a state transition table, showing for each possible state, the transitions between them (based upon the inputs given to the machine) and the outputs resulting from each input:
In automata theory and sequential logic, a state transition table is a table showing what state a finite semiautomaton or finite state machine will move to, based on the current state and other inputs. A state table is essentially a truth table in which some of the inputs are the current state, and the outputs include the next state, along with other outputs.
| Current State | Input | Next State | Output |
|---------------|-------|------------|--------|
| Locked | coin | Unlocked | Unlocks the turnstile so that the customer can push through. |
| Locked | push | Locked | None |
| Unlocked | coin | Unlocked | None |
| Unlocked | push | Locked | When the customer has pushed through, locks the turnstile. |
The turnstile state machine can also be represented by a directed graph called a state diagram (above). Each state is represented by a node (circle). Edges (arrows) show the transitions from one state to another. Each arrow is labeled with the input that triggers that transition. An input that doesn't cause a change of state (such as a coin input in the Unlocked state) is represented by a circular arrow returning to the original state. The arrow into the Locked node from the black dot indicates it is the initial state.
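The turnstile transition table is small enough to encode directly; a minimal sketch in Python (illustrative, not part of the article), with the table as a dictionary mapping (state, input) pairs to next states:

```python
# Transition table for the coin-operated turnstile:
# (current state, input) -> next state.
TRANSITIONS = {
    ("Locked", "coin"): "Unlocked",
    ("Locked", "push"): "Locked",
    ("Unlocked", "coin"): "Unlocked",
    ("Unlocked", "push"): "Locked",
}

def run(events, state="Locked"):
    """Start in the initial state and apply each input in turn."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state
```

For example, `run(["push", "coin", "push"])` ends in `Locked`: pushing while locked does nothing, the coin unlocks the arms, and the push through re-locks them.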
## Concepts and terminology
A state is a description of the status of a system that is waiting to execute a transition. A transition is a set of actions to be executed when a condition is fulfilled or when an event is received. For example, when using an audio system to listen to the radio (the system is in the "radio" state), receiving a "next" stimulus results in moving to the next station. When the system is in the "CD" state, the "next" stimulus results in moving to the next track. Identical stimuli trigger different actions depending on the current state.
In some finite-state machine representations, it is also possible to associate actions with a state:
• an entry action: performed when entering the state, and
• an exit action: performed when exiting the state.
## Representations
### State/Event table
Several state transition table types are used. The most common representation is shown below: the combination of current state (e.g. B) and input (e.g. Y) shows the next state (e.g. C). The complete action information is not directly described in the table and can only be added using footnotes. An FSM definition including the full action information is possible using state tables (see also virtual finite-state machine).
State transition table

| Input \ Current state | State A | State B | State C |
| --- | --- | --- | --- |
| Input X | … | … | … |
| Input Y | … | State C | … |
| Input Z | … | … | … |
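Such a state/event table reads directly as a nested lookup, with inputs as rows and current states as columns. A sketch in Python (only the B × Y → C cell is filled in the table above; the remaining cells are left as `None` here, which is an assumption, not part of the article):

```python
# State/event table from the text: TABLE[input][current_state] -> next_state.
TABLE = {
    "Input X": {"State A": None, "State B": None, "State C": None},
    "Input Y": {"State A": None, "State B": "State C", "State C": None},
    "Input Z": {"State A": None, "State B": None, "State C": None},
}

def next_state(table, state, event):
    """Return the successor state, or None where the table leaves it open."""
    return table[event][state]
```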
### UML state machines
The Unified Modeling Language has a notation for describing state machines. UML state machines overcome the limitations of traditional finite state machines while retaining their main benefits. UML state machines introduce the new concepts of hierarchically nested states and orthogonal regions, while extending the notion of actions. UML state machines have the characteristics of both Mealy machines and Moore machines. They support actions that depend on both the state of the system and the triggering event, as in Mealy machines, as well as entry and exit actions, which are associated with states rather than transitions, as in Moore machines.
### SDL state machines
The Specification and Description Language is a standard from ITU that includes graphical symbols to describe actions in the transition:
• send an event
• start a timer
• cancel a timer
• start another concurrent state machine
• decision
SDL embeds basic data types called "Abstract Data Types", an action language, and an execution semantic in order to make the finite state machine executable.
### Other state diagrams
There are many variant notations for representing an FSM, such as the one in figure 3.
## Usage
In addition to their use in modeling reactive systems presented here, finite state machines are significant in many different areas, including electrical engineering, linguistics, computer science, philosophy, biology, mathematics, and logic. Finite state machines are a class of automata studied in automata theory and the theory of computation. In computer science, finite state machines are widely used in modeling of application behavior, design of hardware digital systems, software engineering, compilers, network protocols, and the study of computation and languages.
## Classification
Finite state machines can be subdivided into transducers, acceptors, classifiers and sequencers. [5]
### Acceptors (recognizers)
Acceptors (also called recognizers and sequence detectors), produce binary output, indicating whether or not the received input is accepted. Each state of an FSM is either "accepting" or "not accepting". Once all input has been received, if the current state is an accepting state, the input is accepted; otherwise it is rejected. As a rule, input is a sequence of symbols (characters); actions are not used. The example in figure 4 shows a finite state machine that accepts the string "nice". In this FSM, the only accepting state is state 7.
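An acceptor for the string "nice" can be sketched in a few lines. The article's figure is not reproduced here, so the state numbering below is an illustrative assumption (0 is the start state, 4 the single accepting state, and -1 a dead state that traps any rejected input):

```python
def accepts_nice(s):
    """Acceptor FSM that accepts exactly the string "nice"."""
    state = 0
    for ch in s:
        if state != -1 and state < 4 and ch == "nice"[state]:
            state += 1       # consume the next expected character
        else:
            state = -1       # any deviation traps the machine in a dead state
    return state == 4
```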
A (possibly infinite) set of symbol sequences, called a formal language, is a regular language if there is some finite state machine that accepts exactly that set. For example, the set of binary strings with an even number of zeroes is a regular language (cf. Fig. 5), while the set of all strings whose length is a prime number is not. [6] :18,71
A machine can also be described as defining a language: the language containing every string accepted by the machine and none of the rejected ones. That language is "accepted" by the machine. By definition, the languages accepted by FSMs are the regular languages; a language is regular if there is some FSM that accepts it.
The problem of determining the language accepted by a given finite state acceptor is an instance of the algebraic path problem—itself a generalization of the shortest path problem to graphs with edges weighted by the elements of an (arbitrary) semiring. [7] [8]
The start state can also be an accepting state, in which case the automaton accepts the empty string.
An example of an accepting state appears in Fig.5: a deterministic finite automaton (DFA) that detects whether the binary input string contains an even number of 0s.
S1 (which is also the start state) indicates the state at which an even number of 0s has been input. S1 is therefore an accepting state. This machine will finish in an accept state if the binary string contains an even number of 0s (including any binary string containing no 0s). Examples of strings accepted by this DFA are ε (the empty string), 1, 11, 111, …, 00, 010, 1010, 10110, etc.
### Classifiers
A classifier is a generalization of a finite state machine that, similar to an acceptor, produces a single output on termination but has more than two terminal states.
### Transducers
Transducers generate output based on a given input and/or a state using actions. They are used for control applications and in the field of computational linguistics.
In control applications, two types are distinguished:
Moore machine
The FSM uses only entry actions, i.e., output depends only on the state. The advantage of the Moore model is a simplification of the behaviour. Consider an elevator door. The state machine recognizes two commands: "command_open" and "command_close", which trigger state changes. The entry action (E:) in state "Opening" starts a motor opening the door, the entry action in state "Closing" starts a motor in the other direction closing the door. States "Opened" and "Closed" stop the motor when fully opened or closed. They signal to the outside world (e.g., to other state machines) the situation: "door is open" or "door is closed".
Mealy machine
The FSM also uses input actions, i.e., output depends on input and state. The use of a Mealy FSM often leads to a reduction in the number of states. The example in figure 7 shows a Mealy FSM implementing the same behaviour as in the Moore example (the behaviour depends on the implemented FSM execution model and will work, e.g., for a virtual FSM but not for an event-driven FSM). There are two input actions (I:): "start motor to close the door if command_close arrives" and "start motor in the other direction to open the door if command_open arrives". The "opening" and "closing" intermediate states are not shown.
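The Moore/Mealy distinction can be sketched with the door example. The state names and commands come from the text; the sensor events and motor-action strings are illustrative assumptions. Note how the Mealy version attaches output to transitions and so needs fewer states:

```python
# Moore style: the output (entry action, E:) is a function of the state alone.
MOORE_NEXT = {
    ("Closed", "command_open"): "Opening",
    ("Opening", "opened_sensor"): "Opened",   # sensor events are assumed here
    ("Opened", "command_close"): "Closing",
    ("Closing", "closed_sensor"): "Closed",
}
MOORE_OUTPUT = {
    "Opening": "start motor opening",
    "Closing": "start motor closing",
    "Opened": "stop motor",
    "Closed": "stop motor",
}

# Mealy style: the output (input action, I:) depends on (state, input), so
# the intermediate Opening/Closing states can be dropped.
MEALY = {
    ("Closed", "command_open"): ("Opened", "start motor opening"),
    ("Opened", "command_close"): ("Closed", "start motor closing"),
}
```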
### Generators
Sequencers, or generators, are a subclass of acceptors and transducers that have a single-letter input alphabet. They produce only one sequence, which can be seen as an output sequence of acceptor or transducer outputs.
### Determinism
A further distinction is between deterministic (DFA) and non-deterministic (NFA, GNFA) automata. In a deterministic automaton, every state has exactly one transition for each possible input. In a non-deterministic automaton, an input can lead to one, more than one, or no transition for a given state. The powerset construction algorithm can transform any nondeterministic automaton into a (usually more complex) deterministic automaton with identical functionality.
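The powerset construction mentioned above can be sketched directly: the DFA's states are sets of NFA states. This is a minimal illustrative implementation, assuming `delta` maps (state, symbol) pairs to sets of states and that missing entries mean "no transition":

```python
def nfa_to_dfa(alphabet, delta, start, finals):
    """Subset (powerset) construction: build a deterministic automaton
    whose states are frozensets of NFA states."""
    start_set = frozenset([start])
    dfa_delta, seen, todo = {}, {start_set}, [start_set]
    while todo:
        current = todo.pop()
        for symbol in alphabet:
            # Union of all NFA moves from any state in the current set.
            nxt = frozenset(q for s in current for q in delta.get((s, symbol), ()))
            dfa_delta[(current, symbol)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    dfa_finals = {s for s in seen if s & finals}  # contains an accepting NFA state
    return seen, dfa_delta, start_set, dfa_finals

# Example (illustrative): NFA over {"0", "1"} accepting strings ending in "1".
nfa = {("A", "0"): {"A"}, ("A", "1"): {"A", "B"}}
dfa_states, dfa_d, dfa_start, dfa_finals = nfa_to_dfa({"0", "1"}, nfa, "A", {"B"})
```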
A finite state machine with only one state is called a "combinatorial FSM". It only allows actions upon transition into a state. This concept is useful in cases where a number of finite state machines are required to work together, and when it is convenient to consider a purely combinatorial part as a form of FSM to suit the design tools. [9]
## Alternative semantics
There are other sets of semantics available to represent state machines. For example, there are tools for modeling and designing logic for embedded controllers. [10] They combine hierarchical state machines (which usually have more than one current state), flow graphs, and truth tables into one language, resulting in a different formalism and set of semantics. [11] These charts, like Harel's original state machines, [12] support hierarchically nested states, orthogonal regions, state actions, and transition actions. [13]
## Mathematical model
In accordance with the general classification, the following formal definitions are found:
• A deterministic finite state machine or acceptor deterministic finite state machine is a quintuple $(\Sigma, S, s_0, \delta, F)$, where:
• $\Sigma$ is the input alphabet (a finite, non-empty set of symbols).
• $S$ is a finite, non-empty set of states.
• $s_0$ is an initial state, an element of $S$.
• $\delta$ is the state-transition function: $\delta : S \times \Sigma \rightarrow S$ (in a nondeterministic finite automaton it would be $\delta : S \times \Sigma \rightarrow \mathcal{P}(S)$, i.e., $\delta$ would return a set of states).
• $F$ is the set of final states, a (possibly empty) subset of $S$.
For both deterministic and non-deterministic FSMs, it is conventional to allow $\delta$ to be a partial function, i.e. $\delta(q, x)$ does not have to be defined for every combination of $q \in S$ and $x \in \Sigma$. If an FSM $M$ is in a state $q$, the next symbol is $x$ and $\delta(q, x)$ is not defined, then $M$ can announce an error (i.e. reject the input). This is useful in definitions of general state machines, but less useful when transforming the machine. Some algorithms in their default form may require total functions.
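The quintuple and the partial-function convention map directly onto code. A sketch in Python (the helper name and the example machine are illustrative; `S` is implicit in `delta` and kept only to mirror the quintuple):

```python
def run_dfa(sigma, S, s0, delta, F, word):
    """Run the acceptor quintuple (Sigma, S, s0, delta, F) on `word`.
    `delta` may be partial: an undefined (state, symbol) pair makes the
    machine reject, as described above."""
    state = s0
    for symbol in word:
        if symbol not in sigma or (state, symbol) not in delta:
            return False  # reject on symbols outside Sigma or missing transitions
        state = delta[(state, symbol)]
    return state in F

# Example: the even-number-of-0s DFA from the Acceptors section as a quintuple.
delta = {("S1", "0"): "S2", ("S1", "1"): "S1",
         ("S2", "0"): "S1", ("S2", "1"): "S2"}
```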
A finite state machine has the same computational power as a Turing machine that is restricted such that its head may only perform "read" operations, and always has to move from left to right. That is, each formal language accepted by a finite state machine is accepted by such a kind of restricted Turing machine, and vice versa. [14]
• A finite-state transducer is a sextuple $(\Sigma, \Gamma, S, s_0, \delta, \omega)$, where:
• $\Sigma$ is the input alphabet (a finite, non-empty set of symbols).
• $\Gamma$ is the output alphabet (a finite, non-empty set of symbols).
• $S$ is a finite, non-empty set of states.
• $s_0$ is the initial state, an element of $S$. In a nondeterministic finite automaton, $s_0$ is a set of initial states.
• $\delta$ is the state-transition function: $\delta : S \times \Sigma \rightarrow S$.
• $\omega$ is the output function.
If the output function is a function of a state and the input alphabet ($\omega : S \times \Sigma \rightarrow \Gamma$) that definition corresponds to the Mealy model, and can be modelled as a Mealy machine. If the output function depends only on a state ($\omega : S \rightarrow \Gamma$) that definition corresponds to the Moore model, and can be modelled as a Moore machine. A finite-state machine with no output function at all is known as a semiautomaton or transition system.
If we disregard the first output symbol of a Moore machine, $\omega(s_0)$, then it can be readily converted to an output-equivalent Mealy machine by setting the output function of every Mealy transition (i.e. labeling every edge) with the output symbol of the destination Moore state. The converse transformation is less straightforward, because a Mealy machine state may have different output labels on its incoming transitions (edges). Every such state needs to be split into multiple Moore machine states, one for every incident output symbol. [15]
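The Moore-to-Mealy direction described above is a one-line transformation over the transition table. A sketch in Python (function name and example outputs are illustrative assumptions):

```python
def moore_to_mealy(delta, moore_output):
    """Label each transition with the output of its *destination* state.
    The Moore machine's first output, for the initial state, is dropped."""
    return {(s, x): (t, moore_output[t]) for (s, x), t in delta.items()}

# Example: the even/odd-zeros machine with one Moore output per state.
moore_delta = {("S1", "0"): "S2", ("S1", "1"): "S1",
               ("S2", "0"): "S1", ("S2", "1"): "S2"}
mealy = moore_to_mealy(moore_delta, {"S1": "even", "S2": "odd"})
```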
## Optimization
Optimizing an FSM means finding a machine with the minimum number of states that performs the same function. The fastest known algorithm doing this is the Hopcroft minimization algorithm. [16] [17] Other techniques include using an implication table, or the Moore reduction procedure. Additionally, acyclic FSAs can be minimized in linear time. [18]
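Hopcroft's algorithm itself is intricate, but the underlying idea of the simpler techniques mentioned above can be sketched: repeatedly split blocks of states whose members disagree on which block each input symbol leads to. This is a naive partition-refinement sketch in the spirit of the Moore reduction procedure, not Hopcroft's O(n log n) algorithm:

```python
def minimize(states, alphabet, delta, finals):
    """Partition refinement: start from {accepting, non-accepting} and split
    any block whose members move to different blocks on some symbol."""
    partition = [b for b in (frozenset(finals), frozenset(states - finals)) if b]
    while True:
        block_of = {s: i for i, b in enumerate(partition) for s in b}
        refined = []
        for block in partition:
            groups = {}
            for s in block:
                # Signature: which block each symbol leads to from state s.
                key = tuple(block_of[delta[(s, a)]] for a in sorted(alphabet))
                groups.setdefault(key, set()).add(s)
            refined.extend(frozenset(g) for g in groups.values())
        if len(refined) == len(partition):  # refinement only ever splits blocks
            return refined
        partition = refined

# Example: B and C accept and loop identically, so they merge into one block.
blocks = minimize({"A", "B", "C"}, {"0"},
                  {("A", "0"): "B", ("B", "0"): "C", ("C", "0"): "C"},
                  {"B", "C"})
```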
## Implementation
### Hardware applications
In a digital circuit, an FSM may be built using a programmable logic device, a programmable logic controller, logic gates and flip flops or relays. More specifically, a hardware implementation requires a register to store state variables, a block of combinational logic that determines the state transition, and a second block of combinational logic that determines the output of an FSM. One of the classic hardware implementations is the Richards controller.
In a Medvedev machine, the output is directly connected to the state flip-flops minimizing the time delay between flip-flops and output. [19] [20]
Through state encoding for low power, state machines may be optimized to minimize power consumption.
### Software applications
The following concepts are commonly used to build software applications with finite state machines:
• automata-based programming
• event-driven finite-state machines
• virtual finite-state machines
• the state design pattern
### Finite state machines and compilers
Finite automata are often used in the frontend of programming language compilers. Such a frontend may comprise several finite state machines that implement a lexical analyzer and a parser. Starting from a sequence of characters, the lexical analyzer builds a sequence of language tokens (such as reserved words, literals, and identifiers) from which the parser builds a syntax tree. The lexical analyzer and the parser handle the regular and context-free parts of the programming language's grammar. [21]
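The lexical-analysis step described above is itself an FSM: each character drives a transition, and token boundaries are emitted on the way out of a state. A toy sketch in Python (state and token names here are illustrative assumptions, not part of any real compiler):

```python
def lex(text):
    """FSM-style lexer with states "start", "ident", "number"."""
    tokens, state, buf = [], "start", ""

    def flush():
        nonlocal buf, state
        if state != "start":
            tokens.append((state, buf))
        buf, state = "", "start"

    for ch in text + " ":        # trailing space flushes the last token
        if ch.isalpha():
            if state == "number":
                flush()          # a letter ends a number token
            state = "ident"
            buf += ch
        elif ch.isdigit():
            if state == "start":
                state = "number"
            buf += ch            # digits are also legal inside identifiers
        else:
            flush()              # any other character is a separator
    return tokens
```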
## References
1. "Finite State Machines - Brilliant Math & Science Wiki". brilliant.org. Retrieved 14 April 2018.
2. Belzer, Jack; Holzman, Albert George; Kent, Allen (1975). Encyclopedia of Computer Science and Technology. 25. USA: CRC Press. p. 73. ISBN 978-0-8247-2275-3.
3. Koshy, Thomas (2004). Discrete Mathematics With Applications. Academic Press. p. 762. ISBN 978-0-12-421180-3.
4. Wright, David R. (2005). "Finite State Machines" (PDF). CSC215 Class Notes. David R. Wright website, N. Carolina State Univ. Archived from the original (PDF) on March 27, 2014. Retrieved July 14, 2012.
5. Keller, Robert M. (2001). "Classifiers, Acceptors, Transducers, and Sequencers" (PDF). Computer Science: Abstraction to Implementation (PDF). Harvey Mudd College. p. 480.
6. John E. Hopcroft and Jeffrey D. Ullman (1979). Introduction to Automata Theory, Languages, and Computation. Reading/MA: Addison-Wesley. ISBN 978-0-201-02988-8.
7. Pouly, Marc; Kohlas, Jürg (2011). Generic Inference: A Unifying Theory for Automated Reasoning. John Wiley & Sons. Chapter 6. Valuation Algebras for Path Problems, p. 223 in particular. ISBN 978-1-118-01086-0.
8. Jacek Jonczy (Jun 2008). "Algebraic path problems" (PDF). Archived from the original (PDF) on 21 August 2014. Retrieved 20 August 2014., p. 34
9. Brutscheck, M., Berger, S., Franke, M., Schwarzbacher, A., Becker, S.: Structural Division Procedure for Efficient IC Analysis. IET Irish Signals and Systems Conference, (ISSC 2008), pp.18-23. Galway, Ireland, 18–19 June 2008.
10. "Tiwari, A. (2002). Formal Semantics and Analysis Methods for Simulink Stateflow Models" (PDF). sri.com. Retrieved 14 April 2018.
11. Hamon, G. (2005). A Denotational Semantics for Stateflow. International Conference on Embedded Software. Jersey City, NJ: ACM. pp. 164–172.
12. Black, Paul E (12 May 2008). "Finite State Machine". Dictionary of Algorithms and Data Structures. U.S. National Institute of Standards and Technology. Archived from the original on 13 October 2018. Retrieved 2 November 2016.
13. Anderson, James Andrew; Head, Thomas J. (2006). Automata theory with modern applications. Cambridge University Press. pp. 105–108. ISBN 978-0-521-84887-9.
14. Hopcroft, John E. (1971). An n log n algorithm for minimizing states in a finite automaton (PDF) (Technical Report). CS-TR-71-190. Stanford Univ.
15. Almeida, Marco; Moreira, Nelma; Reis, Rogerio (2007). On the performance of automata minimization algorithms (PDF) (Technical Report). DCC-2007-03. Porto Univ. Archived from the original (PDF) on 17 January 2009. Retrieved 25 June 2008.
16. Revuz, D. (1992). "Minimization of Acyclic automata in Linear Time". Theoretical Computer Science. 92: 181–189. doi:10.1016/0304-3975(92)90142-3.
17. Kaeslin, Hubert (2008). "Mealy, Moore, Medvedev-type and combinatorial output bits". Digital Integrated Circuit Design: From VLSI Architectures to CMOS Fabrication. Cambridge University Press. p. 787. ISBN 978-0-521-88267-5.
18. Slides, Synchronous Finite State Machines; Design and Behaviour, University of Applied Sciences Hamburg, p.18
19. Aho, Alfred V.; Sethi, Ravi; Ullman, Jeffrey D. (1986). Compilers: Principles, Techniques, and Tools (1st ed.). Addison-Wesley. ISBN 978-0-201-10088-4.
### Finite state machines (automata theory) in theoretical computer science
• Arbib, Michael A. (1969). Theories of Abstract Automata (1st ed.). Englewood Cliffs, N.J.: Prentice-Hall, Inc. ISBN 978-0-13-913368-8.
• Bobrow, Leonard S.; Arbib, Michael A. (1974). Discrete Mathematics: Applied Algebra for Computer and Information Science (1st ed.). Philadelphia: W. B. Saunders Company, Inc. ISBN 978-0-7216-1768-8.
• Booth, Taylor L. (1967). Sequential Machines and Automata Theory (1st ed.). New York: John Wiley and Sons, Inc. Library of Congress Card Catalog Number 67-25924.
• Boolos, George; Jeffrey, Richard (1999) [1989]. Computability and Logic (3rd ed.). Cambridge, England: Cambridge University Press. ISBN 978-0-521-20402-6.
• Brookshear, J. Glenn (1989). Theory of Computation: Formal Languages, Automata, and Complexity. Redwood City, California: Benjamin/Cummings Publish Company, Inc. ISBN 978-0-8053-0143-4.
• Davis, Martin; Sigal, Ron; Weyuker, Elaine J. (1994). Computability, Complexity, and Languages and Logic: Fundamentals of Theoretical Computer Science (2nd ed.). San Diego: Academic Press, Harcourt, Brace & Company. ISBN 978-0-12-206382-4.
• Hopcroft, John; Ullman, Jeffrey (1979). Introduction to Automata Theory, Languages, and Computation (1st ed.). Reading Mass: Addison-Wesley. ISBN 978-0-201-02988-8.
• Hopcroft, John E.; Motwani, Rajeev; Ullman, Jeffrey D. (2001). Introduction to Automata Theory, Languages, and Computation (2nd ed.). Reading Mass: Addison-Wesley. ISBN 978-0-201-44124-6.
• Hopkin, David; Moss, Barbara (1976). Automata. New York: Elsevier North-Holland. ISBN 978-0-444-00249-5.
• Kozen, Dexter C. (1997). Automata and Computability (1st ed.). New York: Springer-Verlag. ISBN 978-0-387-94907-9.
• Lewis, Harry R.; Papadimitriou, Christos H. (1998). Elements of the Theory of Computation (2nd ed.). Upper Saddle River, New Jersey: Prentice-Hall. ISBN 978-0-13-262478-7.
• Linz, Peter (2006). Formal Languages and Automata (4th ed.). Sudbury, MA: Jones and Bartlett. ISBN 978-0-7637-3798-6.
• Minsky, Marvin (1967). Computation: Finite and Infinite Machines (1st ed.). New Jersey: Prentice-Hall.
• Papadimitriou, Christos (1993). Computational Complexity (1st ed.). Addison Wesley. ISBN 978-0-201-53082-7.
• Pippenger, Nicholas (1997). Theories of Computability (1st ed.). Cambridge, England: Cambridge University Press. ISBN 978-0-521-55380-3.
• Rodger, Susan; Finley, Thomas (2006). JFLAP: An Interactive Formal Languages and Automata Package (1st ed.). Sudbury, MA: Jones and Bartlett. ISBN 978-0-7637-3834-1.
• Sipser, Michael (2006). Introduction to the Theory of Computation (2nd ed.). Boston Mass: Thomson Course Technology. ISBN 978-0-534-95097-2.
• Wood, Derick (1987). Theory of Computation (1st ed.). New York: Harper & Row, Publishers, Inc. ISBN 978-0-06-047208-5.
### Machine learning using finite-state algorithms
• Mitchell, Tom M. (1997). Machine Learning (1st ed.). New York: WCB/McGraw-Hill Corporation. ISBN 978-0-07-042807-2.
### Hardware engineering: state minimization and synthesis of sequential circuits
• Booth, Taylor L. (1967). Sequential Machines and Automata Theory (1st ed.). New York: John Wiley and Sons, Inc. Library of Congress Card Catalog Number 67-25924.
• Booth, Taylor L. (1971). Digital Networks and Computer Systems (1st ed.). New York: John Wiley and Sons, Inc. ISBN 978-0-471-08840-0.
• McCluskey, E. J. (1965). Introduction to the Theory of Switching Circuits (1st ed.). New York: McGraw-Hill Book Company, Inc. Library of Congress Card Catalog Number 65-17394.
• Hill, Fredrick J.; Peterson, Gerald R. (1965). Introduction to the Theory of Switching Circuits (1st ed.). New York: McGraw-Hill Book Company. Library of Congress Card Catalog Number 65-17394.
### Finite Markov chain processes
"We may think of a Markov chain as a process that moves successively through a set of states s1, s2, …, sr. … if it is in state si it moves on to the next stop to state sj with probability pij. These probabilities can be exhibited in the form of a transition matrix" (Kemeny (1959), p. 384)
Finite Markov-chain processes are also known as subshifts of finite type.
• Booth, Taylor L. (1967). Sequential Machines and Automata Theory (1st ed.). New York: John Wiley and Sons, Inc. Library of Congress Card Catalog Number 67-25924.
• Kemeny, John G.; Mirkil, Hazleton; Snell, J. Laurie; Thompson, Gerald L. (1959). Finite Mathematical Structures (1st ed.). Englewood Cliffs, N.J.: Prentice-Hall, Inc. Library of Congress Card Catalog Number 59-12841. Chapter 6 "Finite Markov Chains".
We consider (near-)critical percolation on the square lattice. Let $\mathcal{M}_{n}$ be the size of the largest open cluster contained in the box $[-n,n]^2$, and let $\pi(n)$ be the probability that there is an open path from $O$ to the boundary of the box. It is well-known that for all $0< a < b$ the probability that $\mathcal{M}_{n}$ is smaller than $a n^2 \pi(n)$ and the probability that $\mathcal{M}_{n}$ is larger than $b n^2 \pi(n)$ are bounded away from $0$ as $n \rightarrow \infty$. It is a natural question, which arises for instance in the study of so-called frozen-percolation processes, if a similar result holds for the probability that $\mathcal{M}_{n}$ is {\em between} $a n^2 \pi(n)$ and $b n^2 \pi(n)$. By a suitable partition of the box, and a careful construction involving the building blocks, we show that the answer to this question is affirmative. The `sublinearity' of $1/\pi(n)$ appears to be essential for the argument.
Additional Metadata
Keywords Critical percolation, Cluster size
THEME Logistics (theme 3), Energy (theme 4)
Publisher ims
Persistent URL dx.doi.org/10.1214/ECP.v17-2263
Journal Electronic Communications in Probability
Citation
van den Berg, J, & Conijn, R.P. (2012). On the size of the largest cluster in 2D critical percolation. Electronic Communications in Probability, 17. doi:10.1214/ECP.v17-2263
Category: Uncategorized
# Black History Month
I took a risk this February. My school celebrates Black History Month in a number of ways, but I always feel like I need to do more. Here’s what I tried.
I’m not sure what made me think of this, but I decided that we would read a poem by an African-American poet each morning in homeroom. I figured this could be a simple yet powerful way to celebrate African-American culture, and honestly, I just thought it would be interesting. You can find the poems I used here.
I wasn’t really sure what to expect the first day. Would the students really listen? Would anyone want to participate? Would the experience be meaningful to anyone but me?
I explained to my homeroom the plan for the month. I told them that anyone could volunteer to read a poem or even bring in a poem of their choice. As I prepared to read the first poem, I paused and thought “Why not ask for a volunteer now?” I expected dead silence and blank stares. Instead, an energetic, excitable young man – who happens to be African-American – said he wanted to read the poem. Overjoyed. I was absolutely overjoyed.
As the month continued, I kept bringing in poems, and my students kept volunteering to read. It might have only been 7 or 8 students, but when I started, I had no expectations whatsoever. And while my students sometimes struggled to read the poems, they truly committed themselves to their delivery. And the rest of the class? Quiet, respectful, attentive. Did they find the poems interesting or meaningful or enjoyable? I can’t say, but I do know that they respected my idea and made it a reality.
Students read nearly all of the poems. I had to read 1 or 2 because of time constraints, and I asked a guidance counselor to read one. Her reading of Audre Lorde’s “Hanging Fire” truly moved me. I had hoped that having a “guest reader” would be special, but I was totally blown away. I think the kids were too.
For the last day of February, I decided to talk briefly about the idea of Black History Month and close with a short selection from a poem that means something to me. I thanked my students for committing to the poem readings all month and told them that I would really miss having a poem to read every day. Then, I attempted to tell them how I’d like us all to carry the message of Black History Month forward. That we need to spend all year trying to make our school, our community, and our country more tolerant and more just. I think I stumbled over my words a bit here. I was emotional, especially knowing what would come next. I closed with the last few lines of Amiri Baraka’s “Three Modes of History and Culture.”
I think about a time when I will be relaxed.
When flames and non-specific passion wear themselves
away. And my eyes and hands and mind can turn
and soften, and my songs will be softer
and lightly weight the air.
I’m not a poet. I’m not an English teacher. I’m not a literary scholar. Maybe this poem or any of the others mean something totally different than I think. I don’t think it matters, though. What matters is that we pushed ourselves to do something different, that we worked outside of our comfort zone, that we really tried to learn and understand.
But it’s not enough. I need to do more next year. I need to do more for my students. To let them know that their history and their culture matter. To let them know that they matter. To help us all learn to be better, more tolerant, more understanding, more generous in spirit.
This was a risk. I don’t know if I did a good thing. I don’t know if I made a mistake. I badly want feedback, but I’m also terrified that I sent a message I didn’t intend to send. It’s uncomfortable sometimes – teaching – but it’s worth it for those moments. Those powerful moments when twenty-five thirteen- and fourteen-year-old students devote their attention to listening to a classmate read a poem. I hope that I made a difference.
Between 1969 and 1972, twelve men walked on the moon. This is humanity’s greatest accomplishment – that we managed to send astronauts more than 200,000 miles through the unknowns and the dangers of space, that these astronauts set foot on another world, and that they returned home safely. We did this to explore. Yes, there was an element of Cold War competition, but in the end, these missions were about science, about discovery, and about challenging the limits of possibility.
John Young has died, leaving only five surviving astronauts who walked on the moon. Their ages: 87, 85, 85, 82, and 82. Young was 87. Eight others who flew to the moon without landing are still alive; the youngest is 81. I hope that these men have many years left, but realistically, the day will soon come when no living person has set foot on the moon or even left low Earth orbit. Despite all of the advances we have made, we have not yet surpassed this accomplishment from nearly fifty years ago. NASA’s priorities have certainly changed, and they still do lots of wonderful, important work. And perhaps sending an astronaut back to the moon would serve little purpose. But I cannot avoid the sadness I feel knowing that some of our greatest heroes will soon be gone and that they will leave us without successors.
Gemini 3. Gemini 10. Apollo 10. Apollo 16. STS-1. STS-9. What an amazing career.
The image on the left shows Young in 1965, a few weeks before the Gemini 3 mission. The image on the right shows Young (seated, second from right) in 1983, about six months before the STS-9 mission. Young ultimately worked for NASA for 42 years. My words cannot do justice to his great career. Instead, let me share with you some quotes I find particularly meaningful in light of his death.
My favorite description of John Young comes from Andrew Chaikin’s A Man on the Moon:
Inside Young was an unwavering determination, an overriding sense of responsibility – to the space program, to the country, to his crew – and an almost childlike sense of wonder at the universe.
But more than this, I think, Young felt a responsibility – a commitment – to truth and to knowledge. Chaikin writes:
More than most astronauts, Mattingly thought, John Young seemed mindful of the risks of his profession. Around the Astronaut Office, his memos were well known, sounding the alarm about some engineering problem he’d uncovered. He wouldn’t rest until he knew every detail about the particular system or technique that worried him. And when he had learned all he could, then it was time to go fly – with his eyes wide open. That was the only way to handle this business; that was what made him so good. Maybe Young worried so much because he saw so clearly. But when it came down to the real question – Will you fly it? – John’s answer would always be yes.
John Young, intrepid explorer. Perhaps in looking at heroic figures from the past we see in them what we want to see. Maybe we look for the best of ourselves in them. In John Young, I see a man who lived for the thrill of discovery, a man for whom being bold was a way of life, a man who acknowledged challenges but saw past them, a man with a vision of limitless possibility. Consider the scene Chaikin describes as Young exits the Space Shuttle Columbia after its maiden flight:
Later, after the ground crews had arrived, Young emerged and bounded down the stairway to inspect his ship, punching the air with his fist like a relief pitcher who had just won the World Series. That day, Young told a crowd of well-wishers, “We’re really not too far, the human race isn’t, from going to the stars.”
Nearly thirty-seven years after that flight – and nearly forty-six years since Young walked on the moon – the stars still seem not too far off. NASA, SpaceX, Blue Origin, and others continue to push boundaries and extend our reach into the stars. But no amount of technological advancement can replace the boldness and the vision of men like John Young. We lost a great man on January 5, 2018. Ad astra, John Young. Ad astra.
I recently watched this video produced by YouCubed and Jo Boaler that talks about giftedness. Essentially, the video argues that labeling students as gifted presents equity issues and does a disservice to students by giving them a fixed idea of what they can learn and do as well as how they should behave.
Giftedness is real. One definition of “gifted” is “a high level of intelligence [indicative of] advanced, highly integrated, and accelerated development of functions within the brain” (Clark, 2013). The Elementary and Secondary Education Act defines gifted students as those who “give evidence of high achievement capability … and who need services or activities not ordinarily provided by the school in order to fully develop those capabilities.” Just as some individuals have extraordinary artistic or athletic talents, some students have significant intellectual gifts. Acknowledging this fact does not force us to believe that some students cannot learn math. Nor does it force us to set limits on what we think students can learn and do.
The problem, I think, is that YouCubed has conflated the concept of giftedness with how this concept has been applied in schools. Even if many teachers and schools wrongly label and limit kids, that doesn’t mean giftedness is not a useful concept. It simply means that teachers need to do better with how we use the idea of giftedness.
This argument refers to ineffective and inappropriate uses of giftedness to suggest that gifted education is inherently inequitable. But we can provide services to gifted students without limiting other children’s potential. It’s bad teaching to suggest that gifted students should always know the answer or should not ask questions. Similarly, it’s bad teaching to suggest that non-gifted students cannot learn high levels of math or to place false limitations on what students can do. But these are problems with teacher behavior. These are not problems with the idea of gifted education.
Indeed, our developing knowledge of neuroplasticity and the idea that brains experience significant growth and change actually support labeling students as gifted. Why? Because acknowledging the incredible potential that some students have forces us to consider ways to help them realize that potential.
Is it inequitable to provide services such as enriched classes to gifted students? No. Equity means allowing every student the opportunity to achieve his or her potential. Equity does not mean offering the exact same opportunities to every student. Our obligation as educators is to create an environment that helps every student to learn and grow as much as possible. We can do so while accepting that some students learn faster or slower, that some students require more support or greater challenges.
Is everyone gifted? No. But that doesn’t mean we should place artificial limits on what students can learn and do. It’s okay to acknowledge the great intellectual capacity and potential that gifted students have. We can do this without saying that gifted students are better or deserve more. We cannot afford to avoid labeling gifted students, however, because doing so will make it harder to meet the needs of exceptional learners.
Note: I wrote a draft of this post after initially viewing the YouCubed video last month. I’ve fleshed out some of my commentary, but it remains mostly the same as I left it late in the evening on November 9th.
References
Clark, B. (2013). Growing up gifted: Developing the potential of children at home and at school. Boston: Pearson.
Elementary and Secondary Education Act, 20 U.S.C. § 7801 (1965).
# TMC17 Highlights and Shout-outs
I’ve been back from Twitter Math Camp for more than a day. I guess that means it’s time to reflect on some of the best parts and to recognize some people who made the experience wonderful.
Dylan Kane was a great roommate. I’d never met him before TMC, so I wasn’t sure quite what I was getting myself into. But he was supportive and insightful. I’m glad I know him, and I look forward to talking to him more in the future. He’s @math8_teacher on Twitter and is definitely worth a follow.
Chris Luzniak was my mentor. His exuberance helped make my first TMC an amazing experience. I’m so thankful for Chris’s kindness and good humor. I probably wouldn’t have talked to anyone if he hadn’t forced me to meet so many people on the first day!
Chris and Mattie B led the three-day “Talk Less, Smile More” session. I learned so much about bringing discussion into the math classroom, and I’m super excited to get my students talking this year. I already have a fun Talking Points activity planned for the first few days. I look forward to using some of Chris and Mattie’s ideas to make my classroom into a richer learning environment for everyone.
Grace Chen delivered an amazing keynote! Her talk both saddened me and inspired me. As Grace showed, the deck is stacked against many people, and by their designs, systems in this country often work to prevent marginalized peoples from getting ahead. Grace’s thoughtfulness and passion cannot be denied. I’m so glad to have met her. I strongly encourage you to follow her on Twitter @graceachen. You won’t regret it!
I took a risk and talked to Sammie Marshall on Thursday. She listened to my rambling thoughts about diversity at my school, promoting tolerance, and supporting students from marginalized groups. These can be challenging topics to discuss, so I’m thankful for the opportunity to talk to someone as open-minded and as understanding as Sammie. She has a blog, and I really hope she starts writing more frequently!
I enjoyed Chris Shore‘s Clothesline Math session so much that I went to a second session he held the following day! Chris’s enthusiasm made his session an absolute blast, and Clothesline Math seems like a fantastic way to build number sense. I’m looking forward to using the clothesline this fall. I have a feeling it’s going to make a big difference with my students. Chris is on Twitter (@MathProjects), and even if he’s only 10% as awesome online as he was in person, that’s still pretty darn awesome!
Sam Shah is just an awesome guy. I told him about the Quote Board at the newbie dinner, and he urged me to blog about it. Who can say no to Sam Shah?! The post proved quite popular, which is a nice affirmation of the work I’ve done to build a strong classroom culture. I’ve read Sam’s blog for years, so I truly enjoyed meeting him. Thanks, Sam, for being awesome and for giving me a confidence boost!
Major shout-out to Lisa Bejarano, who was one of the first people I met and talked with. Her kindness and her support definitely helped me feel more comfortable. Lisa came up to me and complimented me on The Terror of Twitter Math Camp at a point when I didn’t think anyone even looked at it! I haven’t read her blog yet, but I’m looking forward to exploring it!
Shout-out as well to Deb Boden who talked to me every day – even after I’d forgotten that we’d met on the first day! Deb is just one of the many kind, welcoming people I met at TMC. Thanks to Deb and everyone else for making me feel like part of the TMC community.
I had an awesome conversation with Scott Miller on Friday while walking back to the hotel. Scott had some great advice on introducing new ideas to colleagues. I hope to demonstrate some of what I learned at TMC to my colleagues and let them actually experience the math!
I met Kate Nowak at Rose and Crown after Descon17. I’d read her blog many times over the years, so it was exciting to actually talk to her. We talked about Illustrative Math and curriculum in general. I’m looking forward to taking a deeper look at IM this year to see if it would be a good fit for my school. And I’m keeping my fingers crossed that Kate, who has blogged twice in the past week, will keep up the pace!
Plenty of other people made TMC17 memorable for me. Thanks to everyone for creating such a welcoming and enriching environment. I’m so excited for TMC18 in Cleveland!!!
# The Joy of Twitter Math Camp
On Thursday, I wrote about The Terror of Twitter Math Camp. Three amazing, inspiring, mind-blowing, thought-provoking, deeply moving days later and I’m ready to write the sequel.
I am a math teacher, but I don’t teach math.
I teach people.
People.
People make me nervous.
And anxious.
And uncomfortable.
But I work with people every day.
All day.
And I’m at a conference with people.
Who make me nervous.
And anxious.
And uncomfortable.
But they don’t do that.
I do that.
They talk to me.
And laugh with me.
And sit next to me.
And shake my hand.
And listen to me.
And smile at me.
And some of them feel the same way I feel.
Nervous.
And anxious.
And uncomfortable.
Maybe they think I make them feel that way.
But I don’t.
I talk to them.
And laugh with them.
And sit next to them.
And shake their hands.
And listen to them.
And smile at them.
But sometimes I don’t do any of that.
Sometimes I don’t talk to anyone.
I don’t laugh.
I sit by myself.
I don’t look at anyone.
I don’t smile.
And sometimes they do all of that too.
Even though they teach people.
And work with people every day.
All day.
And they’re at a conference with people.
The same conference I’m at with people.
Because we want to be better math teachers.
Who don’t teach math.
We teach people.
People who make us nervous.
And anxious.
And uncomfortable.
I don’t know why I do it.
Or why they do it.
But I know exactly why.
Because I love people.
I want to help people.
I want to support people.
Even though people make me nervous.
And anxious.
And uncomfortable.
And that’s why I’m here and why they’re here.
And that’s why we share.
Even when we’re nervous.
And that’s why we talk.
Even when we’re anxious.
And that’s why we smile.
Even when we’re uncomfortable.
Because even though it doesn’t feel good.
It feels amazing.
And inspiring.
And wonderful.
And better than anything else in the world.
Just to make one student.
Even only one student.
More successful. Happier. Stronger. Kinder. Wiser. More confident.
To make one student into the person that student wants to be.
No matter what.
To make that student’s life better.
I can be nervous.
And anxious.
And uncomfortable.
For as long as I live.
But it’s worth it.
For that one student.
Even only one student.
Thank you all for the kindness, for the support, for the friendship, for the wisdom, for the generosity, for the hundreds of small gestures that made my time here so meaningful.
# The Terror of Twitter Math Camp
I don’t know who to talk to or what to say or where to stand or when to get involved or oh my gosh what do I do with my hands when I’m standing here shouldn’t talking to people be easier but it just seems so forced and am I thinking too long or not long enough and am I staring did I nod enough or too much am I agreeing too much do I need to jump in here what is this person’s name again do they even know I’m here why am I here anyway but maybe I’m doing okay or maybe not you can make it through this I know you can I know you can I know you can just be yourself but not too much yourself don’t seem crazy or strange or weird or insane or maybe just don’t worry about what anybody else is saying or doing but are they judging me and do I even care wait have I said anything lately what’s the right thing to say did someone just say that make sure to nod again smile not too much that looks creepy keep going you’re doing okay you can do this you can do this you can do this it will all be over eventually just hang in there it’s not that bad you will be stronger because of this it’s about the learning am I thinking too much again quick smile and nod and agree and make eye contact okay that’s too much eye contact don’t scare people but you don’t need to stare at their shoes either and make sure to smile and nod and don’t look uncomfortable but don’t look like you’re trying not to look uncomfortable and make sure to say something but not something obvious and don’t make jokes unless they’re good jokes how do I know if that’s funny anyway and do I laugh at my own joke and if someone else makes a joke laugh but not too much because nobody likes someone who laughs too much I don’t think I’ve blinked lately do people usually have to remember to blink or is that just me now I’m blinking too much okay maybe there’s something in my eye play it off okay smile again and nod yes I agree okay good it was nice meeting you okay have a good afternoon see you later I’ll try to be more normal I 
promise I promise I can do better trust me I’m just like all of you and I really want to be here I really do I really do I really do deep breaths deep breaths deep breaths deep breaths
# Get On My Fraction Level!
Many students have trouble with fractions. When I taught at a high school, my 10th and 11th grade students regularly had difficulty performing operations with fractions. As an 8th grade teacher, I’ve tried to help my students develop, refine, and maintain strong fraction skills. Don’t get me wrong: I don’t consider fluency in performing operations with fractions to be the most important skill for my students to have, but it’s certainly one that will contribute to their success at the high school level. With that in mind, here’s an approach I used this year to work on fractions.
After the warmup and overview of the day’s class some time in October, I pulled up a slide with four fraction multiplication problems. These first ones were relatively simple like $4\cdot \frac{1}{2}$. Students had little trouble performing the multiplication (yay!), and thankfully, students presented a number of different methods. The most common early responses were $\frac{4}{1}\cdot \frac{1}{2}=\frac{4}{2}=2$ and $4\div 2=2$. As I continued to present problems over the next few weeks, I added complications. Students noticed that simplifying often made the multiplication easier (e.g. $\frac{10}{5}\cdot 44$). The big breakthrough came when I presented a particularly annoying pair of fractions to multiply like $\frac{27}{7}\cdot \frac{14}{9}$. To this point, I had not pushed students to use a particular method; any simplifying they did came from them not me. Whoever offered the response of $\frac{378}{63}$ did not respond kindly to the question of whether that fraction could be simplified. By this point, students had been doing so much simplifying that it was no surprise to anyone that their lives would be easier if they found a way to simplify before multiplying. A brief discussion of the commutative property allowed a student to rewrite the multiplication as $\frac{27}{9}\cdot \frac{14}{7}$, which everyone in the classroom felt comfortable multiplying. It was a great moment of mathematical discovery.
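For anyone who wants to double-check the arithmetic, Python’s built-in `fractions` module confirms that both routes land in the same place. (This is just a quick sketch on my part, not anything we used in class.)

```python
from fractions import Fraction

# Multiplying straight across gives 378/63, which Fraction reduces for us.
raw = Fraction(27, 7) * Fraction(14, 9)

# Regrouping by the commutative property pairs each numerator with a
# denominator that divides it: (27/9) * (14/7) = 3 * 2.
regrouped = Fraction(27, 9) * Fraction(14, 7)

print(raw, regrouped)  # both reduce to 6
```

Same answer either way; the regrouped version is just the one a student can do in their head.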
As the weeks progressed, I continued to throw more and more challenging multiplication problems at them, and I also started to incorporate some addition, subtraction, and division. Students began feeling much more comfortable with fractions than they ever had before, even if they still weren’t the biggest fraction fans around. This fraction work paid off when we wrote equations of lines, and in general, I think it gave students some confidence in an area where they had so little before.
I definitely plan to continue using “Get On My Fraction Level” in my classes this coming school year. I’d like to find a way to incorporate more active participation. I might give students a weekly template to use each day when we do our fractions. I did that two years ago with scientific notation, and it worked pretty well. One big concern is time: with so many topics to cover, it’s difficult to carve out time to work on something that isn’t really an 8th grade standard. Having seen how working with fractions helped so many of my students grow, however, I will definitely find a way to incorporate regular fraction work into my lessons.
## Sun, Zhizhong
Author ID: sun.zhizhong
Published as: Sun, Zhi-Zhong; Sun, Zhizhong; Sun, Zhi-zhong; Sun, ZhiZhong; Sun, Zhi Zhong
Documents Indexed: 161 Publications since 1989, including 2 Books
Co-Authors: 67 Co-Authors with 129 Joint Publications; 2,162 Co-Co-Authors
### Co-Authors
23 single-authored
19 Gao, Guanghua
9 Zhao, Xuan
8 Hao, Zhaopeng
8 Ji, Cuicui
8 Liao, HongLin
8 Sun, Hong
8 Wu, Xiaonan
7 Ren, Jincheng
7 Zhang, Yanan
6 Dai, Weizhong
6 Du, Rui
6 Shen, Jinye
5 Cao, Haiyan
5 Cao, Wanrong
5 Du, Ruilian
4 Shi, Hansheng
4 Wang, Xuping
4 Zhang, Jiwei
4 Zhang, Qifeng
3 Zhao, Lei
2 Alikhanov, Anatoly A.
2 Brenner, Gunther
2 Chen, Shaobing
2 Durst, Franz
2 Li, Fule
2 Li, Weidong
2 Li, Xueling
2 Liu, Jianming
2 Pan, Zhushan
2 Qiao, Zhonghua
2 Sun, Haiwei
2 Wang, Desheng
2 Wang, Jialing
2 Wang, Tingchun
2 Wu, Hongwei
2 Ye, Chao-rong
2 Zhang, Lingyun
2 Zhang, Zaibin
2 Zhang, Zhengru
2 Zhu, Youlan
1 Anjos, Miguel F.
1 Axelbaum, Richard L.
1 Chao, Bor-Herng
1 Chern, I-Liang
1 Cui, Jin
1 Davenne, Luc
1 Emine, Y.
1 Fan, Kai
1 Hamill, P. A.
1 Han, Houde
1 Hesthaven, Jan S.
1 Jiang, Mingjie
1 Karniadakis, George Em
1 Li, Juan
1 Li, Youwei
1 Lin, Guang
1 Liu, Desheng
1 Lodi, Andrea
1 Pecastaings, F.
1 Peng, Fei
1 Qin, Yifan
1 Ren, Yunzhu
1 Santa, K. J.
1 Shen, Longjun
1 Shi, Peihu
1 Stocker, D. P.
1 Stynes, Martin
1 Su, Yucheng
1 Sun, Weiwei
1 Sunderland, P. B.
1 Urban, D. L.
1 Vigne, Samuel A.
1 Wan, Zhengsu
1 Wu, Chikuang
1 Wu, Jingyu
1 Xie, Shijie
1 Xu, Peipei
1 Yan, Yonggui
1 Yang, Mei
1 Zhang, Jiyuan
1 Zhang, Yulian
1 Zhang, Zhimin
1 Zhao, Dandan
1 Zhu, Qiding
1 Zhu, Yun
### Serials
26 Numerical Methods for Partial Differential Equations
14 Journal of Computational Physics
12 Journal of Scientific Computing
8 Applied Mathematics and Computation
6 Journal of Southeast University. English Edition
6 International Journal of Computer Mathematics
6 East Asian Journal on Applied Mathematics
5 Journal of Computational and Applied Mathematics
5 Numerical Mathematics
5 Mathematica Numerica Sinica
4 Computers & Mathematics with Applications
4 SIAM Journal on Numerical Analysis
4 Journal of Nanjing University. Mathematical Biquarterly
4 Applied Mathematical Modelling
3 Journal of Computational Mathematics
3 Applied Mathematics Letters
3 Numerical Algorithms
3 Computational Methods in Applied Mathematics
3 Numerical Mathematics: Theory, Methods and Applications
2 Mathematics of Computation
2 Numerische Mathematik
2 Acta Mathematicae Applicatae Sinica
2 Applied Numerical Mathematics
2 Journal of Southeast University
2 Numerical Mathematics
2 Advances in Computational Mathematics
2 Communications in Computational Physics
2 Science China. Mathematics
1 International Journal of Heat and Mass Transfer
1 Physics Letters. A
1 Problems of Information Transmission
1 Operations Research Letters
1 Applied Mathematics and Mechanics. (English Edition)
1 European Journal of Mechanics. A. Solids
1 Science in China. Series A
1 Annals of Operations Research
1 SIAM Journal on Scientific Computing
1 Journal on Numerical Methods and Computer Applications
1 Chinese Journal of Numerical Mathematics and Applications
1 Combustion Theory and Modelling
1 Central European Journal of Mathematics
1 Advances in Applied Mathematics and Mechanics
1 Scientia Sinica. Mathematica
1 Springer Finance
1 Communications on Applied Mathematics and Computation
### Fields
148 Numerical analysis (65-XX)
113 Partial differential equations (35-XX)
12 Classical thermodynamics, heat transfer (80-XX)
11 Fluid mechanics (76-XX)
10 Real functions (26-XX)
9 Statistical mechanics, structure of matter (82-XX)
7 Ordinary differential equations (34-XX)
6 Integral equations (45-XX)
5 Mechanics of deformable solids (74-XX)
2 Probability theory and stochastic processes (60-XX)
2 Optics, electromagnetic theory (78-XX)
2 Quantum theory (81-XX)
2 Geophysics (86-XX)
2 Game theory, economics, finance, and other social and behavioral sciences (91-XX)
1 Combinatorics (05-XX)
1 Special functions (33-XX)
1 Difference and functional equations (39-XX)
1 Approximations and expansions (41-XX)
1 Harmonic analysis on Euclidean spaces (42-XX)
1 Statistics (62-XX)
1 Computer science (68-XX)
1 Operations research, mathematical programming (90-XX)
1 Biology and other natural sciences (92-XX)
1 Information and communication theory, circuits (94-XX)
### Citations contained in zbMATH Open
123 Publications have been cited 3,525 times in 1,796 Documents Cited by Year
A fully discrete difference scheme for a diffusion-wave system. Zbl 1094.65083
Sun, Zhi-Zhong; Wu, Xiaonan
2006
A compact finite difference scheme for the fractional sub-diffusion equations. Zbl 1211.65112
Gao, Guanghua; Sun, Zhizhong
2011
A new fractional numerical differentiation formula to approximate the Caputo fractional derivative and its applications. Zbl 1349.65088
Gao, Guang-hua; Sun, Zhi-zhong; Zhang, Hong-wei
2014
A fourth-order compact ADI scheme for two-dimensional nonlinear space fractional Schrödinger equation. Zbl 1328.65187
Zhao, Xuan; Sun, Zhi-Zhong; Hao, Zhao-Peng
2014
A compact difference scheme for the fractional diffusion-wave equation. Zbl 1201.65154
Du, R.; Cao, W. R.; Sun, Z. Z.
2010
Maximum norm error bounds of ADI and compact ADI methods for solving parabolic equations. Zbl 1196.65154
Liao, Honglin; Sun, Zhizhong
2010
Alternating direction implicit schemes for the two-dimensional fractional sub-diffusion equation. Zbl 1242.65174
Zhang, Ya-Nan; Sun, Zhi-Zhong
2011
Finite difference methods for the time fractional diffusion equation on non-uniform meshes. Zbl 1349.65359
Zhang, Ya-nan; Sun, Zhi-zhong; Liao, Hong-lin
2014
Compact alternating direction implicit scheme for the two-dimensional fractional diffusion-wave equation. Zbl 1251.65126
Zhang, Ya-Nan; Sun, Zhi-Zhong; Zhao, Xuan
2012
A fourth-order approximation of fractional derivatives with its applications. Zbl 1352.65238
Hao, Zhao-peng; Sun, Zhi-zhong; Cao, Wan-rong
2015
Second-order approximations for variable order fractional derivatives: algorithms and applications. Zbl 1349.65092
Zhao, Xuan; Sun, Zhi-zhong; Karniadakis, George Em
2015
Error estimates of Crank-Nicolson-type difference schemes for the subdiffusion equation. Zbl 1251.65132
Zhang, Yanan; Sun, Zhizhong; Wu, Hongwei
2011
Compact difference scheme for the fractional sub-diffusion equation with Neumann boundary conditions. Zbl 1291.35428
Ren, Jincheng; Sun, Zhi-Zhong; Zhao, Xuan
2013
A box-type scheme for fractional sub-diffusion equation with Neumann boundary conditions. Zbl 1227.65075
Zhao, Xuan; Sun, Zhi-Zhong
2011
Error estimate of fourth-order compact scheme for linear Schrödinger equations. Zbl 1208.65130
Liao, Honglin; Sun, Zhizhong; Shi, Hansheng
2010
Stability and convergence of finite difference schemes for a class of time-fractional sub-diffusion equations based on certain superconvergence. Zbl 1349.65295
Gao, Guang-Hua; Sun, Hai-Wei; Sun, Zhi-Zhong
2015
A second-order accurate linearized difference scheme for the two-dimensional Cahn-Hilliard equation. Zbl 0847.65056
Sun, Zhizhong
1995
A finite difference scheme for fractional sub-diffusion equations on an unbounded domain using artificial boundary conditions. Zbl 1242.65160
Gao, Guang-Hua; Sun, Zhi-Zhong; Zhang, Ya-Nan
2012
Some high-order difference schemes for the distributed-order differential equations. Zbl 1349.65296
Gao, Guang-hua; Sun, Hai-wei; Sun, Zhi-zhong
2015
Compact difference schemes for heat equation with Neumann boundary conditions. Zbl 1181.65115
Sun, Zhi-Zhong
2009
A high-order compact finite difference scheme for the fractional sub-diffusion equation. Zbl 1328.65176
Ji, Cui-cui; Sun, Zhi-zhong
2015
On the $$L_\infty$$ convergence of a difference scheme for coupled nonlinear Schrödinger equations. Zbl 1198.65173
Sun, Zhi-Zhong; Zhao, Dan-Dan
2010
The temporal second order difference schemes based on the interpolation approximation for solving the time multi-term and distributed-order fractional sub-diffusion equations. Zbl 1381.65064
Gao, Guang-hua; Alikhanov, Anatoly A.; Sun, Zhi-zhong
2017
Numerical algorithm with high spatial accuracy for the fractional diffusion-wave equation with von Neumann boundary conditions. Zbl 1281.65113
Ren, Jincheng; Sun, Zhi-zhong
2013
Compact Crank-Nicolson schemes for a class of fractional Cattaneo equation in inhomogeneous medium. Zbl 1319.65084
Zhao, Xuan; Sun, Zhi-Zhong
2015
Some temporal second order difference schemes for fractional wave equations. Zbl 1352.65269
Sun, Hong; Sun, Zhi-Zhong; Gao, Guang-Hua
2016
Two alternating direction implicit difference schemes for two-dimensional distributed-order fractional diffusion equations. Zbl 1373.65055
Gao, Guang-hua; Sun, Zhi-zhong
2016
A linearized compact difference scheme for a class of nonlinear delay partial differential equations. Zbl 1352.65270
Sun, Zhi-Zhong; Zhang, Zai-Bin
2013
On Tsertvadze’s difference scheme for the Kuramoto-Tsuzuki equation. Zbl 0933.65104
Sun, Zhizhong; Zhu, Qiding
1998
A three level linearized compact difference scheme for the Cahn-Hilliard equation. Zbl 1262.65106
Li, Juan; Sun, Zhizhong; Zhao, Xuan
2012
Stability and convergence of second-order schemes for the nonlinear epitaxial growth model without slope selection. Zbl 1305.65186
Qiao, Zhonghua; Sun, Zhi-Zhong; Zhang, Zhengru
2015
Efficient numerical solution of the multi-term time fractional diffusion-wave equation. Zbl 1322.65088
Ren, Jincheng; Sun, Zhi-Zhong
2015
Error analysis of a compact ADI scheme for the 2D fractional subdiffusion equation. Zbl 1304.65208
Zhang, Ya-Nan; Sun, Zhi-Zhong
2014
Convergence of difference scheme for heat equation in unbounded domains using artificial boundary conditions. Zbl 1053.65074
Wu, Xiaonan; Sun, Zhizhong
2004
A finite difference scheme for semilinear space-fractional diffusion equations with time delay. Zbl 1410.65310
Hao, Zhaopeng; Fan, Kai; Cao, Wanrong; Sun, Zhizhong
2016
Convergence analysis of a linearized Crank-Nicolson scheme for the two-dimensional complex Ginzburg-Landau equation. Zbl 1274.65262
Zhang, Ya-Nan; Sun, Zhi-Zhong; Wang, Ting-Chun
2013
The stability and convergence of two linearized finite difference schemes for the nonlinear epitaxial growth model. Zbl 1252.82071
Qiao, Zhonghua; Sun, Zhi-Zhong; Zhang, Zhengru
2012
Some high order difference schemes for the space and time fractional Bloch-Torrey equations. Zbl 1410.65329
Sun, Hong; Sun, Zhizhong; Gao, Guanghua
2016
The stability and convergence of a difference scheme for the Schrödinger equation on an infinite domain by using artificial boundary conditions. Zbl 1094.65088
Sun, Zhi-zhong; Wu, Xiaonan
2006
Efficient and stable numerical methods for multi-term time fractional sub-diffusion equations. Zbl 1320.65120
Ren, Jincheng; Sun, Zhi-Zhong
2014
The finite difference approximation for a class of fractional sub-diffusion equations on a space unbounded domain. Zbl 1286.35251
Gao, Guang-Hua; Sun, Zhi-Zhong
2013
Fast evaluation of the Caputo fractional derivative and its applications to fractional diffusion equations: a second-order scheme. Zbl 1488.65306
Yan, Yonggui; Sun, Zhi-Zhong; Zhang, Jiwei
2017
Finite difference methods for a nonlinear and strongly coupled heat and moisture transport system in textile materials. Zbl 1332.76034
Sun, Weiwei; Sun, Zhizhong
2012
Temporal second order difference schemes for the multi-dimensional variable-order time fractional sub-diffusion equations. Zbl 1447.65020
Du, Ruilian; Alikhanov, Anatoly A.; Sun, Zhi-Zhong
2020
The temporal second order difference schemes based on the interpolation approximation for the time multi-term fractional wave equation. Zbl 1437.35696
Sun, Hong; Zhao, Xuan; Sun, Zhi-Zhong
2019
A linearized high-order difference scheme for the fractional Ginzburg-Landau equation. Zbl 1359.65150
Hao, Zhao-Peng; Sun, Zhi-Zhong
2017
Maximum norm error estimates of efficient difference schemes for second-order wave equations. Zbl 1227.65082
Liao, Honglin; Sun, Zhizhong
2011
Numerical algorithms with high spatial accuracy for the fourth-order fractional sub-diffusion equations with the first Dirichlet boundary conditions. Zbl 1373.65057
Ji, Cui-cui; Sun, Zhi-zhong; Hao, Zhao-peng
2016
A second-order accurate finite difference scheme for a class of nonlocal parabolic equations with natural boundary conditions. Zbl 0873.65129
Sun, Zhizhong
1996
Two unconditionally stable and convergent difference schemes with the extrapolation method for the one-dimensional distributed-order differential equations. Zbl 1339.65115
Gao, Guang-Hua; Sun, Zhi-Zhong
2016
Compact difference schemes for heat equation with Neumann boundary conditions. II. Zbl 1422.65152
Gao, Guang-Hua; Sun, Zhi-Zhong
2013
Convergence of compact ADI method for solving linear Schrödinger equations. Zbl 1259.65135
Liao, Honglin; Sun, Zhizhong; Shi, Hansheng; Wang, Tingchun
2012
A three-level linearized compact difference scheme for the Ginzburg-Landau equation. Zbl 1320.65116
Hao, Zhao-Peng; Sun, Zhi-Zhong; Cao, Wan-Rong
2015
Lubich second-order methods for distributed-order time-fractional differential equations with smooth solutions. Zbl 1457.65047
Du, Rui; Hao, Zhao-Peng; Sun, Zhi-Zhong
2016
A note on finite difference method for generalized Zakharov equations. Zbl 1007.65052
Sun, Zhizhong
2000
An unconditionally stable and $$O(\tau^2+h^4)$$ order $$L_\infty$$ convergent difference scheme for linear parabolic equations with variable coefficients. Zbl 0996.65096
Sun, Zhi-Zhong
2001
Two alternating direction implicit difference schemes for solving the two-dimensional time distributed-order wave equations. Zbl 1372.65230
Gao, Guang-hua; Sun, Zhi-zhong
2016
A two-level compact ADI method for solving second-order wave equations. Zbl 1279.65099
Liao, Hong-Lin; Sun, Zhi-Zhong
2013
A linearized difference scheme for the Kuramoto-Tsuzuki equation. Zbl 0844.65068
Sun, Z. Z.
1996
A finite difference scheme for solving the Timoshenko beam equations with boundary feedback. Zbl 1107.74049
Li, Fule; Sun, Zhizhong
2007
An analysis for a high-order difference scheme for numerical solution to $$U_{tt} = A(x,t)U_{xx} + F(x,t,u,u_{t},u_{x})$$. Zbl 1119.65081
Li, Wei-Dong; Sun, Zhi-Zhong; Zhao, Lei
2007
On the stability and convergence of a difference scheme for an one-dimensional parabolic inverse problem. Zbl 1119.65089
Ye, Chao-rong; Sun, Zhi-zhong
2007
A sufficient condition for the existence of restricted fractional $$(g, f)$$-factors in graphs. Zbl 1459.05275
Zhou, S.; Sun, Z.; Pan, Q.
2020
Two difference schemes for solving the one-dimensional time distributed-order fractional wave equations. Zbl 1372.65229
Gao, Guang-hua; Sun, Zhi-zhong
2017
A high-order difference scheme for a nonlocal boundary-value problem for the heat equation. Zbl 0996.65095
Sun, Zhi-Zhong
2001
Corrected explicit-implicit domain decomposition algorithms for two-dimensional semilinear parabolic equations. Zbl 1183.65120
Liao, HongLin; Shi, HanSheng; Sun, ZhiZhong
2009
On two linearized difference schemes for Burgers’ equation. Zbl 1398.65219
Sun, Hong; Sun, Zhi-zhong
2015
A second-order finite difference scheme for solving the dual-phase-lagging equation in a double-layered nanoscale thin film. Zbl 1359.65161
Sun, Hong; Sun, Zhi-Zhong; Dai, Weizhong
2017
Analysis of high-order absorbing boundary conditions for the Schrödinger equation. Zbl 1388.65070
Zhang, Jiwei; Sun, Zhizhong; Wu, Xiaonan; Wang, Desheng
2011
The high-order compact numerical algorithms for the two-dimensional fractional sub-diffusion equation. Zbl 1410.65315
Ji, Cui-cui; Sun, Zhi-zhong
2015
An H2N2 interpolation for Caputo derivative with order in $$(1,2)$$ and its application to time-fractional wave equations in more than one space dimension. Zbl 1439.65098
Shen, Jinye; Li, Changpin; Sun, Zhi-zhong
2020
A finite difference scheme on graded meshes for time-fractional nonlinear Korteweg-de Vries equation. Zbl 1429.65199
Shen, Jinye; Sun, Zhi-zhong; Cao, Wanrong
2019
A second order accurate difference scheme for the heat equation with concentrated capacity. Zbl 1060.65097
Sun, Zhizhong; Zhu, You-lan
2004
A generalized box scheme for the numerical solution of the Kuramoto-Tsuzuki equation. Zbl 0990.65503
Sun, Zhizhong
1996
Numerical schemes for solving the time-fractional dual-phase-lagging heat conduction model in a double-layered nanoscale thin film. Zbl 1434.65120
Ji, Cui-cui; Dai, Weizhong; Sun, Zhi-zhong
2019
An unconditionally stable and high-order convergent difference scheme for Stokes’ first problem for a heated generalized second grade fluid with fractional derivative. Zbl 1399.65154
Ji, Cuicui; Sun, Zhizhong
2017
On $$L_\infty$$ convergence of a linearized difference scheme for the Kuramoto-Tsuzuki equation. Zbl 0911.65076
Sun, Zhizhong
1997
A linearized compact difference scheme for an one-dimensional parabolic inverse problem. Zbl 1168.65378
Ye, Chao-Rong; Sun, Zhi-Zhong
2009
Maximum norm error estimates of the Crank-Nicolson scheme for solving a linear moving boundary problem. Zbl 1204.65112
Cao, Wan-Rong; Sun, Zhi-Zhong
2010
Two finite difference schemes for the phase field crystal equation. Zbl 1335.82030
Cao, HaiYan; Sun, ZhiZhong
2015
New approximations for solving the Caputo-type fractional partial differential equations. Zbl 1452.65176
Ren, Jincheng; Sun, Zhi-zhong; Dai, Weizhong
2016
Numerical method for solving the time-fractional dual-phase-lagging heat conduction equation with the temperature-jump boundary condition. Zbl 1422.65158
Ji, Cui-cui; Dai, Weizhong; Sun, Zhi-zhong
2018
A difference scheme for Burgers equation in an unbounded domain. Zbl 1214.65047
Sun, Zhi-Zhong; Wu, Xiao-Nan
2009
A high order accurate numerical method for solving two-dimensional dual-phase-lagging equation with temperature jump boundary condition in nanoheat conduction. Zbl 1335.65074
Sun, Hong; Du, Rui; Dai, Weizhong; Sun, Zhi-Zhong
2015
A new higher-order accurate numerical method for solving heat conduction in a double-layered film with the Neumann boundary condition. Zbl 1310.65096
Sun, Zhi-Zhong; Dai, Weizhong
2014
Two alternating direction implicit difference schemes with the extrapolation method for the two-dimensional distributed-order differential equations. Zbl 1443.65124
Gao, Guang-hua; Sun, Zhi-zhong
2015
Fast finite difference schemes for time-fractional diffusion equations with a weak singularity at initial time. Zbl 1468.65110
Shen, Jin-Ye; Sun, Zhi-Zhong; Du, Rui
2018
A Crank-Nicolson scheme for a class of delay nonlinear parabolic differential equations. Zbl 1240.65266
Zhang, Zaibin; Sun, Zhizhong
2010
The stability and convergence of an explicit difference scheme for the Schrödinger equation on an infinite domain by using artificial boundary conditions. Zbl 1175.65105
Sun, Zhi-Zhong
2006
A three-level linearized finite difference scheme for the Camassa-Holm equation. Zbl 1290.65071
Cao, Hai-Yan; Sun, Zhi-Zhong; Gao, Guang-Hua
2014
Numerical simulation of turbulent jet flow and combustion. Zbl 0969.76602
Zhou, X.; Sun, Z.; Durst, F.; Brenner, G.
1999
A new class of difference schemes for linear parabolic equations in 1-D. Zbl 0900.65261
Sun, Zhizhong
1994
Finite difference method for reaction-diffusion equation with nonlocal boundary conditions. Zbl 1174.65460
Liu, Jianming; Sun, Zhizhong
2007
Maximum norm error analysis of difference schemes for fractional diffusion equations. Zbl 1339.65136
Ren, Jincheng; Sun, Zhi-zhong
2015
A high-order difference scheme for the fractional sub-diffusion equation. Zbl 1364.65164
Hao, Zhao-Peng; Lin, Guang; Sun, Zhi-Zhong
2017
A finite difference approach for the initial-boundary value problem of the fractional Klein-Kramers equation in phase space. Zbl 1254.65014
Gao, Guang-hua; Sun, Zhi-zhong
2012
A new analytical technique of the $$L$$-type difference schemes for time fractional mixed sub-diffusion and diffusion-wave equations. Zbl 07206944
Sun, Zhi-zhong; Ji, Cui-cui; Du, Ruilian
2020
The pointwise error estimates of two energy-preserving fourth-order compact schemes for viscous Burgers’ equation. Zbl 1472.65103
Wang, Xuping; Zhang, Qifeng; Sun, Zhi-zhong
2021
A second-order linearized difference scheme on nonuniform meshes for nonlinear parabolic systems with Dirichlet boundary value conditions. Zbl 1033.65076
Zhang, Ling-Yun; Sun, Zhi-Zhong
2003
Derivative securities and difference methods. 2nd ed. Zbl 1337.91006
Zhu, You-lan; Wu, Xiaonan; Chern, I-Liang; Sun, Zhi-zhong
2013
Two finite difference schemes for multi-dimensional fractional wave equations with weakly singular solutions. Zbl 1476.65191
Shen, Jinye; Stynes, Martin; Sun, Zhi-Zhong
2021
A fast temporal second-order compact ADI difference scheme for the 2D multi-term fractional wave equation. Zbl 1461.65231
Sun, Hong; Sun, Zhi-zhong
2021
A fast temporal second-order compact ADI scheme for time fractional mixed diffusion-wave equations. Zbl 1482.65141
Du, Rui-Lian; Sun, Zhi-Zhong
2021
Pointwise error estimate in difference setting for the two-dimensional nonlinear fractional complex Ginzburg-Landau equation. Zbl 1480.65230
Zhang, Qifeng; Hesthaven, Jan S.; Sun, Zhi-zhong; Ren, Yunzhu
2021
Temporal second-order difference methods for solving multi-term time fractional mixed diffusion and wave equations. Zbl 07384500
Du, Rui-lian; Sun, Zhi-zhong
2021
The study of exact and numerical solutions of the generalized viscous Burgers’ equation. Zbl 1453.65240
Zhang, Qifeng; Qin, Yifan; Wang, Xuping; Sun, Zhi-zhong
2021
Fractional differential equations. Finite difference methods. Zbl 1440.65003
Sun, Zhi-zhong; Gao, Guang-hua
2020
The conservation and convergence of two finite difference schemes for KdV equations with initial and boundary value conditions. Zbl 1463.65246
Shen, Jinye; Wang, Xuping; Sun, Zhizhong
2020
A linearized second-order difference scheme for the nonlinear time-fractional fourth-order reaction-diffusion equation. Zbl 1463.65250
Sun, Hong; Sun, Zhizhong; Du, Rui
2019
A compact difference scheme for multi-point boundary value problems of heat equations. Zbl 1449.65203
Wang, Xuping; Sun, Zhizhong
2019
A high-order difference scheme for the space and time fractional Bloch-Torrey equation. Zbl 1382.65262
Zhu, Yun; Sun, Zhi-Zhong
2018
A high accurate and conservative difference scheme for the solutions of nonlinear Schrödinger equations. Zbl 1340.65166
Cui, Jin; Sun, Zhizhong; Wu, Hongwei
2015
A new fractional numerical differentiation formula to approximate the Caputo fractional derivative and its applications. Zbl 1349.65088
Gao, Guang-hua; Sun, Zhi-zhong; Zhang, Hong-wei
2014
A numerical method for solving the nonlinear Fermi-Pasta-Ulam problem. Zbl 1290.65081
Ren, Jincheng; Sun, Zhi-Zhong; Cao, Hai-Yan
2014
A second-order three-level difference scheme for a magneto-thermo-elasticity model. Zbl 1311.74136
Cao, Hai-Yan; Sun, Zhi-Zhong; Zhao, Xuan
2014
A linearized difference scheme for semilinear parabolic equations with nonlinear absorbing boundary conditions. Zbl 1245.65114
Sun, Zhi-Zhong; Wu, Xiaonan; Zhang, Jiwei; Wang, Desheng
2012
A compact finite difference scheme for the fractional sub-diffusion equations. Zbl 1211.65112
Gao, Guanghua; Sun, Zhizhong
2011
Maximum norm error analysis of explicit schemes for two-dimensional nonlinear Schrödinger equations. Zbl 1488.65255
Liao, Honglin; Sun, Zhizhong; Shi, Hansheng
2010
A second-order accurate difference scheme for the two-dimensional Burgers’ system. Zbl 1153.76048
Xu, Peipei; Sun, Zhizhong
2009
A second-order linearized difference scheme for a strongly coupled reaction-diffusion system. Zbl 1185.65142
Cao, Hai-Yan; Sun, Zhi-Zhong
2008
Convergence of a difference scheme for the heat equation in a long strip by artificial boundary conditions. Zbl 1133.65070
Han, Houde; Sun, Zhi-Zhong; Wu, Xiao-Nan
2008
...and 23 more Documents
### Cited by 1,774 Authors
79 Sun, Zhizhong; 47 Dehghan Takht Fooladi, Mehdi; 43 Liu, Fawang; 38 Xu, Da; 33 Abbaszadeh, Mostafa; 32 Sun, Haiwei; 30 Liu, Yang; 30 Wang, Hong; 29 Li, Hong; 29 Wang, Tingchun; 27 Zhang, Chengjian; 27 Zhang, Jiwei; 26 Vong, Seakweng; 24 Huang, Chengming; 24 Turner, Ian William; 24 Wang, Zhibo; 24 Zhang, Luming; 23 Liao, HongLin; 23 Wang, Yushun; 22 Gao, Guanghua; 21 Li, Dongfang; 21 Tang, Yifa; 21 Wang, Yuanming; 20 Jiang, Xiaoyun; 20 Zheng, Xiangcheng; 18 Deng, Weihua; 18 Yang, Xuehua; 17 Li, Meng; 17 Li, Xiaoli; 17 Zhang, Haixiang; 17 Zhang, Qifeng; 16 Karniadakis, George Em; 16 Li, Changpin; 16 Lu, Shujuan; 16 Yan, Yubin; 16 Zaky, Mahmoud A.; 16 Zeng, Fanhai; 15 Bu, Weiping; 15 Dai, Weizhong; 15 Xiao, Aiguo; 15 Zhao, Yongliang; 15 Zhou, Zhi; 14 Feng, Libo; 14 Gu, Xian-Ming; 14 Hao, Zhaopeng; 14 Hendy, Ahmed S.; 14 Lyu, Pin; 14 Mohebbi, Akbar; 13 Machado, José António Tenreiro; 13 Qiu, Wenlin; 13 Yin, Baoli; 13 Zhang, Zhimin; 13 Zhao, Yanmin; 12 Anh, Vo V.; 12 Chen, Minghua; 12 Deng, Dingwen; 12 Fu, Hongfei; 12 Omrani, Khaled; 12 Ren, Jincheng; 12 Rui, Hongxing; 12 Sun, Hongguang; 12 Wei, LeiLei; 12 Xu, Chuanju; 11 Alikhanov, Anatoly A.; 11 Băleanu, Dumitru I.; 11 Bhrawy, Ali Hassan; 11 Cai, Wenjun; 11 Cheng, Aijie; 11 Du, Rui; 11 Huang, Jianfei; 11 Jin, Bangti; 11 Lei, Siulong; 11 Li, Buyang; 11 Liang, Dong; 11 Qiao, Zhonghua; 11 Zhai, Shuying; 11 Zhang, Zhiyue; 10 Cao, Wanrong; 10 Chen, Hu; 10 Hu, Xiuling; 10 Liu, Huan; 10 Liu, Zhengguang; 10 Mei, Liquan; 10 Ng, Michael Kwok-Po; 10 Qiao, Leijie; 10 Ran, Maohua; 10 Wang, Dongling; 10 Wang, Jilu; 10 Wang, Pengde; 10 Wu, Xiaonan; 10 Zayernouri, Mohsen; 10 Zheng, Chunxiong; 9 Abdelkawy, Mohamed A.; 9 Cui, Mingrong; 9 Ding, Hengfei; 9 Guo, Boling; 9 Huang, Ting-Zhu; 9 Lin, Xuelei; 9 Ren, Lei; 9 Shi, Dongyang; ...and 1,674 more Authors
### Cited in 177 Serials
140 Applied Numerical Mathematics; 129 Computers & Mathematics with Applications; 128 Applied Mathematics and Computation; 126 Journal of Scientific Computing; 110 Journal of Computational Physics; 92 Journal of Computational and Applied Mathematics; 82 Numerical Algorithms; 72 International Journal of Computer Mathematics; 51 Computational and Applied Mathematics; 51 Advances in Difference Equations; 47 Applied Mathematics Letters; 40 Numerical Methods for Partial Differential Equations; 34 Mathematics and Computers in Simulation; 32 Communications in Nonlinear Science and Numerical Simulation; 29 Applied Mathematical Modelling; 26 SIAM Journal on Scientific Computing; 26 Fractional Calculus & Applied Analysis; 24 SIAM Journal on Numerical Analysis; 23 Advances in Applied Mathematics and Mechanics; 21 East Asian Journal on Applied Mathematics; 18 Journal of Applied Mathematics and Computing; 17 Advances in Computational Mathematics; 16 Engineering Analysis with Boundary Elements; 15 Communications on Applied Mathematics and Computation; 12 Computer Methods in Applied Mechanics and Engineering; 12 Mathematics of Computation; 12 Chaos, Solitons and Fractals; 12 Communications in Computational Physics; 12 Journal of Function Spaces; 11 Applicable Analysis; 11 Computational Methods in Applied Mathematics; 11 Science China. Mathematics; 11 AIMS Mathematics; 10 Nonlinear Dynamics; 10 Journal of Applied Analysis and Computation; 9 Numerische Mathematik; 9 Acta Mathematicae Applicatae Sinica. English Series; 9 Mathematical Problems in Engineering; 8 International Journal of Applied and Computational Mathematics; 7 Abstract and Applied Analysis; 7 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis; 7 Numerical Mathematics: Theory, Methods and Applications; 6 Journal of Mathematical Analysis and Applications; 6 Physica A; 6 BIT; 6 Journal of Computational Mathematics; 6 Inverse Problems in Science and Engineering; 5 Mathematical Methods in the Applied Sciences; 5 Calcolo; 5 Numerical Functional Analysis and Optimization; 5 Taiwanese Journal of Mathematics; 5 Lobachevskii Journal of Mathematics; 5 Discrete and Continuous Dynamical Systems. Series B; 5 Boundary Value Problems; 5 Advances in Mathematical Physics; 4 Computer Physics Communications; 4 Journal of Difference Equations and Applications; 4 Chaos; 4 Mathematical Modelling and Analysis; 4 International Journal of Computational Methods; 4 Computational Methods for Differential Equations; 3 International Journal of Numerical Methods for Heat & Fluid Flow; 3 ETNA. Electronic Transactions on Numerical Analysis; 3 Journal of Vibration and Control; 3 Discrete Dynamics in Nature and Society; 3 International Journal of Nonlinear Sciences and Numerical Simulation; 3 Differential Equations; 3 Bulletin of the Malaysian Mathematical Sciences Society. Second Series; 3 Mediterranean Journal of Mathematics; 3 International Journal of Numerical Analysis and Modeling; 3 Discrete and Continuous Dynamical Systems. Series S; 3 Open Mathematics; 2 Computers and Fluids; 2 Inverse Problems; 2 Journal of Mathematical Physics; 2 Lithuanian Mathematical Journal; 2 ZAMP. Zeitschrift für angewandte Mathematik und Physik; 2 Demonstratio Mathematica; 2 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods; 2 SIAM Journal on Control and Optimization; 2 Acta Applicandae Mathematicae; 2 Computational Mechanics; 2 Japan Journal of Industrial and Applied Mathematics; 2 Journal of Mathematical Sciences (New York); 2 Filomat; 2 Journal of Inverse and Ill-Posed Problems; 2 Complexity; 2 Differential Equations and Dynamical Systems; 2 Journal of Applied Mathematics; 2 Multiscale Modeling & Simulation; 2 Structural and Multidisciplinary Optimization; 2 Journal of Statistical Mechanics: Theory and Experiment; 2 Waves in Random and Complex Media; 2 Journal of Nonlinear Science and Applications; 2 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM; 2 Journal of Mathematics; 2 AMM. Applied Mathematics and Mechanics. (English Edition); 2 Journal of Computational and Theoretical Transport; 1 Acta Mechanica; 1 International Journal of Control; ...and 77 more Serials
### Cited in 38 Fields
1,648 Numerical analysis (65-XX) 1,276 Partial differential equations (35-XX) 335 Real functions (26-XX) 147 Fluid mechanics (76-XX) 120 Ordinary differential equations (34-XX) 74 Integral equations (45-XX) 52 Statistical mechanics, structure of matter (82-XX) 38 Mechanics of deformable solids (74-XX) 29 Classical thermodynamics, heat transfer (80-XX) 27 Special functions (33-XX) 26 Approximations and expansions (41-XX) 25 Probability theory and stochastic processes (60-XX) 24 Linear and multilinear algebra; matrix theory (15-XX) 22 Biology and other natural sciences (92-XX) 19 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 18 Integral transforms, operational calculus (44-XX) 18 Calculus of variations and optimal control; optimization (49-XX) 17 Systems theory; control (93-XX) 16 Quantum theory (81-XX) 14 Difference and functional equations (39-XX) 14 Operator theory (47-XX) 13 Optics, electromagnetic theory (78-XX) 12 Dynamical systems and ergodic theory (37-XX) 10 Harmonic analysis on Euclidean spaces (42-XX) 7 Information and communication theory, circuits (94-XX) 5 Functional analysis (46-XX) 5 Computer science (68-XX) 4 Number theory (11-XX) 4 Statistics (62-XX) 4 Mechanics of particles and systems (70-XX) 4 Geophysics (86-XX) 3 Operations research, mathematical programming (90-XX) 2 General and overarching topics; collections (00-XX) 1 Topological groups, Lie groups (22-XX) 1 Potential theory (31-XX) 1 Differential geometry (53-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Mathematics education (97-XX)
|
2022-10-04 23:23:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5395787954330444, "perplexity": 6492.673572204019}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00129.warc.gz"}
|
https://index.mirasmart.com/ISMRM2019/PDFfiles/3564.html
|
3564
Investigating the Benefits of Incorporating Higher Order Spherical Harmonics in Axon Diameter Measurements
Qiuyun Fan1,2, Aapo A Nummenmaa1,2, Qiyuan Tian1,2, Ned A Ohringer1,2, Thomas Witzel1,2, Lawrence L Wald1,2,3, Bruce R Rosen1,2,3, and Susie Y Huang1,2,3
1Radiology, Massachusetts General Hospital, Charlestown, MA, United States, 2Harvard Medical School, Boston, MA, United States, 3Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States
Synopsis
Separating out the scalar and orientation-dependent components of the diffusion MRI signal offers the possibility of increasing sensitivity to microscopic tissue features unconfounded by the fiber orientation. Recent approaches to estimating apparent axon diameter in white matter have employed spherical averaging to avoid the confounding effects of fiber crossings and dispersion at the expense of losing sensitivity to effective compartment size. Here, we investigate the feasibility and benefits of incorporating higher-order spherical harmonic (SH) components into a rotationally invariant axon diameter estimation framework and demonstrate improved precision of axon diameter estimation in the in vivo human brain.
Introduction
Approaches for estimating microstructural tissue properties by diffusion MRI have evolved over the past decade, with increasing recognition that the scalar parameters describing biophysical features of tissue, e.g., volume fraction, diffusivity, and effective compartment size, and the orientation-dependent parameters describing the fiber orientation distribution function, carry separate and complementary information 1-6. Recent approaches to estimating apparent axon diameter in white matter using high b-value data have employed spherical averaging to avoid the confounding effects of fiber crossings and orientation dispersion 7,16. While spherical averaging enables estimation of biophysical parameters independent of crossing fibers, averaging the diffusion MRI signal over all directions reduces sensitivity to apparent axon diameter by reducing the contribution of the signal perpendicular to the principal fiber direction. The goal of this study is to investigate the feasibility and benefits of incorporating higher-order spherical harmonic (SH) components into the axon diameter estimation framework.
Theory
White matter is modeled as composed of two compartments: restricted diffusion in cylindrical axons modeled using the Gaussian phase distribution approximation8, and extra-axonal hindered diffusion modeled as Gaussian diffusion with a parallel diffusivity of $1.7\times10^{-3}\,\mathrm{mm^2/s}$. The signal was projected into the SH space (Figure 1), so that $s_{lm}=c_{l}\cdot p_{lm}$, where $s_{lm}$, $c_{l}$, and $p_{lm}$ are the SH coefficients of the diffusion signal, the response kernel (Figure 2), and the orientation distribution function (ODF), respectively. Summing each side over all $T$ experiments, $\sum_{j=1}^T s_{lm}=\sum_{j=1}^T c_{l}\cdot p_{lm}$, and dividing each side by its sum cancels $p_{lm}$, yielding a relationship between the normalized signal and the normalized response kernel, i.e., $ns_{l} = nc_{l}\equiv \frac{c_{l}}{\sum_{j=1}^T c_{l}}$, an ODF-independent form of the response kernel. Specifically, $c_{l}=f_{r}\cdot c_{r}(a)+f_{h}\cdot c_{h}(D_{h})$, where $f_{r}+f_{h}=1$, $c_{r}$ is the SH representation of the van Gelderen model8,9, and $c_{h}$ is the SH representation of the tensor model10.
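The cancellation of $p_{lm}$ under this normalization can be sketched numerically (illustrative only; the stand-in kernel values below are random numbers, not the van Gelderen or tensor models):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 8                                # number of experiments (e.g. gradient strengths)
c_l = rng.uniform(0.1, 1.0, size=T)  # stand-in kernel coefficients c_l, one per experiment
p_lm = 0.37                          # unknown ODF coefficient, identical across experiments

s_lm = c_l * p_lm                    # signal coefficients: s_lm = c_l * p_lm
ns_l = s_lm / s_lm.sum()             # normalized signal
nc_l = c_l / c_l.sum()               # normalized kernel (ODF-independent)

print(np.allclose(ns_l, nc_l))       # True: p_lm has cancelled out
```

Because $p_{lm}$ multiplies every experiment identically, it divides out of the normalized signal, which is the rotational-invariance trick the Theory section relies on.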
Method
A healthy subject was scanned on the 3T Connectome scanner with 300 mT/m maximum gradient strength using a custom-made 64-channel head coil11. Real-valued diffusion data was acquired to avoid buildup of the noise floor12. Sagittal 2-mm isotropic resolution diffusion-weighted spin-echo EPI images were acquired with whole-brain coverage. The following parameters were used: TR/TE=4000/77ms, δ=8ms, Δ=19/49ms, 8 diffusion gradient strengths linearly spaced from 30-290mT/m per Δ, 32-64 diffusion directions, parallel imaging (R=2) and simultaneous multislice (MB=2). Diffusion data were corrected for susceptibility and eddy current distortions using the TOPUP13 and EDDY14,15 tools in FSL.
The normalized kernel $nc_{l}$ was evaluated in the 3D parameter space X=(a, Dh, fr), and the best fit to the model was found by searching on the grid by minimizing the “energy” function1 (Figure 3), i.e., $\widetilde{x}=\arg\min_{x \in X}\sum_{j=1}^{T}\sum_{l=-L}^{L} \left(ns_{l}-nc_{l}(x)\right)^{2}$
Simulation data was generated by adding 100 samples of noise at SNR=20. Voxel-wise fitting for axon diameter a, restricted fraction fr, and hindered diffusivity Dh was performed.
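A minimal one-parameter analogue of this grid search (a toy sketch; the actual search runs over the full 3D space (a, Dh, fr) with the model kernels described above):

```python
import numpy as np

def energy(ns, nc):
    # Sum of squared differences between normalized signal and normalized kernel
    return np.sum((ns - nc) ** 2)

def nc_model(x, t):
    # Toy normalized "kernel": a decaying exponential, normalized to unit sum
    k = np.exp(-x * t)
    return k / k.sum()

t = np.linspace(0.1, 1.0, 10)
ns = nc_model(0.6, t)                 # synthetic "measured" normalized signal

grid = np.linspace(0.1, 2.0, 191)     # brute-force grid over the parameter
best = min(grid, key=lambda x: energy(ns, nc_model(x, t)))
print(best)                           # close to the ground-truth value 0.6
```

The real fit does the same thing with three grid axes instead of one, which is why combining L=0 and L=2 terms (more independent constraints in the energy) shrinks the low-energy region.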
Results
Simulation data shows that by combining L=0 and L=2 signal components, the standard deviation in axon diameter estimates decrease compared to using L=0 components alone (Figure 4). In the in vivo data, the feature of larger apparent axon diameter is more pronounced in the corticospinal tract in the L0+L2 representation by incorporating higher-order SH components (Figure 5).
Discussion
We propose a method for measurement of compartment size and volume fraction by supplementing spherical averaging with higher-order SH components. While spherical averaging is robust to fiber crossings and image noise, only 0th order signal components were used with higher frequency signal components being wasted. Similar approaches have been pursued previously to make use of the additional information available in multi-shell data 1,3. Our results indicate that it is feasible to incorporate the higher-order SH in measuring compartment size by transforming the signal equation into an ODF invariant form. This approach can be applied to whole brain analyses, with the potential to improve the precision of axon diameter estimates.
Acknowledgements
This work was funded by an NIH Blueprint for Neuroscience Research Grant: U01MH093765, as well as NIH funding from NCRR P41EB015896, NIBIB R01EB006847, NIBIB R00EB015445, NINDS K23NS096056, NINDS K23NS078044, NIH/NCRR/NIBIB P41EB015896 and Instrumentation Grants S10-RR023401, S10-RR023043, and S10-RR019307. Funding support was also received from the National Multiple Sclerosis Society, the American Heart Association Postdoctoral Fellowship Award (17POST33670452), a Radiological Sciences of North America Research Resident Grant, the Conrad N. Hilton Foundation and the MGH Executive Committee on Research Fund for Medical Discovery Fellowship Award.
References
1. Novikov, D.S., Veraart, J., Jelescu, I.O. & Fieremans, E. Rotationally-invariant mapping of scalar and orientational metrics of neuronal microstructure with diffusion MRI. NeuroImage 174, 518-538 (2018).
2. Veraart, J., Fieremans, E. & Novikov, D. Quantifying neuronal microstructure integrity with TE dependent Diffusion Imaging (TEdDI) in Proc. ISMRM, Vol. 0836 (Honolulu, HI, USA, 2017).
3. Reisert, M., Kellner, E., Dhital, B., Hennig, J. & Kiselev, V.G. Disentangling micro from mesostructure by diffusion MRI: A Bayesian approach. NeuroImage 147, 964-975 (2017).
4. Kaden, E., Kelm, N.D., Carson, R.P., Does, M.D. & Alexander, D.C. Multi-compartment microscopic diffusion imaging. Neuroimage 139, 346-359 (2016).
5. Kaden, E., Kruggel, F. & Alexander, D.C. Quantitative mapping of the per-axon diffusion coefficients in brain white matter. Magn Reson Med 75, 1752-1763 (2016).
6. Jespersen, S.N., et al. Neurite density from magnetic resonance diffusion measurements at ultrahigh field: Comparison with light microscopy and electron microscopy. NeuroImage 49, 205-216 (2010).
7. Fan, Q., et al. Axon Diameter Mapping Independent of Crossing Structures using Spherical Mean Technique. in Proc. ISMRM 5244 (Paris, France, 2018).
8. van Gelderen, P., Moonen, C.T. & Duyn, J.H. Susceptibility insensitive single shot MRI combining BURST and multiple spin echoes. Magn Reson Med 33, 439-442 (1995).
9. Panagiotaki, E., et al. Compartment models of the diffusion MR signal in brain white matter: a taxonomy and comparison. Neuroimage 59, 2241-2254 (2012).
10. Anderson, A.W. Measurement of fiber orientation distributions using high angular resolution diffusion imaging. Magn Reson Med 54, 1194-1206 (2005).
11. Setsompop, K., et al. Pushing the limits of in vivo diffusion MRI for the Human Connectome Project. NeuroImage 80, 220-233 (2013).
12. Eichner, C., et al. Real diffusion-weighted MRI enabling true signal averaging and increased diffusion contrast. Neuroimage 122, 373-384 (2015).
13. Andersson, J.L., Skare, S. & Ashburner, J. How to correct susceptibility distortions in spin-echo echo-planar images: application to diffusion tensor imaging. Neuroimage 20, 870-888 (2003).
14. Andersson, J.L. & Sotiropoulos, S.N. An integrated approach to correction for off-resonance effects and subject movement in diffusion MR imaging. NeuroImage 125, 1063-1078 (2016).
15. Andersson, J.L.R., Graham, M.S., Zsoldos, E. & Sotiropoulos, S.N. Incorporating outlier detection and replacement into a non-parametric framework for movement and distortion correction of diffusion MR images. NeuroImage 141, 556-572 (2016).
16. Veraart J, Fieremans E, Novikov DS. On the scaling behavior of water diffusion in human brain white matter. NeuroImage. 2018 Oct 4;185:379-87.
Figures
Figure 1. Illustration of the signal representation using spherical harmonics. The white matter signal ($S$) can be modeled as a mixture of restricted ($S_r$) and hindered ($S_h$) diffusion signal, which can be represented by the spherical harmonics ($Y_{lm}$) and corresponding coefficients ($s_{lm}$). When the fiber axis is aligned with the primary axis of the coordinate system, $s_{lm}=0$ for all $m\neq 0$, so that the signal can be fully represented by $s_{l0}$, $l=0,2,4,\dots$ $s_{00}$ and $s_{20}$ as a function of gradient strength are plotted for the restricted and hindered compartments. Note that the L=0 component is identical to the spherical mean signal.
Figure 2. The response kernel in the spherical harmonic space. The restricted diffusion response kernel (cr) as a function of diameter is shown on the left (a-d), and the hindered response kernel (ch) as a function of hindered diffusivity is shown on the right (e-h).
Figure 3. Exemplary “energy” function for different spherical harmonic orders L. The “energy” function was evaluated for the ground-truth parameter $\{a, D_h\}=\{6\,\mu\mathrm{m}, 10^{-9}\,\mathrm{m^2/s}\}$ (marked with the white cross) and its neighborhood in the parameter space $X = \{a, D_h\}$. The global minimum of the “energy” appears at the ground-truth value for both L=0 and L=2 components, each of which shows a slightly different pattern. By combining the two components (L0+L2), the “low-energy” region shrinks, indicating that incorporating complementary contrasts originating from different orders is helpful for finding the global minimum.
Figure 4. Simulation results of estimated axon diameters and restricted volume fractions. The in vivo imaging protocol was used to generate the noise-free data (keeping fr=0.7 for the left figure and keeping diameter=6 μm for the right), and 100 samples of noise were added with an SNR=20. The mean value for the 100 trials is plotted as solid lines, and the shading bounded by dotted lines denotes the standard deviation. By combining L=0 and L=2 components (in green), the standard deviation in axon diameter estimates decreases compared to using L=0 components alone (in orange), especially in the small diameter regime.
Figure 5. Estimated apparent axon diameter map of in vivo human data. A sagittal slice through the right corticospinal tract was shown for the map calculated using L=0 components only (left) and that obtained by combining L=0 and L=2 components (right). Overall, the two maps show similar patterns, but there is a subtle improvement in contrast between the larger apparent axon diameter in the corticospinal tract and surrounding white matter on the L0+L2 map, indicating that including higher-order spherical harmonics may provide better sensitivity to axon diameter compared to the spherical mean approach.
Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)
3564
|
2022-05-28 13:06:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6789858937263489, "perplexity": 6851.2081535830175}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016853.88/warc/CC-MAIN-20220528123744-20220528153744-00036.warc.gz"}
|
http://clay6.com/qa/13712/two-uniform-circular-discs-having-the-same-mass-and-the-same-thickness-but-
|
# Two uniform circular discs having the same mass and the same thickness but different radii are made from different materials. The disc with the smaller rotational inertia is:
(a) the one made from the more dense material
(b) the one made from the less dense material
(c) the disc with the larger angular velocity
(d) the disc with the larger torque
## 1 Answer
(a) the one made from the more dense material
answered Nov 7, 2013 by
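Why (a): for a uniform disc of mass $M$, thickness $t$, density $\rho$ and radius $R$ (a short derivation sketch),

```latex
M = \rho \pi R^2 t \;\Rightarrow\; R^2 = \frac{M}{\pi \rho t},
\qquad
I = \frac{1}{2} M R^2 = \frac{M^2}{2 \pi \rho t}
```

With $M$ and $t$ the same for both discs, $I \propto 1/\rho$: the denser material gives the smaller radius and hence the smaller rotational inertia.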
|
2017-01-20 11:59:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9308980107307434, "perplexity": 7940.546132335132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00081-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/why-0-0-not-1.871862/
|
# Why 0/0 not = 1
Ahmed Jubair
if( 2/2=1,5/5=1) then it must be that 0/0=1.but Again,it couldn't be (-1) also i think because if its-5/5=5,-2/2=-2.but 0 have no value and its the low valuenumber.so no need a( - )before it.so(- 0/0 is not =-1)
then why its undefined???why not 1
ProfuselyQuarky
Gold Member
Dividing by 0 is undefined.
You can't divide anything by 0 .... including 0!
blue_leaf77
Homework Helper
2/2=1 because 2=2x1, likewise 8/8=1 because 8=8x1 is satisfied. But now, which number when multiplied by zero yields zero? It's anything, 0 = 0x3 = 0x100 = 0x1000. Then how will you define 0/0?
phyzguy
Because there are many ways in which the limit 0/0 could be reached. Your comment (2/2=1, 5/5=1, ...) is implicitly defining:
$$\frac{0}{0}=\lim_{x\to 0} \frac{x}{x} = 1$$
But this is not the only possibility. Why couldn't I define:
$\frac{0}{0}=\lim_{x\to 0} \frac{2x}{x} = 2$ or : $\frac{0}{0}=\lim_{x\to 0} \frac{x}{2x} = 1/2$
or in an infinite number of other ways? That is why it is undefined.
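These limits are easy to check numerically; a quick sketch (illustrative only):

```python
# Each ratio below is "0/0" in the limit x -> 0, yet the three
# expressions settle on three different values, which is why
# no single value can be assigned to 0/0.
for x in [1.0, 0.1, 0.01, 0.001]:
    print(x, x / x, (2 * x) / x, x / (2 * x))
# x/x stays at 1, (2x)/x stays at 2, x/(2x) stays at 0.5
```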
Mark44
Mentor
Ahmed Jubair said:
if( 2/2=1,5/5=1) then it must be that 0/0=1.but Again,it couldn't be (-1) also i think because if its-5/5=5,-2/2=-2.
I don't understand what you're doing here. -5/5 = -1, not 5, and -2/2 = -1, not 2
Ahmed Jubair said:
but 0 have no value
Certainly 0 has a value.
Ahmed Jubair said:
and its the low valuenumber
???
It's the smallest number that isn't negative.
Ahmed Jubair said:
.so no need a( - )before it.so(- 0/0 is not =-1)
then why its undefined???why not 1
fresh_42
Mentor
I prefer to say: ##0## is no element of the multiplicative group. Therefore the question whether there is an inverse or not simply doesn't exist.
One could now object: But ##1## as the neutral element of multiplication is part of the additive group, it even generates it.
My answer then would be: ##1## has a natural usage for addition, ##0## hasn't for multiplication. The definition ##0 \cdot 1 = 0## simply is a necessity for the distributive law which is the only connection between both operations.
ProfuselyQuarky
Gold Member
I prefer to say: ##0## is no element of the multiplicative group. Therefore the question whether there is an inverse or not simply doesn't exist.
One could now object: But ##1## as the neutral element of multiplication is part of the additive group, it even generates it.
My answer then would be: ##1## has a natural usage for addition, ##0## hasn't for multiplication. The definition ##0 \cdot 1 = 0## simply is a necessity for the distributive law which is the only connection between both operations.
That's a nice explanation. I really love how such a simple question can have such a variety of legitimate answers.
DrewD
Division is a repeated subtraction and you keep doing it until you reach zero or a dead end (remainder)
So for example:
20 -5 -5 -5 -5 = 0
So the result of 20/5 = 4 (how many times did you repeat the 5?)
Now for the zero:
0 - 0 = 0 ... Well I reached zero (so 0/0 = 1)
0 - 0 - 0 - 0 = 0 I reached zero too ( 0/0 = 3 )
So you can see that you can make infinite answers. So when you divide 0/0, you can't just choose one of the answers because Why not the others too?
That is how I see it which is similar to Blue_Leaf way
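That repeated-subtraction picture can be written out directly (a sketch; `repeated_sub_divide` is a name made up here for illustration):

```python
def repeated_sub_divide(a, b):
    """Divide a by b by counting how many times b can be subtracted
    until a reaches exactly 0 (non-negative inputs, exact division only)."""
    if b == 0:
        # Any count of subtractions leaves 0 unchanged, so no unique answer exists
        raise ZeroDivisionError("0 can be subtracted any number of times")
    count = 0
    while a > 0:
        a -= b
        count += 1
    return count

print(repeated_sub_divide(20, 5))  # -> 4
```

For 0/0 the stopping condition is already met after zero subtractions, but also after one, three, or a hundred, which is exactly the ambiguity described above.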
One could also use a proof by contradiction here.
Let a=b, if we multiply both sides by a..
a^2=ab ,now subtract b^2 from both sides.
a^2-b^2=ab-b^2
Now simply factorise:
(a-b)(a+b)=b(a-b) , now divide both sides by (a-b).
So we're left with, (a+b)=b
Using our original definition of a=b, we can simplfy this to 2b=b which implies that 2=1, which is mathematically incorrect. The mathematics of my steps were valid until the point where I divided both sides by (a-b), [a-b=0]. So as you can already tell, dividing anything by zero is not possible. Many other good reasons have been explained here. Try a graphical approach if you're really interested, and plot as many graphs as you can that pass through (0,0), the trend you will notice is that there are an infinite amount of ways to approach 0 and thus we cannot give it's division a value.
It's an axiom that division by zero is undefined, conceptually it makes sense because it makes no sense to ask ' how much nothing goes into something '.
It's also interesting to note that you cant divide any number by another and obtain a non-approximate zero.
Pure mathematics makes my head hurt.
fresh_42
Mentor
It's an axiom that division by zero is undefined, ...
It is not. 0 has nothing to do with multiplication. There is no need for an inverse!
It is not. 0 has nothing to do with multiplication. There is no need for an inverse!
Hmm I was using 'axiom' in it's broadest sense though your point is well taken, thanks.
Mark44
Mentor
One could also use a proof by contradiction here.
Let a=b, if we multiply both sides by a..
a^2=ab ,now subtract b^2 from both sides.
a^2-b^2=ab-b^2
Now simply factorise:
(a-b)(a+b)=b(a-b) , now divide both sides by (a-b).
No, this isn't valid. Since a = b, by assumption, then a - b = 0, so you're dividing by zero.
If you do that, all bets are off, which you explain below.
whit3r0se- said:
So we're left with, (a+b)=b
Using our original definition of a=b, we can simplfy this to 2b=b which implies that 2=1, which is mathematically incorrect. The mathematics of my steps were valid until the point where I divided both sides by (a-b), [a-b=0]. So as you can already tell, dividing anything by zero is not possible. Many other good reasons have been explained here. Try a graphical approach if you're really interested, and plot as many graphs as you can that pass through (0,0), the trend you will notice is that there are an infinite amount of ways to approach 0 and thus we cannot give it's division a value.
Mark44
Mentor
It's an axiom that division by zero is undefined, conceptually it makes sense because it makes no sense to ask ' how much nothing goes into something '.
It's also interesting to note that you cant divide any number by another and obtain a non-approximate zero.
"non-approximate zero"? What is that?
If you divide any nonzero number by itself, you get 1.
Marcus-H said:
Pure mathematics makes my head hurt.
Mark44
Mentor
Time to put this thread to bed. The question has been asked and answered. Division by zero is undefined, and that's all you need to say.
|
2021-03-06 06:26:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8206555843353271, "perplexity": 965.7042700338807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374391.90/warc/CC-MAIN-20210306035529-20210306065529-00135.warc.gz"}
|
http://mathhelpforum.com/math-software/103878-what-command-whould-i-use.html
|
Math Help - What command would I use?
1. What command would I use?
Hi all, I'm not sure if this is the right place to ask, but I hope so ^^
I'm looking for a solution to my problem. I'm currently working in MATLAB with a while loop which calculates something for me, and it needs to do this 10000 times. At the end of each round of the loop it produces a number, for example 10, and I'm looking for a way to save that number.
The code I wrote is this (note: the ____ are just there to make reading the matrix easier; I left them out in the original code)
____________________________
%inserting values
R = 12000
C = 1.6e-9
n = 0
%the while loop
while n<5
n = n+1;
f(n)=n;
RC= 1/(j*2*pi*f(n)*C);
tRC=2/(j*2*pi*f(n)*C);
%insert of matrix 1
M1 = _[0___________(2/R)+(1/tRC)_(-1/R);
______(2/RC)+(2/R)__0____________(1/RC);
______(-1/RC)______(-1/R)________(2/RC)];
%insert of matrix 2
M2 = __[10/R;
_______10/RC;
_______0];
%calculate unknown values in the matrix
M = M1\M2
Vin = 10
%grab the value i need or my calculation
Vout= M(3,1)
H = Vout/Vin
end
_____________________________
so, now im looking for an way to store the value of H so i can plot into an graph later, any ideas how i can do this?
2. Hi
You could make a vector which you create before the while loop.
Lets call it A.
A=[]; %Create empty vector
Now in each loop, write
A=[A 'Your new number you wish to save'];
This way you should be able to save all the numbers, one new with every loop.
3. it works smoothly , thanks a lot
4. Assuming H is a scalar:
Code:
%inserting values
R = 12000
C = 1.6e-9
n = 0
HH=[]; %empty array to hold the output
%the while loop
while n<5
n = n+1;
f=n;
RC= 1/(j*2*pi*f*C);
tRC=2/(j*2*pi*f*C);
%insert of matrix 1
M1 = [0 (2/R)+(1/tRC) (-1/R);
(2/RC)+(2/R) 0 (1/RC);
(-1/RC) (-1/R) (2/RC)];
%insert of matrix 2
M2 = [10/R;
10/RC;
0];
%calculate unknown values in the matrix
M = M1\M2
Vin = 10
%grab the value i need or my calculation
Vout= M(3,1)
H = Vout/Vin
HH=[HH,H]; % stuff result into extended output array
end
CB
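For comparison, the same accumulate-in-a-loop pattern in Python/NumPy (an illustrative port of the code above, not a verified reproduction of the original circuit analysis; `numpy.linalg.solve` plays the role of MATLAB's `M1\M2`):

```python
import numpy as np

R = 12000.0
C = 1.6e-9
Vin = 10.0
HH = []  # collected transfer-function values, one per frequency

for n in range(1, 6):  # the original "while n<5; n=n+1" visits n = 1..5
    f = float(n)
    RC = 1.0 / (1j * 2 * np.pi * f * C)
    tRC = 2.0 / (1j * 2 * np.pi * f * C)

    M1 = np.array([[0.0,            2 / R + 1 / tRC, -1 / R],
                   [2 / RC + 2 / R, 0.0,              1 / RC],
                   [-1 / RC,        -1 / R,           2 / RC]])
    M2 = np.array([10 / R, 10 / RC, 0.0])

    M = np.linalg.solve(M1, M2)  # equivalent of MATLAB's M1\M2
    Vout = M[2]                  # MATLAB M(3,1) -> 0-based index 2
    HH.append(Vout / Vin)        # same grow-and-append idea as HH=[HH,H]

print(len(HH), HH[0])
```

The list `HH` ends up with one complex value of H per loop iteration, ready for plotting.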
|
2015-04-26 15:54:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32578471302986145, "perplexity": 3965.091947689311}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246654687.23/warc/CC-MAIN-20150417045734-00289-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://socratic.org/questions/the-discriminant-of-a-quadratic-equation-is-5-which-answer-describes-the-number-
|
# The discriminant of a quadratic equation is -5. Which answer describes the number and type of solutions of the equation: 1 complex solution, 2 real solutions, 2 complex solutions, or 1 real solution?
May 24, 2017
Your quadratic equation has $2$ complex solutions.
#### Explanation:
The discriminant of a quadratic equation can only give us information about an equation of the form:
$y = a {x}^{2} + b x + c$ or a parabola.
Because the highest degree of this polynomial is 2, it must have no more than 2 solutions.
The discriminant is simply the stuff underneath the square root symbol ($\pm \sqrt{\text{ }}$), but not the square root symbol itself.
$\pm \sqrt{{b}^{2} - 4 a c}$
If the discriminant, ${b}^{2} - 4 a c$, is less than zero (i.e., any negative number), then you would have a negative under a square root symbol. Negative values under square roots are complex solutions. The $\pm$ symbol indicates that there is both a $+$ solution and a $-$ solution.
Therefore, your quadratic equation must have $2$ complex solutions.
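The classification above is easy to mechanize. A small Python sketch (my own, not part of the original answer; the function name is hypothetical):

```python
def classify_roots(a, b, c):
    """Classify the solutions of a*x^2 + b*x + c = 0 via the discriminant b^2 - 4ac."""
    d = b * b - 4 * a * c
    if d > 0:
        return "2 real solutions"
    if d == 0:
        return "1 real solution"
    return "2 complex solutions"  # negative discriminant
```

For instance, $x^2 + x + 1.5 = 0$ has discriminant $1 - 6 = -5$, so it falls in the "2 complex solutions" case.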
|
2021-06-17 06:47:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8392560482025146, "perplexity": 289.5248193217197}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487629209.28/warc/CC-MAIN-20210617041347-20210617071347-00472.warc.gz"}
|
http://mathhelpforum.com/advanced-algebra/202079-compact.html
|
# Math Help - which is compact
1. ## which is compact
Which of the following sets are compact, and why? Please explain how.
1) {(x,y):|x|<=1, |y|>=2}
2) {(x,y):|x|<=1, |y|^2>=2}
3) {(x,y):x^2+3y^2<=5}
4) {(x,y):x^2<=y^2 +5}
2. ## Re: which is compact
All you have to do is apply the Heine–Borel theorem. Just see the introduction to the Wikipedia article on the Heine–Borel theorem. Draw the sets and see if they are closed and bounded.
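By Heine–Borel, a subset of the plane is compact iff it is closed and bounded. All four sets are closed (each is cut out by non-strict inequalities on continuous functions), so boundedness decides. A quick Python probe (my own sketch, not from the thread; the helper names are mine) flags the unbounded ones — only set 3, the filled ellipse, is compact:

```python
# Membership predicates for the four sets in the question.
sets = {
    1: lambda x, y: abs(x) <= 1 and abs(y) >= 2,
    2: lambda x, y: abs(x) <= 1 and y * y >= 2,
    3: lambda x, y: x * x + 3 * y * y <= 5,
    4: lambda x, y: x * x <= y * y + 5,
}

def looks_unbounded(member, R=1e6):
    """Crude probe: does the set contain a point very far from the origin?"""
    probes = [(0, R), (0, -R), (R, 0), (-R, 0), (R, R)]
    return any(member(x, y) for x, y in probes)

unbounded = {k for k, m in sets.items() if looks_unbounded(m)}  # {1, 2, 4}
```

Sets 1, 2, and 4 contain points with $|y|$ arbitrarily large, so they fail boundedness; set 3 is closed and bounded, hence compact.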
|
2016-07-31 00:42:17
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8890931606292725, "perplexity": 1834.5572090337755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469258944256.88/warc/CC-MAIN-20160723072904-00018-ip-10-185-27-174.ec2.internal.warc.gz"}
|
https://www.coursehero.com/file/p1e5i11/Solution-Labour-Efficiency-Variance-Standard-wage-rate-x-Standard-hours-Actual/
|
Solution Labour Efficiency Variance Standard wage rate x Standard hours Actual
# Solution labour efficiency variance standard wage
Solution: Labour Efficiency Variance = Standard wage rate x (Standard hours – Actual hours) = 4 x (15,300 – 15,000) = 1,200 (Favourable)
Idle Time Variance It is a sub-variance of the Wage Efficiency or Time Variance. Employees may remain idle during paid hours due to abnormal circumstances like strikes, lock-outs, power failure, etc. The standard cost of such idle time is called the Idle Time Variance. It is always adverse or unfavourable. It can be computed using the formula: Idle Time Variance = Idle Hours x Standard Rate per hour. If there are idle hours, the actual hours used in the mix variance and yield variance will be reduced by the idle hours. Revised standard hours will also be calculated on the adjusted actual hours. But in the calculation of the efficiency and rate variances, total actual hours will be taken.
Labour Mix Variance The composition of the actual gang of labour may differ from the composition of the standard gang due to a shortage of a particular grade of workers or some other reason. It is that portion of the wages variance which is due to the difference between the actual labour grades utilized and the standard labour grades specified. It can be computed using the formula: Labour Mix Variance = (Revised Standard labour hours – AH) x Standard Wage rate, where Revised Standard hours = (Total actual hours ÷ Total standard hours) x SH.
Labour Yield Variance The Labour Yield Variance occurs when there is a difference between standard output and actual output. It is that portion of the Labour Efficiency Variance which is due to the difference between the actual yield obtained and the standard yield specified. It can be computed using the formula: Labour Yield Variance = Standard labour cost per unit x (Standard yield or output for actual mix – Actual yield or output). Standard yield is the output which should result from the input of the actual hours mix. Standard labour cost per unit = Total cost of standard mix of labour ÷ Net standard output.
Example 11 A gang of workers usually consists of 10 men, 5 women and 5 boys in a factory. They are paid at standard hourly rates of Rs. 1.25, Rs. 0.80 and Rs. 0.70 respectively. In a normal week of 40 hours the gang is expected to produce 1000 units of output. In certain week, the gang consisted of 13 men, 4 women and 3 boys. Actual wages were paid at the rates of Rs. 1.20, Rs. 0.85 and Rs. 0.65 respectively. Two hours were lost due to abnormal idle time and 960 units of output were produced. Calculate various labour variances.
Solution 11

| Workers | Standard Hours (Workers x week) | Standard Rate (Rs.) | Standard Amount (Rs.) | Actual Hours (Workers x week) | Actual Rate (Rs.) | Actual Amount (Rs.) |
| --- | --- | --- | --- | --- | --- | --- |
| Men | 400 | 1.25 | 500 | 520 | 1.20 | 624 |
| Women | 200 | 0.80 | 160 | 160 | 0.85 | 136 |
| Boys | 200 | 0.70 | 140 | 120 | 0.65 | 78 |
| Total | 800 | | 800 | 800 | | 838 |

Solution: Direct Labour Cost Variance = Standard cost for actual output – actual cost. Standard cost for actual output = Standard cost per unit x actual output = Rs. 800/1000 units x 960 units = Rs. 768. DLCV = 768 – 838 = Rs. 70 (A) Continued…
Solution 11 Solution: Direct Labour Rate Variance = Actual hours x (Standard wage rate – actual wage rate). Men = 520 x (1.25 – 1.20) = Rs. 26 (F); Women = 160 x (0.80 – 0.85) = Rs. 8 (A); Boys = 120 x (0.70 – 0.65) = Rs. 6 (F); Total = Rs. 24 (F). Direct Labour efficiency variance = Standard wage rate x (standard time for actual output – actual time paid for). Continued….
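The two variances computed in Solution 11 can be checked with a few lines of Python (my own sketch, not from the slides; the variable names are mine):

```python
# Figures from Example 11: (standard hours, standard rate, actual hours, actual rate).
gang = {
    "men":   (400, 1.25, 520, 1.20),
    "women": (200, 0.80, 160, 0.85),
    "boys":  (200, 0.70, 120, 0.65),
}
std_output, actual_output = 1000, 960  # units per normal week / units this week

std_cost = sum(sh * sr for sh, sr, _, _ in gang.values())     # Rs. 800
actual_cost = sum(ah * ar for _, _, ah, ar in gang.values())  # Rs. 838

# Direct Labour Cost Variance = standard cost for actual output - actual cost.
dlcv = std_cost / std_output * actual_output - actual_cost    # -70 => Rs. 70 adverse

# Direct Labour Rate Variance = actual hours x (standard rate - actual rate).
dlrv = sum(ah * (sr - ar) for _, sr, ah, ar in gang.values()) # +24 => Rs. 24 favourable
```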
#### You've reached the end of your free preview.
Want to read all 61 pages?
• Fall '17
• nisha
• Cost Accounting, Variance, Direct material price variance, Rs., actual output, standard output
|
2020-10-22 07:10:58
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8137066960334778, "perplexity": 3842.566988992753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878921.41/warc/CC-MAIN-20201022053410-20201022083410-00596.warc.gz"}
|
https://stats.hohoweiya.xyz/tag/metropolis-hastings/
|
# WeiYa's Work Yard
## Tag: Metropolis Hastings
• Metropolis Algorithm
Monte Carlo plays a key role in evaluating integrals and simulating stochastic systems, and the most critical step of Monte Carlo algorithm is sampling from an appropriate probability distribution $\pi (\mathbf x)$. There are two ways to solve this problem, one is to do importance sampling, another is to produce statistically dependent samples based on the idea of Markov chain Monte Carlo sampling.
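The Markov chain Monte Carlo idea mentioned above is compact enough to show in full. A minimal pure-Python sketch of random-walk Metropolis (my own; it assumes only the log of $\pi(\mathbf x)$ up to a normalizing constant, with a symmetric Gaussian proposal):

```python
import math
import random

def metropolis(log_pi, x0, steps, scale=1.0, seed=0):
    """Random-walk Metropolis: dependent samples from pi(x), given log pi up to a constant."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)      # symmetric proposal
        delta = log_pi(prop) - log_pi(x)
        # Accept with probability min(1, pi(prop) / pi(x)).
        if delta >= 0 or rng.random() < math.exp(delta):
            x = prop
        samples.append(x)
    return samples

# Target: standard normal density, known only up to its normalizing constant.
draws = metropolis(lambda x: -0.5 * x * x, 0.0, 20000, scale=2.0)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

The resulting chain is statistically dependent, but its long-run histogram approximates the target; here `mean` and `var` should land near 0 and 1.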
|
2022-01-22 05:40:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9513981342315674, "perplexity": 525.4980433462239}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303747.41/warc/CC-MAIN-20220122043216-20220122073216-00072.warc.gz"}
|
https://judgegirl.csie.org/problem/0/55
|
# 55. The Robots
## I'm a slow walker, but I never walk backwards.
Write a program to predict the fate of two robots. Suppose we have two robots running in an $M$ (horizontal) by $N$ (vertical) unit grid. Both will move at the speed of one square per time step. The first robot carries $F1$ amount of fuel and the second robot carries $F2$. If a robot runs out of fuel, it will stop at that square. Moving to a new square requires one unit of fuel. During the first $N1$ time steps, the first robot $R1$ will move to the north; then, it will move towards the east during the next $E1$ time steps. The first robot will repeat this pattern until it runs out of fuel. The second robot $R2$ will move a little bit differently. During the first $E2$ time steps, the second robot $R2$ will move to the east; then, it will move towards the north during the next $N2$ time steps. Again the second robot will repeat this pattern until it runs out of fuel. If either robot moves "out of bound," it will "wrap around" and reappear (by sort of magic) on the other side of the field. For example, if $M = 7$ and $N = 6$ and a robot at $(5, 5)$ goes north, it will reappear at $(5, 0)$. In addition, if two robots move into the same square, they explode. Now given the starting position of the first robot at $(X1, Y1)$, and the second robot at $(X2, Y2)$, and the amount of fuel they carry ($F1$ and $F2$), determine whether the two robots will explode or not.
## Input
There is only one line of input that contains $M,\; N,\; X1,\; Y1,\; E1,\; N1,\; F1,\; X2,\; Y2,\; E2,\; N2,\; F2$, with the following constraints.
• $0 \lt N,\; M \lt 10000$
• $0 \le X1,\; X2 \lt M$
• $0 \le Y1,\; Y2 \lt N$
• $(X1, Y1) \text{ is not } (X2, Y2)$
• $0 \lt N1,\; E1,\; N2,\; E2$
• $0 \le F1,\; F2 \le 10000$
## Output
There are two cases of output. If two robots explode, output robots explode at time T, where $T$ is the time they explode. Otherwise, output robots will not explode.
## Sample input
7 6 2 0 9 2 100 3 5 2 7 100
## Sample output
robots explode at time 5
## Sample input
7 6 2 0 9 2 6 3 5 2 7 0
## Sample output
robots will not explode
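A direct simulation of the movement rules above reproduces both samples. This is a hedged Python sketch, not an official judge solution; the function name is mine:

```python
def simulate(M, N, X1, Y1, E1, N1, F1, X2, Y2, E2, N2, F2):
    """Step both robots until fuel runs out; return explosion time or None."""
    p1, p2 = (X1, Y1), (X2, Y2)
    t = 0
    while F1 > 0 or F2 > 0:
        t += 1
        if F1 > 0:  # R1: north for N1 steps, then east for E1 steps, repeating
            phase = (t - 1) % (N1 + E1)
            dx, dy = (0, 1) if phase < N1 else (1, 0)
            p1 = ((p1[0] + dx) % M, (p1[1] + dy) % N)  # wrap around the grid
            F1 -= 1
        if F2 > 0:  # R2: east for E2 steps, then north for N2 steps, repeating
            phase = (t - 1) % (E2 + N2)
            dx, dy = (1, 0) if phase < E2 else (0, 1)
            p2 = ((p2[0] + dx) % M, (p2[1] + dy) % N)
            F2 -= 1
        if p1 == p2:
            return t   # robots explode at time t
    return None        # robots will not explode

print(simulate(7, 6, 2, 0, 9, 2, 100, 3, 5, 2, 7, 100))  # 5
```

The phase arithmetic works because each robot moves in every time step until its fuel is exhausted, so after $t$ steps a fueled robot has taken exactly $t$ moves of its repeating pattern.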
|
2021-10-24 13:04:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3729206323623657, "perplexity": 522.4866215684215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585997.77/warc/CC-MAIN-20211024111905-20211024141905-00513.warc.gz"}
|
https://questioncove.com/updates/4d5f0c503147b764f2b0fc6f
|
Mathematics
OpenStudy (anonymous):
How do I solve the integral of x(x-10)^10?
OpenStudy (anonymous):
do a substitution with u=10-x
OpenStudy (anonymous):
sorry kill that u=x-10 that should say
OpenStudy (anonymous):
du/dx = 1
OpenStudy (anonymous):
wait
OpenStudy (anonymous):
that wont work
OpenStudy (anonymous):
this isnt hard
OpenStudy (anonymous):
use binomial formula to expand (x-10)^10
OpenStudy (anonymous):
no i see the problem, yeah it seemed too easy
OpenStudy (anonymous):
integral x [ x^10 + 10 choose 1 x^9 (-1) + 10 choose 2 x^8 * (-1)^2 + ...
OpenStudy (anonymous):
make a pascal triangle
OpenStudy (anonymous):
OpenStudy (anonymous):
online whiteboard
OpenStudy (anonymous):
i think you should be able to use the substitution u=x-10 du/dx = 1 so this is the integral of (u+10)u^10 which is simply u^11 +10u^10 so this gives soln (u^12)/12 +10(u^11)/11 and then put x back in for u
OpenStudy (anonymous):
oh
OpenStudy (anonymous):
You have to use a complex substitution: $\int x(x-10)^{10} \, dx$. Let u = x-10, so x = u + 10 and dx = du. Substitute and profit!
OpenStudy (anonymous):
OpenStudy (anonymous):
it doesn't have to be complex this problem is purely in the reals
OpenStudy (anonymous):
ac, right. i think he meant . wrong choice of words
OpenStudy (anonymous):
Oh, I meant complex as in 'difficult', lol.
OpenStudy (anonymous):
hehe
OpenStudy (anonymous):
i became both of you guys fans
OpenStudy (anonymous):
acland, it would be much tougher if you had to do integral x^10 (x-10)^10 dx
OpenStudy (anonymous):
then there is no avoiding pascal's beautiful triangle
OpenStudy (anonymous):
bad choice of words, you get too used to dealing with complex number systems all the time. For the x^2 one you should be able to use the same idea and have the integral of $(u+10)^2 u^{10}$. yeah if it was x^10 then you would have to look at other ways of solving it
OpenStudy (anonymous):
u = x-10 du = dx u + 10 = x so int (u+10) u^10
OpenStudy (anonymous):
if it was x^10 i just wouldnt bother and would leave my final answer as an integral
OpenStudy (anonymous):
so int u^11 + 10 u^10
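The antiderivative obtained from the substitution can be sanity-checked numerically (a sketch of mine, not part of the thread): differentiating $\frac{(x-10)^{12}}{12} + \frac{10(x-10)^{11}}{11}$ should give back the original integrand.

```python
def F(x):
    """Antiderivative from u = x - 10: u^12/12 + 10*u^11/11 (plus a constant)."""
    u = x - 10.0
    return u ** 12 / 12 + 10 * u ** 11 / 11

def f(x):
    """Original integrand x*(x - 10)^10."""
    return x * (x - 10.0) ** 10

# A central difference of F should reproduce f at a few sample points.
for x in (0.5, 2.0, 9.0):
    h = 1e-6
    approx = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(approx - f(x)) < 1e-3 * max(1.0, abs(f(x)))
```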
OpenStudy (anonymous):
i got a toughie
OpenStudy (anonymous):
A tract of land is bordered by a highway along the y-axis, a dirt road along the x-axis, and a river whose path is given by the equation y = 4 - 0.2x^2, where x and y are in hundreds of meters. The tract is 300 m deep along the dirt road. The value of the land is constant in any strip parallel to the highway and increases as you move away from the highway, with the value given by v = 1000 + 50x dollars per 10,000 m^2 at the sample point (x, y). Find dW, the worth of a strip. Write an integral that equals the worth of the entire tract.
OpenStudy (anonymous):
i thought about using a double integral
|
2022-01-28 19:58:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6543553471565247, "perplexity": 4371.051863238913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306335.77/warc/CC-MAIN-20220128182552-20220128212552-00042.warc.gz"}
|
https://www.gktoday.in/topics/gravitational-lensing/
|
# Gravitational Lensing
Normal lenses, such as the ones used in a magnifying glass or a pair of spectacles, bend the light rays that pass through them by refraction and focus the light somewhere (such as in your eye). Gravitational lensing was one of the first pieces of observational evidence for Albert Einstein’s general theory of relativity – simply put, mass bends light. The gravitational field of a massive ..
|
2022-09-28 05:36:21
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8335771560668945, "perplexity": 775.8110361411015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00636.warc.gz"}
|
http://lesswrong.com/r/discussion/tag/quantum/
|
0 08 October 2016 11:27PM
## The Philosophical Implications of Quantum Information Theory
5 26 February 2016 02:00AM
I was asked to write up a pithy summary of the upshot of this paper. This is the best I could manage.
One of the most remarkable features of the world we live in is that we can make measurements that are consistent across space and time. By "consistent across space" I mean that you and I can look at the outcome of a measurement and agree on what that outcome was. By "consistent across time" I mean that you can make a measurement of a system at one time and then make the same measurement of that system at some later time and the results will agree.
It is tempting to think that the reason we can do these things is that there exists an objective reality that is "actually out there" in some metaphysical sense, and that our measurements are faithful reflections of that objective reality. This hypothesis works well (indeed, seems self-evidently true!) until we get to very small systems, where it seems to break down. We can still make measurements that are consistent across space and time, but as soon as we stop making measurements, then things start to behave very differently than they did before. The classical example of this is the two-slit experiment: whenever we look at a particle we only ever find it in one particular place. When we look continuously, we see the particle trace out an unambiguous and continuous trajectory. But when we don't look, the particle behaves as if it is in more than one place at once, a behavior that manifests itself as interference.
The problem of how to reconcile the seemingly incompatible behavior of physical systems depending on whether or not they are under observation has come to be called the measurement problem. The most common explanation of the measurement problem is the Copenhagen interpretation of quantum mechanics which postulates that the act of measurement changes a system via a process called wave function collapse. In the contemporary popular press you will often read about wave function collapse in conjunction with the phenomenon of quantum entanglement, which is usually referred to as "spooky action at a distance", a phrase coined by Einstein, and intended to be pejorative. For example, here's the headline and first sentence of the above piece:
More evidence to support quantum theory’s ‘spooky action at a distance’
"It’s one of the strangest concepts in the already strange field of quantum physics: Measuring the condition or state of a quantum particle like an electron can instantly change the state of another electron—even if it’s light-years away." (emphasis added)
This sort of language is endemic in the popular press as well as many physics textbooks, but it is demonstrably wrong. The truth is that measurement and entanglement are actually the same physical phenomenon. What we call "measurement" is really just entanglement on a large scale. If you want to see the demonstration of the truth of this statement, read the paper or watch the video or read the original paper on which my paper and video are based. Or go back and read about Von Neumann measurements or quantum decoherence or Everett's relative state theory (often mis-labeled "many-worlds") or relational quantum mechanics or the Ithaca interpretation of quantum mechanics, all of which turn out to be saying exactly the same thing.
Which is: the reason that measurements are consistent across space and time is not because these measurements are a faithful reflection of an underlying objective reality. The reason that measurements are consistent across space and time is because this is what quantum mechanics predicts when you consider only parts of the wave function and ignore other parts.
Specifically, it is possible to write down a mathematical description of a particle and two observers as a quantum mechanical system. If you ignore the particle (this is a formal mathematical operation called a partial trace of an operator matrix) what you are left with is a description of the observers. And if you then apply information theoretical operations to that, what pops out is that the two observers are in classically correlated states. The exact same thing happens for observations made of the same particle at two different times.
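The partial trace mentioned here is concrete enough to compute by hand. A small pure-Python sketch (mine, not from the post) traces the second qubit out of a Bell-state density matrix and recovers the maximally mixed state for the first qubit — the information-theoretic signature of entanglement:

```python
# Density matrix of the Bell state (|00> + |11>)/sqrt(2); basis order |00>, |01>, |10>, |11>.
psi = [2 ** -0.5, 0.0, 0.0, 2 ** -0.5]
rho = [[a * b for b in psi] for a in psi]  # pure-state density matrix |psi><psi|

def partial_trace_B(rho):
    """Trace out the second qubit of a 4x4 two-qubit density matrix (index = 2*a + b)."""
    out = [[0.0, 0.0], [0.0, 0.0]]
    for a in range(2):
        for a2 in range(2):
            for b in range(2):
                out[a][a2] += rho[2 * a + b][2 * a2 + b]
    return out

rho_A = partial_trace_B(rho)  # approximately [[0.5, 0], [0, 0.5]]: maximally mixed
```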
The upshot is that nothing special happens during a measurement. Measurements are not instantaneous (though they are very fast) and they are in principle reversible, though not in practice.
The final consequence of this, the one that grates most heavily on the intuition, is that your existence as a classical entity is an illusion. Because measurements are not a faithful reflection of an underlying objective reality, your own self-perception (which is a kind of measurement) is not a faithful reflection of an underlying objective reality either. You are not, in point of metaphysical fact, made of atoms. Atoms are a very (very!) good approximation to the truth, but they are not the truth. At the deepest level, you are a slice of the quantum wave function that behaves, to a very high degree of approximation, as if it were a classical system but is not in fact a classical system. You are in a very real sense living in the Matrix, except that the Matrix you are living in is running on a quantum computer, and so you -- the very close approximation to a classical entity that is reading these words -- can never "escape" the way Neo did.
As a corollary to this, time travel is impossible, because in point of metaphysical fact there is no time. Your perception of time is caused by the accumulation of entanglements in your slice of the wave function, resulting in the creation of information that you (and the rest of your classically-correlated slice of the wave function) "remember". It is those memories that define the past, you could even say create the past. Going "back to the past" is not merely impossible it is logically incoherent, no different from trying to construct a four-sided triangle. (And if you don't buy that argument, here's a more prosaic one: having a physical entity suddenly vanish from one time and reappear at a different time would violate conservation of energy.)
## [LINK] The Wrong Objections to the Many-Worlds Interpretation of Quantum Mechanics
17 [deleted] 19 February 2015 06:06PM
Sean Carroll, physicist and proponent of Everettian Quantum Mechanics, has just posted a new article going over some of the common objections to EQM and why they are false. Of particular interest to us as rationalists:
Now, MWI certainly does predict the existence of a huge number of unobservable worlds. But it doesn’t postulate them. It derives them, from what it does postulate. And the actual postulates of the theory are quite simple indeed:
1. The world is described by a quantum state, which is an element of a kind of vector space known as Hilbert space.
2. The quantum state evolves through time in accordance with the Schrödinger equation, with some particular Hamiltonian.
That is, as they say, it. Notice you don’t see anything about worlds in there. The worlds are there whether you like it or not, sitting in Hilbert space, waiting to see whether they become actualized in the course of the evolution. Notice, also, that these postulates are eminently testable — indeed, even falsifiable! And once you make them (and you accept an appropriate “past hypothesis,” just as in statistical mechanics, and are considering a sufficiently richly-interacting system), the worlds happen automatically.
Given that, you can see why the objection is dispiritingly wrong-headed. You don’t hold it against a theory if it makes some predictions that can’t be tested. Every theory does that. You don’t object to general relativity because you can’t be absolutely sure that Einstein’s equation was holding true at some particular event a billion light years away. This distinction between what is postulated (which should be testable) and everything that is derived (which clearly need not be) seems pretty straightforward to me, but is a favorite thing for people to get confused about.
Very reminiscent of the quantum physics sequence here! I find that this distinction between number of entities and number of postulates is something that I need to remind people of all the time.
META: This is my first post; if I have done anything wrong, or could have done something better, please tell me!
## How many words do we have and how many distinct concepts do we have?
-4 [deleted] 17 December 2014 11:04PM
In another message, I suggested that, given how many cultures we have to borrow from, that our language may include multiple words from various sources that apply to a single concept.
An example is Reality, or Existence, or Being, or Universe, or Cosmos, or Nature, etc.
Another is Subjectivity, Mind, Consciousness, Experience, Qualia, Phenomenal, Mental, etc.
Is there any problem with accepting these claims so far? Curious what case would be made to the contrary.
(Here's a bit of a contextual aside, between quantum mechanics and cosmology, the words "universe", "multiverse", and "observable universe" mean at least 10 different things, depending on who you ask. People often say the Multiverse comes from Hugh Everett. But what they are calling the multiverse, Everett called "universal wave function", or "universe". How did Everett's universe become the Multiverse? DeWitt came along and emphasized some part of the wave function branching into different worlds. So, if you're following, one Universe, many worlds. Over the next few decades, this idea was popularized as having "many parallel universes", which is obviously inaccurate. Well, a Scottish chap decided to correct this. He stated the Universe was the Universal Wave Function, where it was "a complete one", because that's what "uni" means. And that our perceived worlds of various objects is a "multiverse". One Universe, many Multiverses. Again, the "parallel universes" idea seemed cooler, so as it became more popular the Multiverse became one and the universe became many. What's my point? The use of these words is a legitimate fiasco, and I suggest we abandon them altogether.)
If these claims are found to be palatable, what do they suggest?
I propose, respectfully and humbly as I can imagine there may be compelling alternatives presented here, that in the 21st century, we make a decision about which concepts are necessary, which term we will use to describe that concept, and respectfully leave the remaining terms for the domain of poetry.
Here are the words I think we need:
1. reality
2. model
3. absolute
4. relative
5. subjective
6. objective
7. measurement
8. observer
With these terms I feel we can construct a concise metaphysical framework, consistent with the great rationalists of history, and one that accurately describes Everett's "Relative State Formulation of Quantum Mechanics".
1. Absolute reality is what is. It is relative to no observer. It is real prior to measurement.
2. Subjective reality is what is, relative to a single observer. It exists at measurement.
3. Objective reality is the model relative to all observers. It exists post-measurement.
Everett's Relative State formulation, is roughly this:
1. The wave function is the "absolute state" of the model
2. The wave function contains an observer and their measurement apparatus
3. An observer makes a measurements and records the result in a memory
4. those measurement records are the "relative state" of the model
Here we see that the words multiverse and universe are abandoned for absolute and relative states, which is actually the language used in the Relative State Formulation.
My conclusion then, for you consideration and comment, is that a technical view of reality can be attained by having a select set of terms, and this view is not only consistent with themes of philosophy (which I didn't really explain) but also the proper framework in which to interpret quantum mechanics (ala Everett).
(I'm not sure how familiar everyone here is with Everett specifically or not. His thesis depended on "automatically functioning machines" that make measurements with sensory gear and record them. After receiving his PhD, he left theoretical physics, and had a life-long fascination with computer vision and computer hearing. That suggests to me that the reason his papers have been largely confounding to physicists generally is that they didn't realize the extent to which Everett really thought he could mathematically model an observer.)
I should note, it may clarify things to add another term, "truth", though this would in general be taken as an analog of "real". For example, if something is absolutely true, then it is of absolute reality. If something is objectively true, then it is of objective reality. The word "knowledge" in this sense is a poetic word for objective truth, understood on the premise that objective truth is not absolute truth.
## A new derivation of the Born rule
15 25 June 2014 03:07PM
This post is an explanation of a recent paper coauthored by Sean Carroll and Charles Sebens, where they propose a derivation of the Born rule in the context of the Many World approach to quantum mechanics. While the attempt itself is not fully successful, it contains interesting ideas and it is thus worthwhile to know.
A note to the reader: here I will try to enlighten the preconditions and give only a very general view of their method, and for this reason you won’t find any equation. It is my hope that if after having read this you’re still curious about the real math, you will point your browser to the preceding link and read the paper for yourself.
If you are not totally new to LessWrong, you should know by now that the preferred interpretation of quantum mechanics (QM) around here is the Many Worlds Interpretation (MWI), which denies the collapse of the wave-function and postulates a distinct reality (that is, a branch) for every base state composing a quantum superposition.
MWI historically suffered from three problems: the absence of macroscopic superpositions, the preferred basis problem, and the derivation of the Born rule. The development of decoherence famously solved the first and, to a lesser degree, the second problem, but the third remains one of the most poorly understood aspects of the theory.
Quantum mechanics assigns an amplitude, a complex number, to each branch of a superposition, and postulates that the probability for an observer to find the system in that branch is the (squared) norm of the amplitude. This, very briefly, is the content of the Born rule (for pure states).
Quantum mechanics remains agnostic about the ontological status of both amplitudes and probabilities, but MWI, assigning a reality status to every branch, demotes ontological uncertainty (which branch will become real after observation) to indexical uncertainty (which branch the observer will find itself correlated to after observation).
Simple indexical uncertainty, though, cannot reproduce the exact predictions of QM: by the indifference principle, if you have no information privileging any member of a set of hypotheses, you should assign equal probability to each one. This leads to forming a probability distribution by counting the branches, which coincides with amplitude-derived probabilities only in special circumstances. This discrepancy, and how to account for it, constitutes the Born rule problem in MWI.
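A two-line numerical contrast makes the discrepancy concrete; the amplitudes below are illustrative, not drawn from any particular experiment:

```python
import numpy as np

# A toy two-branch superposition with unequal amplitudes
# (illustrative numbers, not from any specific experiment).
amplitudes = np.array([1.0, 2.0], dtype=complex)
amplitudes /= np.linalg.norm(amplitudes)  # normalize the state

# Born rule: probability is the squared norm of each amplitude.
born_probs = np.abs(amplitudes) ** 2

# Naive branch counting: two branches, so indifference assigns 1/2 each.
counting_probs = np.full(len(amplitudes), 1 / len(amplitudes))

print(born_probs)      # [0.2 0.8]
print(counting_probs)  # [0.5 0.5]
```

Only when all amplitudes have equal magnitude do the two prescriptions agree.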
There have been of course many attempts at solving it, for a recollection I quote directly the article:
One approach is to show that, in the limit of many observations, branches that do not obey the Born Rule have vanishing measure. A more recent twist is to use decision theory to argue that a rational agent should act as if the Born Rule is true. Another approach is to argue that the Born Rule is the only well-defined probability measure consistent with the symmetries of quantum mechanics.
These proposals have failed to uniformly convince physicists that the Born rule problem is solved, and the paper by Carroll and Sebens is another attempt to reach a solution.
Before describing their approach, there are some assumptions that have to be clarified.
The first, and this is good news, is that they treat probabilities as rational degrees of belief about a state of the world. They are thus using a Bayesian approach, although they never call it that.
The second is that they’re using self-locating indifference, again from a Bayesian perspective.
Self-locating indifference is the principle that you should assign equal probabilities to find yourself in different places in the universe, if you have no information that distinguishes the alternatives. For a Bayesian, this is almost trivial: self-locating propositions are propositions like any other, so the principle of indifference should be used on them as it should on any other prior information. This is valid for quantum branches too.
The third assumption is where they start to deviate from pure Bayesianism: it’s what they call Epistemic Separability Principle, or ESP. In their words:
the outcome of experiments performed by an observer on a specific system shouldn’t depend on the physical state of other parts of the universe.
This is a kind of Markov condition: the requirement that the system is such that it screens the interaction between the observer and the observed system from every possible influence of the environment.
It is obviously false for many partitions of a system into an experiment and an environment, but rather than taking it as a Principle, we can make it an assumption: an experiment is such only if it obeys the condition.
In the context of QM, this condition amounts to splitting the universal wave-function into two components, the experiment and the environment, so that there's no entanglement between the two, and to considering only interactions that factor as a product of an evolution for the environment and an evolution for the experiment. In this case, the environment evolution acts as the identity operator on the experiment, and does not affect the behavior of the experiment wave-function.
Thus, their formulation requires that the probability that an observer finds itself in a certain branch after a measurement is independent of the operations performed on the environment.
Note, though, an unspoken but very important point: probabilities of this kind depend uniquely on the superposition structure of the experiment.
A probability, being an abstract degree of belief, can depend on all sorts of prior information. With their quantum version of ESP, Carroll and Sebens are declaring that, in a factored environment, the probabilities of a subsystem do not depend on the information one has about the environment. Indeed, in this treatment, they are equating factorization with lack of logical connection.
This is of course true in quantum mechanics, but is a significant burden in a pure Bayesian treatment.
That said, let’s turn to their setup.
They imagine a system in a superposition of base states, which first interacts and decoheres with an environment, then gets perceived by an observer. This sequence is crucial: the Carroll-Sebens move can only be applied when the system already has decohered with a sufficiently large environment.
I say “sufficiently large” because the next step is to consider a unitary transformation on the “system+environment” block. This transformation needs to be of this kind:
- it respects ESP, in that it has to factor as an identity transformation on the “observer+system” block;
- it needs to equally distribute the probability of each branch in the original superposition on a different branch in the decohered block, according to their original relative measure.
Then, by a simple method of rearranging the labels of the decohered basis, one can show that the correct probabilities come out by the indifference principle, in the very same way that the principle is used to derive the uniform probability distribution in the second chapter of Jaynes' Probability Theory.
As an example, consider a superposition of a quantum bit, and say that one branch has a higher measure than the other by a factor of the square root of 2. In this case the environment needs at least 8 different base states to be relabeled in such a way as to make the indifference principle work.
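The relabeling move can be sketched numerically. Here I use a simpler rational split (branch probabilities 1/3 and 2/3) rather than the post's square-root-of-2 example, purely for illustration:

```python
from fractions import Fraction
from math import lcm

# Toy version of the relabeling move, assuming a simple rational split:
# branch probabilities 1/3 and 2/3.
weights = [Fraction(1, 3), Fraction(2, 3)]
d = lcm(*(w.denominator for w in weights))  # common denominator, here 3

# Split branch i into w_i * d equal-measure sub-branches, each tagged by
# a distinct environment state.
sub_branches = []
for i, w in enumerate(weights):
    sub_branches += [i] * int(w * d)
# sub_branches == [0, 1, 1]: three equal-measure sub-branches.

# Indifference over the equal sub-branches recovers the original weights.
probs = [Fraction(sub_branches.count(i), len(sub_branches))
         for i in range(len(weights))]
print(probs)  # [Fraction(1, 3), Fraction(2, 3)]
```

The number of environment states needed is the common denominator of the branch probabilities, which is why irrational ratios are out of reach for any finite environment.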
In theory, in this way you can only show that the Born rule is valid for amplitudes which differ from one another by the square root of a rational number. Again I quote the paper for their conclusion:
however, since this is a dense set, it seems reasonable to conclude that the Born Rule is established.
Evidently, this approach suffers from a number of limits: the first and the most evident is that it works only in a situation where the system to be observed has already decohered with an environment. It is not applicable to, say, a situation where a detector reads a quantum superposition directly, e.g. in a Stern-Gerlach experiment.
The second limit, although less serious, is that it can work only when the system to be observed decoheres with an environment that has sufficiently many base states to distribute the relative measure among different branches. This number, for a transcendental amplitude, is bound to be infinite.
The third limit is that it can only work if we are allowed to interact with the environment in such a way as to leave the amplitudes of the interaction between the system and the observer untouched.
All of these, which are understood as limits, can naturally be reversed and considered as defining conditions, saying: the Born rule is valid only within those limits.
I’ll leave it to you to determine if this constitutes a sufficient answers to the Born rule problem in MWI.
## Common sense quantum mechanics
11 15 May 2014 08:10PM
Related to: Quantum physics sequence.
TLDR: Quantum mechanics can be derived from the rules of probabilistic reasoning. The wavefunction is a mathematical vehicle to transform a nonlinear problem into a linear one. The Born rule that is so puzzling for MWI results from the particular mathematical form of this functional substitution.
This is a brief overview of a recent paper in Annals of Physics (recently mentioned in Discussion):
by Hans De Raedt, Mikhail I. Katsnelson, and Kristel Michielsen. Abstract:
It is shown that the basic equations of quantum theory can be obtained from a straightforward application of logical inference to experiments for which there is uncertainty about individual events and for which the frequencies of the observed events are robust with respect to small changes in the conditions under which the experiments are carried out.
In a nutshell, the authors use the "plausible reasoning" rules (as in, e.g., Jaynes' Probability Theory) to recover the quantum-physical results for the EPR and Stern–Gerlach experiments by adding a notion of experimental reproducibility in a mathematically well-formulated way and without any "quantum" assumptions. Then they show how the Schrodinger equation (SE) can be obtained from the nonlinear variational problem on the probability P for the particle-in-a-potential problem when the classical Hamilton-Jacobi equation holds "on average". The SE allows one to transform the nonlinear variational problem into a linear one, and in the course of said transformation, the (real-valued) probability P and the action S are combined into a single complex-valued function ψ ~ P^(1/2) exp(iS), which becomes the argument of the SE (the wavefunction).
This casts the "serious mystery" of Born probabilities in a new light. Instead of the observed frequency being the squared amplitude of a "physically fundamental" wavefunction, the wavefunction is seen as a mathematical vehicle that converts a difficult nonlinear variational problem for inferential probability into a manageable linear PDE, where it so happens that the probability enters the wavefunction under a square root.
Below I will excerpt some math from the paper, mainly to show that the approach actually works, but outlining just the key steps. This will be followed by some general discussion and reflection.
1. Plausible reasoning and reproducibility
The authors start from the usual desiderata that are well laid out in Jaynes' Probability Theory and elsewhere, and add to them another condition:
There may be uncertainty about each event. The conditions under which the experiment is carried out may be uncertain. The frequencies with which events are observed are reproducible and robust against small changes in the conditions.
Mathematically, this is the requirement that the probability P(x|θ,Z) of observation x, given an uncertain experimental parameter θ and the rest of our knowledge Z, is maximally robust to small changes in θ, with the degree of robustness itself independent of θ. Using log-probabilities, this amounts to minimizing the "evidence"
$\mathrm{Ev}=\ln\frac{P\left(x|\theta+\epsilon,Z\right)}{P\left(x|\theta,Z\right)}$
for any small ε so that |Ev| is not a function of θ (but the probability is).
2. The Einstein–Podolsky–Rosen–Bohm experiment
There is a source S that, when activated, sends a pair of signals to two routers R1 and R2. Each router then sends the signal to one of its two detectors Di± (i = 1, 2). Each router can be rotated, and we denote by θ the angle between them. The experiment is repeated N times, yielding the data set {x1,y1}, {x2,y2}, ..., {xN,yN}, where x and y are the outcomes at the two detectors (+1 or –1). We want to find the probability P(x,y|θ,Z).
After some calculations it is found that the single-trial probability can be expressed as P(x,y|θ,Z) = (1 + xy E12(θ))/4, where E12(θ) = Σx,y=±1 xy P(x,y|θ,Z) is a periodic function.
From the properties of Bernoulli trials it follows that, for a data set of N trials with nxy total outcomes of each type {x,y},
$\mathrm{Ev} = \sum_{x,y=\pm1} n_{xy} \ln\frac{P\left(x,y|\theta+\epsilon,Z\right)}{P\left(x,y|\theta,Z\right)}$
and expanding this in a Taylor series it is found that
$\mathrm{Ev} = -\frac{N\epsilon^{2}}{2} \sum_{x,y=\pm1} \frac{1}{P\left(x,y|\theta,Z\right)} \left(\frac{\partial P\left(x,y|\theta,Z\right)}{\partial\theta}\right)^{2} + O(\epsilon^{3})$
The expression in the sum is the Fisher information IF for P. The maximum-robustness requirement means it must be minimized. Writing it as IF = (dE12(θ)/dθ)2 / (1 – E12(θ)2), one finds that E12(θ) = cos(θIF1/2 + φ), and since E12 must be periodic in the angle, IF1/2 must be a natural number, so the smallest possible value is IF = 1. Choosing φ = π, it is found that E12(θ) = –cos(θ), and we obtain the result that
$P\left(x,y|\theta,Z\right) = \frac{1 - xy\cos\theta}{4}$
which is the well-known correlation of two spin-1/2 particles in the singlet state.
Needless to say, our derivation did not use any concepts of quantum theory. Only plain, rational reasoning strictly complying with the rules of logical inference and some elementary facts about the experiment were used
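The result is easy to check numerically. The sketch below (an illustration, not the paper's method) verifies that P(x,y|θ,Z) = (1 + xy·E12(θ))/4 with E12(θ) = −cos θ is normalized and attains the minimal Fisher information IF = 1 at an arbitrary angle:

```python
import numpy as np

# Numerical check: P(x,y|theta) = (1 - x*y*cos(theta))/4 is normalized
# and has Fisher information I_F = 1 at every angle.
def P(x, y, theta):
    return (1 - x * y * np.cos(theta)) / 4

theta = 0.7  # an arbitrary router angle
outcomes = [(x, y) for x in (-1, 1) for y in (-1, 1)]

total = sum(P(x, y, theta) for x, y in outcomes)
# dP/dtheta = x*y*sin(theta)/4, so the Fisher information is:
fisher = sum((x * y * np.sin(theta) / 4) ** 2 / P(x, y, theta)
             for x, y in outcomes)

print(total, round(fisher, 12))  # 1.0 1.0
```

Any other periodic E12 would give IF > 1, i.e. less robust frequencies, which is the sense in which the singlet correlation is singled out.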
3. The Stern–Gerlach experiment
This case is analogous to, and simpler than, the previous one. The setup contains a source emitting a particle with magnetic moment S, a magnet with field in the direction a, and two detectors D+ and D–.
Similarly to the previous section, P(x|θ,Z) = (1 + xE(θ))/2, where E(θ) = P(+|θ,Z) – P(–|θ,Z) is an unknown periodic function. By complete analogy we seek the minimum of IF and find that E(θ) = ±cos(θ), so that
$P\left(x|\theta,Z\right) = \frac{\left(1 + x\mathbf{a}\cdot\mathbf{S}\right)}{2}$
In quantum theory, [this] equation is in essence just the postulate (Born’s rule) that the probability to observe the particle with spin up is given by the square of the absolute value of the amplitude of the wavefunction projected onto the spin-up state. Obviously, the variability of the conditions under which an experiment is carried out is not included in the quantum theoretical description. In contrast, in the logical inference approach, [equation] is not postulated but follows from the assumption that the (thought) experiment that is being performed yields the most reproducible results, revealing the conditions for an experiment to produce data which is described by quantum theory.
To repeat: there are no wavefunctions in the present approach. The only assumption is that a dependence of outcome on particle/magnet orientation is observed with robustness/reproducibility.
4. Schrodinger equation
A particle is located in unknown position θ on a line segment [–L, L]. Another line segment [–L, L] is uniformly covered with detectors. A source emits a signal and the particle's response is detected by one of the detectors.
After going to the continuum limit of infinitely many infinitely small detectors and accounting for translational invariance it is possible to show that the position of the particle θ and of the detector x can be interchanged so that dP(x|θ,Z)/dθ = –dP(x|θ,Z)/dx.
In exactly the same way as before we need to minimize Ev by minimizing the Fisher information, which is now
$I_F=\int{\frac{1}{P\left(x|\theta,Z\right)}\left(\frac{\partial P\left(x|\theta,Z\right)}{\partial x}\right)^{2}}dx$
However, simply solving this minimization problem will not give us anything new because nothing so far accounted for the fact that the particle moves in a potential. This needs to be built into the problem. This can be done by requiring that the classical Hamilton-Jacobi equation holds on average. Using the Lagrange multiplier method, we now need to minimize the functional
$F(\theta)=\int{\left\{\frac{1}{P\left(x|\theta,Z\right)}\left(\frac{\partial P\left(x|\theta,Z\right)}{\partial x}\right)^{2}+\lambda\left[\left(\frac{\partial S\left(x\right)}{\partial x}\right)^{2}+2m\left[V\left(x\right)-E\right]\right]P\left(x|\theta,Z\right)\right\}}dx$
Here S(x) is the action (Hamilton's principal function). This minimization yields solutions for the two functions P(x|θ,Z) and S(x). It is a difficult nonlinear minimization problem, but it is possible to find a matching solution in a tractable way using a mathematical "trick". It is known that standard variational minimization of the functional
$Q\left(\theta\right)=\int{\left\{4\frac{\partial\psi^{*}\left(x|\theta,Z\right)}{\partial x}\frac{\partial\psi\left(x|\theta,Z\right)}{\partial x}+2m\lambda\left[V\left(x\right)-E\right]\psi^{*}\left(x|\theta,Z\right)\psi\left(x|\theta,Z\right)\right\}dx}$
yields the Schrodinger equation for its extrema. On the other hand, if one makes the substitution combining two real-valued functions P and S into a single complex-valued ψ,
$\psi\left(x|\theta,Z\right)=\sqrt{P\left(x|\theta,Z\right)}e^{iS\left(x\right)\sqrt{\lambda}/2}$
Q is immediately transformed into F, concluding the derivation of the Schrodinger equation. Incidentally, ψ is constructed so that P(x|θ,Z) = |ψ(x|θ,Z)|2, which is the Born rule.
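The substitution can be illustrated numerically. P and S below are arbitrary smooth functions chosen for illustration, not solutions for any particular potential:

```python
import numpy as np

# Sketch of the substitution psi = sqrt(P) * exp(i*S*sqrt(lambda)/2).
# P and S are arbitrary smooth illustrative functions.
x = np.linspace(-1.0, 1.0, 201)
P = np.exp(-x**2)
P /= P.sum()                # normalize the (discretized) probability
S = 0.5 * x**3              # an arbitrary action function
lam = 4.0                   # the Lagrange multiplier

psi = np.sqrt(P) * np.exp(1j * S * np.sqrt(lam) / 2)

# The Born rule is built into the substitution: |psi|^2 equals P exactly,
# whatever P and S are.
print(np.allclose(np.abs(psi)**2, P))  # True
```

The point is that |ψ|² = P holds by construction, before any dynamics is solved: the "rule" is a property of the change of variables, not an extra postulate about the solution.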
Summing up the meaning of Schrodinger equation in the present context:
Of course, a priori there is no good reason to assume that on average there is agreement with Newtonian mechanics ... In other words, the time-independent Schrodinger equation describes the collective of repeated experiments ... subject to the condition that the averaged observations comply with Newtonian mechanics.
The authors then proceed to derive the time-dependent SE (independently from the stationary SE) in a largely similar fashion.
5. What it all means
Classical mechanics assumes that everything about the system's state and dynamics can be known (at least in principle). It starts from axioms and proceeds to derive its conclusions deductively (as opposed to inductive reasoning). In this respect quantum mechanics is to classical mechanics what probabilistic logic is to classical logic.
Quantum theory is viewed here not as a description of what really goes on at the microscopic level, but as an instance of logical inference:
in the logical inference approach, we take the point of view that a description of our knowledge of the phenomena at a certain level is independent of the description at a more detailed level.
and
quantum theory does not provide any insight into the motion of a particle but instead describes all what can be inferred (within the framework of logical inference) from or, using Bohr’s words, said about the observed data
Such a treatment of QM is similar in spirit to Jaynes' Information Theory and Statistical Mechanics papers (I, II). Traditionally statistical mechanics/thermodynamics is derived bottom-up from the microscopic mechanics and a series of postulates (such as ergodicity) that allow us to progressively ignore microscopic details under strictly defined conditions. In contrast, Jaynes starts with minimum possible assumptions:
"The quantity x is capable of assuming the discrete values xi ... all we know is the expectation value of the function f(x) ... On the basis of this information, what is the expectation value of the function g(x)?"
and proceeds to derive the foundations of statistical physics from the maximum entropy principle. Of course, these papers deserve a separate post.
This community should be particularly interested in how this all aligns with the many-worlds interpretation. Obviously, any conclusions drawn from this work can only apply to the "quantum multiverse" level and cannot rule out or support any other many-worlds proposals.
In quantum physics, MWI does quite naturally resolve some difficult issues of the "wavefunction-centric" view. However, we see here that the concept of the wavefunction is not really central to quantum mechanics. This removes the whole problem of wavefunction collapse that MWI seeks to resolve.
The Born rule is arguably a big issue for MWI. But here it essentially boils down to "x is quadratic in t where t = sqrt(x)". Without the wavefunction (only probabilities) the problem simply does not appear.
Here is another interesting conclusion:
if it is difficult to engineer nanoscale devices which operate in a regime where the data is reproducible, it is also difficult to perform these experiments such that the data complies with quantum theory.
In particular, this relates to the decoherence of a system via random interactions with the environment. Decoherence then appears not as a physical, intrinsically quantum phenomenon of "worlds drifting apart", but as a property of experiments that are not well isolated from the influence of the environment and are therefore not reproducible. Well-isolated experiments are robust (and described by "quantum inference"), and poorly isolated experiments are not (hence quantum inference does not apply).
In sum, it appears that quantum physics when viewed as inference does not require many-worlds any more than probability theory does.
## Quantum Decisions
1 12 May 2014 09:49PM
CFAR sometimes plays a Monday / Tuesday game (invented by palladias). Copying from the URL:
On Monday, your proposition is true. On Tuesday, your proposition is false. Tell me a story about each of the days so I can see how they are different. Don't just list the differences (because you're already not doing that well). Start with "I wake up" so you start concrete and move on in that vein, naming the parts of your day that are identical as well as those that are different.
So my question is (edited on 2014/05/13):
On Monday, I make my decisions by rolling a normal die. Example: should I eat vanilla or chocolate ice-cream? I then decide that if I roll 4 or higher on a 6-sided die, I'll pick vanilla. I roll the die, get a 3, and so proceed to eat chocolate ice-cream.
On Tuesday, I use the same procedure, but with a quantum random number generator instead. (For the purpose of this discussion, let's assume that I can actually find a true/reliable generator. Maybe I'm shooting a photon through a half-silvered mirror.)
What's the difference? (Relevant discussion pointed out by Pfft.)
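For concreteness, here is a minimal sketch of the two days, with Python's `secrets` module standing in for the quantum generator; that substitution is of course an assumption, since `secrets` draws on classical OS entropy rather than a photon and a mirror:

```python
import random
import secrets

# The same decision procedure on both days; only the entropy source changes.
def decide(roll):
    return "vanilla" if roll >= 4 else "chocolate"

monday = decide(random.randint(1, 6))       # classical pseudo-random die
tuesday = decide(secrets.randbelow(6) + 1)  # stand-in for the quantum RNG

print(monday, tuesday)  # each flavour comes up with probability 1/2
```

Statistically the two days are indistinguishable; the question above is whether they differ in the branching structure of the resulting world.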
## Another problem with quantum measure
1 18 November 2013 11:03AM
Let's play around with the quantum measure some more. Specifically, let's posit a theory T that claims that the quantum measure of our universe is increasing - say by 50% each day. Why could this be happening? Well, here's a quasi-justification for it: imagine there are lots and lots of universes, most of them in chaotic random states, jumping around to other chaotic random states, in accordance with the usual laws of quantum mechanics. Occasionally, one of them will partially tunnel, by chance, into the same state our universe is in - and will then evolve forwards in time exactly as our universe does. Over time, we'll accumulate an ever-growing measure.
That theory sounds pretty unlikely, no matter what feeble attempts are made to justify it. But T is observationally indistinguishable from our own universe, and has a non-zero probability of being true. It's the reverse of the (more likely) theory presented here, in which the quantum measure was constantly being diminished. And it's very bad news for theories that treat the quantum measure (squared) as akin to a probability, without ever renormalising. It implies that one must continually sacrifice for the long term: any pleasure today is wasted, as the same pleasure would be weighted so much more tomorrow, next week, next year, next century... A slight fleeting smile on the face of the last human is worth more than all the ecstasy of the previous trillions.
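To get a feel for the numbers, here is the weight T assigns to a fixed pleasure as a function of delay, under the simplifying assumption that the stated 50% daily growth applies directly to the probability weight:

```python
# Under theory T, the universe's weight grows by 50% per day, so an
# identical pleasure counts for more the later it occurs. The growth
# rate is the post's number; applying it directly to the probability
# weight is my simplifying assumption.
daily_growth = 1.5

def weight(days_from_now):
    return daily_growth ** days_from_now

# Relative worth of one unit of utility a year from now vs. today:
ratio = weight(365) / weight(0)
print(f"{ratio:.2e}")  # on the order of 10**64
```

An exponential of this size swamps any ordinary discounting, which is what makes the conclusion in the paragraph above so perverse.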
One solution to the "quantum measure is continually diminishing" problem was to note that as the measure of the universe diminished, it would eventually get so low that any alternative, non-measure-diminishing theory, no matter how initially unlikely, would predominate. But that solution is not available here - indeed, the argument runs in reverse and makes the situation worse. No matter how initially unlikely the "quantum measure is continually increasing" theory is, eventually the measure will become so high that it completely dominates all other theories.
## Quantum versus logical bombs
13 17 November 2013 03:14PM
Child, I'm sorry to tell you that the world is about to end. Most likely. You see, this madwoman has designed a doomsday machine that will end all life as we know it - painlessly and immediately. It is attached to a supercomputer that will calculate the 10^100th digit of pi - if that digit is zero, we're safe. If not, we're doomed and dead.
However, there is one thing you are allowed to do - switch out the logical trigger and replace it with a quantum trigger, which instead generates a quantum event that prevents the bomb from triggering with 1/10th measure squared (in the other cases, the bomb goes off). Are you ok with paying €5 to replace the triggers like this?
If you treat quantum measure squared exactly as probability, then you shouldn't see any reason to replace the trigger. But if you believe in many-worlds quantum mechanics (or think that MWI is possibly correct with non-zero probability), you might be tempted to accept the deal - after all, everyone will survive in one branch. But strict total utilitarians may still reject the deal. Unless they refuse to treat quantum measure as akin to probability in the first place (meaning they would accept all quantum suicide arguments), they tend to see a universe with a tenth of the measure squared as exactly as valuable as a 10% chance of a universe with full measure. And they'd even do the reverse, replacing a quantum trigger with a logical one, if you paid them €5 to do so.
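The strict total utilitarian's indifference can be written out as a toy calculation; U is an arbitrary utility for humanity surviving, and the €5 is ignored for clarity:

```python
# Toy expected-utility comparison of the two triggers, treating quantum
# measure squared exactly like probability (the strict total-utilitarian
# stance described above). U is an arbitrary survival utility.
U = 1.0

# Logical trigger: the digit of pi is fixed but unknown; subjective
# probability 1/10 of survival at full measure.
eu_logical = (1 / 10) * U

# Quantum trigger: every branch is real, but the surviving branch
# carries measure squared 1/10.
eu_quantum = (1 / 10) * U

print(eu_logical == eu_quantum)  # True: hence the indifference
```

The two deals only come apart if measure squared is valued differently from probability, which is exactly the question the post is probing.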
Still, most people, in practice, would choose to change the logical bomb for a quantum bomb, if only because they were slightly uncertain about their total utilitarian values. It would seem self evident that risking the total destruction of humanity is much worse than reducing its measure by a factor of 10 - a process that would be undetectable to everyone.
Of course, once you agree with that, we can start squeezing. What if the quantum trigger only has a 1/20 measure-squared "chance" of saving us? 1/1000? 1/10000? If you don't want to fully accept the quantum immortality arguments, you need to stop somewhere - but at what point?
## Are coin flips quantum random to my conscious brain-parts?
6 19 February 2013 09:51AM
Hello rationality friends! I have a question that I bet some of you have thought about...
I hear lots of people saying that classical coin flips are not "quantum random events", because the outcome is very nearly determined by my thumb movement when I flip the coin. More precisely, one can say that the state of my thumb and the state of the landed coin are strongly entangled, such that, say, 99% of the quantum measure of coin-flip outcomes consistent with my observed post-flip thumb motion lands heads.
First of all, I've never actually seen an order of magnitude estimate to support this claim, and would love it if someone here can provide or link to one!
Second, I'm not sure how strongly entangled my thumb movement is with my subjective experience, i.e., with the parts of my brain that consciously process the decision to flip and the outcome. So even if the coin outcome is almost perfectly determined by my thumb, it might not be almost perfectly determined by my decision to flip the coin.
For example, while the thumb movement happens, a lot of calibration goes on between my thumb, my motor cortex, and my cerebellum (which certainly affects but does not seem to directly process conscious experience), precisely because my motor cortex is unable to send, on its own, a precise and accurate enough signal to my thumb that achieves the flicking motion that we eventually learn to do in order to flip coins. Some of this inability is due to small differences in environmental factors during each flip that the motor cortex does not itself process directly, but is processed by the cerebellum instead. Perhaps some of this inability also comes directly from quantum variation in neuron action potentials being reached, or perhaps some of the aforementioned environmental factors arise from quantum variation.
Anyway, I'm altogether not *that* convinced that the outcome of a coin flip is sufficiently dependent on my decision to flip as to be considered "not a quantum random event" by my conscious brain. Can anyone provide me with some order of magnitude estimates to convince me either way about this? I'd really appreciate it!
ETA: I am not asking if coin flips are "random enough" in some strange, undefined sense. I am actually asking about quantum entanglement here. In particular, when your PFC decides for planning reasons to flip a coin, does the evolution of the wave function produce a world that is in a superposition of states (coin landed heads)⊗(you observed heads) + (coin landed tails)⊗(you observed tails)? Or does a monomial state result, either (coin landed heads)⊗(you observed heads) or (coin landed tails)⊗(you observed tails) depending on the instance?
At present, despite having been told many times that coin flips are not "in superpositions" relative to "us", I'm not convinced that there is enough mutual information connecting my frontal lobe and the coin for the state of the coin to be entangled with me (i.e. not "in a superposed state") before I observe it. I realize this is somewhat testable, e.g., if the state amplitudes of the coin can be forced to have complex arguments differing in a predictable way so as to produce expected and measurable interference patterns. This is what we have failed to produce at a macroscopic level in attempts to produce visible superpositions. But I don't know if we fail to produce messier, less-visibly-self-interfering superpositions, which is why I am still wondering about this...
Any help / links / fermi estimates on this will be greatly appreciated!
## False vacuum: the universe playing quantum suicide
16 09 January 2013 05:04PM
Imagine that the universe is approximately as it appears to be (I know, this is a controversial proposition, but bear with me!). Further imagine that the many worlds interpretation of Quantum mechanics is true (I'm really moving out of Less Wrong's comfort zone here, aren't I?).
Now assume that our universe is in a situation of false vacuum - the universe is not in its lowest energy configuration. Somewhere, at some point, our universe may tunnel into the true vacuum, resulting in an expanding bubble of destruction that will eat the entire universe at high speed, destroying all matter and life. In many worlds, such a collapse need not be terminal: life could go on in a branch of lower measure. In fact, anthropically, life will go on somewhere, no matter how unstable the false vacuum is.
So now assume that the false vacuum we're in is highly unstable - the measure of the branch in which our universe survives goes down by a factor of a trillion every second. We only exist because we're in the branch of measure a trillionth of a trillionth of a trillionth of... all the way back to the Big Bang.
None of these assumptions make any difference to what we'd expect to see observationally: only a good enough theory can say that they're right or wrong. You may notice that this setup transforms the whole universe into a quantum suicide situation.
The question is, how do you go about maximising expected utility in this situation? I can think of a few different approaches:
1. Gnaw on the bullet: take the quantum measure as a probability. This means that you now have a discount factor of a trillion every second. You have to rush out and get/do all the good stuff as fast as possible: a delay of a second costs you a reduction in utility of a trillion. If you are a negative utilitarian, you also have to rush to minimise the bad stuff, but you can also take comfort in the fact that the potential for negative utility across the universe is going down fast.
2. Use relative measures: care about the relative proportion of good worlds versus bad worlds, while assigning zero to those worlds where the vacuum has collapsed. This requires a natural zero to make sense, and can be seen as quite arbitrary: what would you do about entangled worlds, or about the non-zero probability that the vacuum-collapsed worlds may have worthwhile life in them? Would the relative-measure user also assign zero value to worlds that are empty of life for reasons other than vacuum collapse? For instance, would they be in favour of programming an AI's friendliness using random quantum bits, if they could be reassured that, if friendliness fails, the AI would kill everyone immediately?
3. Deny the measure: construct a metaethical theory where only classical probabilities (or classical uncertainties) count as probabilities. Quantum measures do not: you care about the sum total of all branches of the universe. Universes in which the photon went through the top slit, went through the bottom slit, or was in an entangled state that went through both slits... to you, these are three completely separate universes, and you can assign totally unrelated utilities to each one. This seems quite arbitrary, though: how are you going to construct these preferences across the whole of the quantum universe, when you forged your current preferences on a single branch?
4. Cheat: note that nothing in life is certain. Even if we have the strongest evidence imaginable about vacuum collapse, there's always a tiny chance that the evidence is wrong. After a few seconds, that probability will be dwarfed by the discount factor of the collapsing universe. So go about your business as usual, knowing that most of the measure/probability mass remains in the non-collapsing universe. This can get tricky if, for instance, the vacuum collapsed more slowly than by a factor of a trillion a second. Would you be in a situation where you should behave as if you believed in vacuum collapse for another decade, say, and then switch to a behaviour that assumed non-collapse afterwards? Also, would you take seemingly stupid bets, like bets at a trillion trillion trillion to one that the next piece of evidence will show no collapse (if you lose, you're likely in the low-measure universe anyway, so the loss is minute)?
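To make the tension between approaches 1 and 4 concrete, here is a minimal numerical sketch (not from the post; the decay rate and the prior that the collapse theory is wrong are both made-up illustrative numbers):

```python
# Illustrative sketch: compare "gnaw on the bullet" (track the raw
# surviving measure) with "cheat" (keep a small prior that the
# collapse evidence is simply wrong). All numbers are assumptions.

def surviving_measure(seconds, decay=1e-12):
    """Quantum measure of the surviving branch after `seconds`,
    with an assumed decay factor of a trillion per second."""
    return decay ** seconds

def cheat_weight(seconds, p_theory_wrong=1e-6, decay=1e-12):
    """Total weight on 'the world still exists at t': the surviving
    branch's measure if collapse is real, plus the full measure in
    the case where the collapse theory is mistaken."""
    return p_theory_wrong + (1 - p_theory_wrong) * surviving_measure(seconds, decay)

# After a single second, the surviving branch's measure (1e-12) is
# already dwarfed by even a modest 1e-6 chance the evidence is wrong,
# so the "cheat" agent behaves almost as if there were no collapse.
print(surviving_measure(1))
print(cheat_weight(1))
```

The point of the sketch is that the fixed "theory is wrong" term quickly dominates the exponentially shrinking branch measure, which is exactly why approach 4 recommends business as usual.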
## The ongoing transformation of quantum field theory
22 29 December 2012 09:45AM
Quantum field theory (QFT) is the basic framework of particle physics. Particles arise from the quantized energy levels of field oscillations; Feynman diagrams are the simple tool for approximating their interactions. The "standard model", the success of which is capped by the recent observation of a Higgs boson lookalike, is a quantum field theory.
But just like everything mathematical, quantum field theory has hidden depths. For the past decade, new pictures of the quantum scattering process (in which particles come together, interact, and then fly apart) have incrementally been developed, and they presage a transformation in the understanding of what a QFT describes.
At the center of this evolution is "N=4 super-Yang-Mills theory", the maximally supersymmetric QFT in four dimensions. I want to emphasize that from a standard QFT perspective, this theory contains nothing but scalar particles (like the Higgs), spin-1/2 fermions (like electrons or quarks), and spin-1 "gauge fields" (like photons and gluons). The ingredients aren't something alien to real physics. What distinguishes an N=4 theory is that the particle spectrum and the interactions are arranged so as to produce a highly extended form of supersymmetry, in which particles have multiple partners (so many LWers should be comfortable with the notion).
In 1997, Juan Maldacena discovered that the N=4 theory is equivalent to a type of string theory in a particular higher-dimensional space. In 2003, Edward Witten discovered that it is also equivalent to a different type of string theory in a supersymmetric version of Roger Penrose's twistor space. Those insights didn't come from nowhere; they explained algebraic facts that had been known for many years, and they have led to a still-accumulating stockpile of discoveries about the properties of N=4 field theory.
What we can say is that the physical processes appearing in the theory can be understood as taking place in either of two dual space-time descriptions. Each space-time has its own version of a particular large symmetry, "superconformal symmetry", and the superconformal symmetry of one space-time is invisible in the other. And now it is becoming apparent that there is a third description, which does not involve space-time at all, in which both superconformal symmetries are manifest, but in which space-time locality and quantum unitarity are not "visible" - that is, they are not manifest in the equations that define the theory in this third picture.
I cannot provide an authoritative account of how the new picture works. But here is my impression. In the third picture, the scattering processes of the space-time picture become a complex of polytopes - higher-dimensional polyhedra, joined at their faces - and the quantum measure becomes the volume of these polyhedra. Where you previously had particles, you now just have the dimensions of the polytopes; and the fact that in general, an n-dimensional space doesn't have n special directions suggests to me that multi-particle entanglements can be something more fundamental than the separate particles that we resolve them into.
It will be especially interesting to see whether this polytope combinatorics, which can give back the scattering probabilities calculated with Feynman diagrams in the usual picture, can work solely with ordinary probabilities. That was Penrose's objective, almost fifty years ago, when he developed the theory of "spin networks" as a new language for the angular momentum calculations of quantum theory - a step towards the twistor variables now playing an essential role in these new developments. If the probability calculus of quantum mechanics can be obtained from conventional probability theory applied to these "structures" that may underlie familiar space-time, then that would mean that superposition does not need to be regarded as ontological.
I'm talking about this now because a group of researchers around Nima Arkani-Hamed, who are among the leaders in this area, released their first paper in a year this week. It's very new, and so arcane that, among physics bloggers, only Lubos Motl has talked about it.
This is still just one step in a journey. Not only does the paper focus on the N=4 theory - which is not the theory of the real world - but the results only apply to part of the N=4 theory, the so-called "planar" part, described by Feynman diagrams with a planar topology. (For an impressionistic glimpse of what might lie ahead, you could try this paper, whose author has been shouting from the wilderness for years that categorical knot theory is the missing piece of the puzzle.)
The N=4 theory is not reality, but the new perspective should generalize. Present-day calculations in QCD already employ truncated versions of the N=4 theory; and Arkani-Hamed et al specifically mention another supersymmetric field theory (known as ABJM after the initials of its authors), a deformation of which is holographically dual to a theory-of-everything candidate from 1983.
When it comes to seeing reality in this new way, we still only have, at best, a fruitful chaos of ideas and possibilities. But the solid results - the mathematical equivalences - will continue to pile up, and the end product really ought to be nothing less than a new conception of how physics works.
## If MWI is correct, should we expect to experience Quantum Torment?
3 10 November 2012 04:32AM
If the many worlds of the Many Worlds Interpretation of quantum mechanics are real, there's at least a good chance that Quantum Immortality is real as well: All conscious beings should expect to experience the next moment in at least one Everett branch even if they stop existing in all other branches, and the moment after that in at least one other branch, and so on forever.
However, the transition from life to death isn't usually a binary change. For most people it happens slowly as your brain and the rest of your body deteriorates, often painfully.
Doesn't it follow that each of us should expect to keep living in this state of constant degradation and suffering for a very, very long time, perhaps forever?
I don't know much about quantum mechanics, so I don't have anything to contribute to this discussion. I'm just terrified, and I'd like, not to be reassured by well-meaning lies, but to know the truth. How likely is it that Quantum Torment is real?
## Question on decoherence and virtual particles
0 14 September 2012 04:33AM
Doing some insomniac reading of the Quantum Sequence, I think that I've gotten a reasonable grasp of the principles of decoherence, non-interacting bundles of amplitude, etc. I then tried to put that knowledge to work by comparing it with my understanding of virtual particles (whose rate of creation in any area is essentially equivalent to the electromagnetic field), and I had a thought I can't seem to find mentioned elsewhere.
If I understand decoherence right, then quantum events which can't be differentiated from each other get summed together into the same blob of amplitude. Most virtual particles which appear and rapidly disappear do so in ways that can't be detected, let alone distinguished. This seems as if it could potentially imply that the extreme evenness of a vacuum might have to do more with the overall blob of amplitude of the vacuum being smeared out among all the equally-likely vacuum fluctuations, than it does directly with the evenness of the rate of vacuum fluctuations themselves. It also seems possible that there could be some clever way to test for an overall background smear of amplitude, though I'm not awake enough to figure one out just now. (My imagination has thrown out the phrase 'collapse of the vacuum state', but I'm betting that that's just unrelated quantum buzzword bingo.)
Does anything similar to what I've just described have any correlation with actual quantum theory, or will I awaken to discover all my points have been voted away due to this being complete and utter nonsense?
## Debugging the Quantum Physics Sequence
32 05 September 2012 03:55PM
This article should really be called "Patching the argumentative flaw in the Sequences created by the Quantum Physics Sequence".
There's only one big thing wrong with that Sequence: the central factual claim is wrong. I don't mean the claim that the Many Worlds interpretation is correct; I mean the claim that the Many Worlds interpretation is obviously correct. I don't agree with the ontological claim either, but I especially don't agree with the epistemological claim. It's a strawman which reduces the quantum debate to Everett versus Bohr - well, it's not really Bohr, since Bohr didn't believe wavefunctions were physical entities. Everett versus Collapse, then.
I've complained about this from the beginning, simply because I've also studied the topic and profoundly disagree with Eliezer's assessment. What I would like to see discussed on this occasion is not the physics, but rather how to patch the arguments in the Sequences that depend on this wrong sub-argument. To my eyes, this is a highly visible flaw, but it's not a deep one. It's a detail, a bug. Surely it affects nothing of substance.
However, before I proceed, I'd better back up my criticism. So: consider the existence of single-world retrocausal interpretations of quantum mechanics, such as John Cramer's transactional interpretation, which is descended from Wheeler-Feynman absorber theory. There are no superpositions, only causal chains running forward in time and backward in time. The calculus of complex-valued probability amplitudes is supposed to arise from this.
The existence of the retrocausal tradition already shows that the debate has been represented incorrectly; it should at least be Everett versus Bohr versus Cramer. I would also argue that when you look at the details, many-worlds has no discernible edge over single-world retrocausality:
• Relativity isn't an issue for the transactional interpretation: causality forwards and causality backwards are both local, it's the existence of loops in time which create the appearance of nonlocality.
• Retrocausal interpretations don't have an exact derivation of the Born rule, but neither does many-worlds.
• Many-worlds finds hope of such a derivation in a property of the quantum formalism: the resemblance of density matrix entries to probabilities. But single-world retrocausality finds such hope too: the Born probabilities can be obtained from the product of ψ with ψ*, its complex conjugate, and ψ* is the time reverse of ψ.
• Loops in time just fundamentally bug some people, but splitting worlds have the same effect on others.
I am not especially an advocate of retrocausal interpretations. They are among the possibilities; they deserve consideration and they get it. Retrocausality may or may not be an element of the real explanation of why quantum mechanics works. Progress towards the discovery of the truth requires exploration on many fronts, that's happening, we'll get there eventually. I have focused on retrocausal interpretations here just because they offer the clearest evidence that the big picture offered by the Sequence is wrong.
It's hopeless to suggest rewriting the Sequence, I don't think that would be a good use of anyone's time. But what I would like to have, is a clear idea of the role that "the winner is ... Many Worlds!" plays in the overall flow of argument, in the great meta-sequence that is Less Wrong's foundational text; and I would also like to have a clear idea of how to patch the argument, so that it routes around this flaw.
In the wiki, it states that "Cleaning up the old confusion about QM is used to introduce basic issues in rationality (such as the technical version of Occam's Razor), epistemology, reductionism, naturalism, and philosophy of science." So there we have it - a synopsis of the function that this Sequence is supposed to perform. Perhaps we need a working group that will identify each of the individual arguments, and come up with a substitute for each one.
## Utility functions and quantum mechanics
6 31 August 2012 03:41AM
Interpreting quantum mechanics throws an interesting wrench into utility calculation.
Utility functions, according to the interpretation typical in these parts, are a function of the state of the world, and an agent with consistent goals acts to maximize the expected value of their utility function. Within the many-worlds interpretation (MWI) of quantum mechanics (QM), things become interesting because "the state of the world" refers to a wavefunction which contains all possibilities, merely in differing amounts. With an inherently probabilistic interpretation of QM, flipping a quantum coin has to be treated linearly by our rational agent - that is, when calculating expected utility, they have to average the expected utilities from each half. But if flipping a quantum coin is just an operation on the state of the world, then you can use any function you want when calculating expected utility.
And all coins, when you get down to it, are quantum. At the extreme, this leads to the possible rationality of quantum suicide - since you're alive in the quantum state somewhere, just claim that your utility function non-linearly focuses on the part where you're alive.
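The quantum-suicide move can be made concrete with a toy comparison (all utilities and weights below are invented for illustration): a linear agent averages utility over branch weights, while an agent whose utility function "focuses" on the alive branch ignores the weights entirely.

```python
# Hedged illustration of the contrast described above; the numbers
# are made up, and "alive_focused_eu" is a deliberately pathological
# utility over the whole wavefunction, not a standard construction.

def linear_eu(branches):
    """Standard expected utility: weight-averaged over branches."""
    return sum(w * u for w, u in branches)

def alive_focused_eu(branches, alive_utility=100.0):
    """Cares only whether *some* positive-weight branch has positive
    utility - the 'I'm alive somewhere' move."""
    return alive_utility if any(w > 0 and u > 0 for w, u in branches) else 0.0

# Branches after a quantum coin flip: (weight, utility).
branches = [(0.5, 100.0),    # alive
            (0.5, -1000.0)]  # dead

print(linear_eu(branches))         # strongly negative: don't flip
print(alive_focused_eu(branches))  # positive: flip away
```

The whole question of the post is which of these two shapes of function a rational agent is permitted to have once "the state of the world" is a wavefunction.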
As you may have heard, there have been several papers in the quantum mechanics literature that claim to recover ordinary rules for calculating expected utility in MWI - how does that work?
Well, when they're not simply wrong (for example, by replacing a state labeled by the number a+b with the state |a> + |b>), they usually go about it with the Von Neumann-Morgenstern axioms, modified to refer to quantum mechanics:
1. Completeness: Every state can be compared to every other, preferencewise.
2. Transitivity: If you prefer |A> to |B> and |B> to |C>, you also prefer |A> to |C>.
3. Continuity: If you prefer |A> to |B> and |B> to |C>, there's some quantum-mechanical measure (note that this is a change from "probability") X such that you're indifferent between (1-X)|A> + X|C> and |B>.
4. Independence: If you prefer |A> to |B>, then you also prefer (1-X)|A> + X|C> to (1-X)|B> + X|C>, where |C> can be anything and X isn't 1.
In classical cases, these four axioms are easy to accept, and lead directly to utility functions with X as a probability. In quantum mechanical cases, the axioms are harder to accept, but the only measure available is indeed the ordinary amplitude-squared measure (this last fact features prominently in Everett's original paper). This gives you back the traditional rule for calculating expected utilities.
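As a small worked instance of the continuity axiom (3), with assumed utilities u(A)=1, u(B)=0.6, u(C)=0: the indifference point of (1-X)|A> + X|C> versus |B> lands at X = 0.4, and the superposition realising it has amplitudes √(1-X) and √X, so that the amplitude-squared measure plays the role X plays classically.

```python
# Sketch of axiom 3 with made-up utilities: solve for the measure X
# at which the agent is indifferent, then exhibit the normalised
# amplitudes of the corresponding superposition.
import math

u_A, u_B, u_C = 1.0, 0.6, 0.0
X = (u_A - u_B) / (u_A - u_C)        # solve (1-X)*u_A + X*u_C = u_B
amp_A, amp_C = math.sqrt(1 - X), math.sqrt(X)

print(X)                              # the indifference measure
print((1 - X) * u_A + X * u_C)        # recovers u_B
print(amp_A**2 + amp_C**2)            # amplitudes are normalised
```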
For an example of why these axioms are weird in quantum mechanics, consider the case of light. Linearly polarized light is actually the same thing as an equal superposition of right-handed and left-handed circularly polarized light. This has the interesting consequence that even when light is linearly polarized, if you shine it on atoms, those atoms will change their spins - they'll just change half right and half left. Or if you take circularly polarized light and shine it on a linear polarizer, half of it will go through. So anyhow, we can make axiom 4 read "If you are indifferent between left-polarized light and right-polarized light, then you must also be indifferent between linearly polarized light (i.e. left+right) and circularly polarized light (right+right)." But... can't a guy just want circularly polarized light?
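The optics fact used above can be checked numerically with Jones vectors (the basis conventions below are assumptions of the sketch): linear polarization is an equal superposition of the two circular polarizations, and a circular measurement on linear light comes out half-and-half.

```python
# Sketch with assumed Jones-vector conventions in the (x, y) basis:
# right-circular = (1, -i)/sqrt(2), left-circular = (1, +i)/sqrt(2).
import numpy as np

s = 1 / np.sqrt(2)
right = s * np.array([1.0, -1.0j])   # right-circular
left  = s * np.array([1.0, +1.0j])   # left-circular
linear_x = s * (right + left)        # equal superposition of the two

print(np.allclose(linear_x, [1.0, 0.0]))      # it is x-linear light
p_right = abs(np.vdot(right, linear_x)) ** 2  # Born probability of "right"
print(p_right)                                 # half goes through
```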
Under what sort of conditions does the independence axiom make intuitive sense? Ones where something more complicated than a photon is being considered. Something like you. If MWI is correct and you measure the polarization of linearly polarized light vs. circularly polarized light, this puts your brain in a superposition of linear vs. circular. But nobody says "boy, I really want a circularly polarized brain."
A key factor, as is often the case when talking about recovering classical behavior from quantum mechanics, is decoherence. If you carefully prepare your brain in a circularly polarized state, and you interact with an enormous random system (like by breathing air, or emitting thermal radiation), your carefully prepared brain-state is going to get shredded. It's a fascinating property of quantum mechanics that once you "leak" information to the outside, things are qualitatively different. If we have a pair of entangled particles and a classical phone line, I can send you an exact quantum state - it's called quantum teleportation, and it's sweet. But if one of our particles leaks even the tiniest bit, even if we just end up with three particles entangled instead of two, our ability to transmit quantum states is gone completely.
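The "leak even the tiniest bit" claim can be illustrated with a three-line density-matrix computation (a standard textbook calculation, sketched here in NumPy rather than taken from the post): entangle a Bell pair with one extra qubit and the pair's coherence - the off-diagonal density-matrix terms - vanishes entirely.

```python
# Toy sketch of decoherence by leakage: a Bell pair |00>+|11> has
# coherence between its branches; entangling one extra qubit (GHZ,
# |000>+|111>) and tracing it out leaves a classical mixture.
import numpy as np

s = 1 / np.sqrt(2)

bell = s * np.array([1, 0, 0, 1], dtype=complex)
rho_bell = np.outer(bell, bell.conj())
print(rho_bell[0, 3].real)        # coherence between |00> and |11>

ghz = s * np.array([1, 0, 0, 0, 0, 0, 0, 1], dtype=complex)
m = ghz.reshape(4, 2)             # split off the leaked third qubit
rho_pair = m @ m.conj().T         # partial trace over it
print(abs(rho_pair[0, 3]))        # coherence is gone
print(np.diag(rho_pair).real)     # diagonal only: a classical mixture
```

One extra entangled particle is enough to kill the off-diagonal terms, which is the formal content of "our ability to transmit quantum states is gone completely."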
In essence, the states we started with were "close together" in the space where quantum mechanics lives (Hilbert space), and so they could interact via quantum mechanics. Interacting with the outside even a little scattered our entangled particles farther apart.
Any virus, dust speck, or human being is constantly interacting with the outside world. States that are far enough apart to be perceptibly different to us aren't just "one parallel world away," as would make a good story - they are cracked wide open, spread out in the atmosphere as soon as you breathe it, spread by the Earth as soon as you push on it with your weight. If we were photons, we could easily connect with our "other selves" - if you try to change your polarization, whether you succeed or fail will depend on the orientation of your oppositely-polarized "other self"! But once you've interacted with the Earth, this quantum interference becomes negligible - so negligible that we can safely neglect it. When we make a plan, we don't worry that our nega-self might plan the opposite and we'll cancel each other out.
Does this sort of separation explain an approximate independence axiom, which is necessary for the usual rules for expected utility? Yes.
Because of decoherence, non-classical interactions are totally invisible to unaided primates, so it's expected that our morality neglects them. And if the states we are comparing are noticeably different, they're never going to interact, so independence is much more intuitive than in the case of a single photon. Taken together with the other axioms, which still make a lot of sense, this defines expected utility maximization with the Born rule.
So this is my take on utility functions in quantum mechanics - any living thing big enough to have a goal system will also be big enough to neglect interaction between noticeably different states, and thus make decisions as if the amplitude squared was a probability. With the help of technology, we can create systems where the independence axiom breaks down, but these systems are things like photons or small loops of superconducting wire, not humans.
## E.T. Jaynes and Hugh Everett - includes a previously unpublished review by Jaynes of a published short version of Everett's dissertation
11 02 July 2012 04:49AM
E.T. Jaynes had a brief exchange of correspondence with Hugh Everett in 1957. The exchange was initiated by Everett, who commented on recently published works by Jaynes. Jaynes responded to Everett's comments, and finally sent Everett a letter reviewing a short version of Everett's thesis published that year.
Jaynes' reaction was extremely positive at first: "It seems fair to say that your theory is the logical completion of quantum theory, in exactly the same sense that relativity was the logical completion of classical theory." High praise. But Jaynes swiftly follows up the praise with fundamental objections: "This is just the fundamental cause of Einstein's most serious objections to quantum theory, and it seems to me that the things that worried Einstein still cause trouble in your theory, but in an entirely new way." His letter goes on to detail his concerns, and to insist, with Bohm, that "Einstein's objections to quantum theory have never been satisfactorily answered."
The Collected Works of Everett has some narrative about their interaction:
Hugh Everett marginal notes on page from E. T. Jaynes' "Information Theory and Statistical Mechanics"
http://ucispace.lib.uci.edu/handle/10575/1140
Hugh Everett handwritten draft letter to E.T. Jaynes, 15-May-1957
http://ucispace.lib.uci.edu/handle/10575/1186
Hugh Everett letter to E. T. Jaynes, 11-June-1957
http://ucispace.lib.uci.edu/handle/10575/1124
E.T. Jaynes letter to Hugh Everett, 15-October-1957 - Never before published
Directory at Google site with all the links and docs above. Also links to the Washington University in St. Louis copyright form for this doc, Everett's thesis (long and short forms), and Jaynes' paper (the papers they were discussing in their correspondence). I hope to be adding the final letter in this exchange, Jaynes to Everett, 17-June-1957, within a couple of weeks, and maybe some documents from the Yahoo Group ETJaynesStudy as well.
For perspective on Jaynes' more recent thoughts on quantum theory:
Jaynes' paper on EPR and Bell's Theorem: http://bayes.wustl.edu/etj/articles/cmystery.pdf
Jaynes' speculations on quantum theory: http://bayes.wustl.edu/etj/articles/scattering.by.free.pdf
## Timeless Physics Question
-5 28 April 2012 08:33PM
Timeless physics is what you end up with if you take MWI, assume the universe is a standing wave, and remove the extraneous variables. From what I understand, for the most part, if you take a standing wave and add a time-reversed version, you end up with a standing wave that uses only real numbers. The problem with this is that the universe isn't quite time-symmetric.
If I ignore the fact that complex numbers were ever used in quantum physics, it seems unlikely that complex numbers are the correct solution. Is there another one? Should I be reversing charge and parity as well as time when I make the standing real-only wave?
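The "add a time-reversed version and the wave goes real" claim in the question can be checked numerically (the specific wave and frequency below are assumptions of the sketch): a standing wave f(x)·exp(-iωt) plus its t → -t copy is 2·f(x)·cos(ωt), which is purely real when f is real.

```python
# Numerical sketch of the questioner's claim, with a toy real
# spatial profile and an arbitrary time and frequency.
import numpy as np

x = np.linspace(0, np.pi, 50)
t, w = 0.7, 3.0
f = np.sin(x)                              # real spatial profile

psi = f * np.exp(-1j * w * t)              # complex standing wave
psi_reversed = f * np.exp(-1j * w * (-t))  # the same wave with t -> -t

total = psi + psi_reversed
print(np.allclose(total.imag, 0.0))                    # purely real
print(np.allclose(total.real, 2 * f * np.cos(w * t)))  # equals 2 f cos(wt)
```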
## Personal research update
4 29 January 2012 09:32AM
Synopsis: The brain is a quantum computer and the self is a tensor factor in it - or at least, the truth lies more in that direction than in the classical direction - and we won't get Friendly AI right unless we get the ontology of consciousness right.
Followed by: Does functionalism imply dualism?
Sixteen months ago, I made a post seeking funding for personal research. There was no separate Discussion forum then, and the post was comprehensively downvoted. I did manage to keep going at it, full-time, for the next sixteen months. Perhaps I'll get to continue; it's for the sake of that possibility that I'll risk another breach of etiquette. You never know who's reading these words and what resources they have. Also, there has been progress.
I think the best place to start is with what orthonormal said in response to the original post: "I don't think anyone should be funding a Penrose-esque qualia mysterian to study string theory." If I now took my full agenda to someone out in the real world, they might say: "I don't think it's worth funding a study of 'the ontological problem of consciousness in the context of Friendly AI'." That's my dilemma. The pure scientists who might be interested in basic conceptual progress are not engaged with the race towards technological singularity, and the apocalyptic AI activists gathered in this place are trying to fit consciousness into an ontology that doesn't have room for it. In the end, if I have to choose between working on conventional topics in Friendly AI, and on the ontology of quantum mind theories, then I have to choose the latter, because we need to get the ontology of consciousness right, and it's possible that a breakthrough could occur in the world outside the FAI-aware subculture and filter through; but as things stand, the truth about consciousness would never be discovered by employing the methods and assumptions that prevail inside the FAI subculture.
Perhaps I should pause to spell out why the nature of consciousness matters for Friendly AI. The reason is that the value system of a Friendly AI must make reference to certain states of conscious beings - e.g. "pain is bad" - so, in order to make correct judgments in real life, at a minimum it must be able to tell which entities are people and which are not. Is an AI a person? Is a digital copy of a human person, itself a person? Is a human body with a completely prosthetic brain still a person?
I see two ways in which people concerned with FAI hope to answer such questions. One is simply to arrive at the right computational, functionalist definition of personhood. That is, we assume the paradigm according to which the mind is a computational state machine inhabiting the brain, with states that are coarse-grainings (equivalence classes) of exact microphysical states. Another physical system which admits the same coarse-graining - which embodies the same state machine at some macroscopic level, even though the microscopic details of its causality are different - is said to embody another instance of the same mind.
An example of the other way to approach this question is the idea of simulating a group of consciousness theorists for 500 subjective years, until they arrive at a consensus on the nature of consciousness. I think it's rather unlikely that anyone will ever get to solve FAI-relevant problems in that way. The level of software and hardware power implied by the capacity to do reliable whole-brain simulations means you're already on the threshold of singularity: if you can simulate whole brains, you can simulate part brains, and you can also modify the parts, optimize them with genetic algorithms, and put them together into nonhuman AI. Uploads won't come first.
But the idea of explaining consciousness this way, by simulating Daniel Dennett and David Chalmers until they agree, is just a cartoon version of similar but more subtle methods. What these methods have in common is that they propose to outsource the problem to a computational process using input from cognitive neuroscience. Simulating a whole human being and asking it questions is an extreme example of this (the simulation is the "computational process", and the brain scan it uses as a model is the "input from cognitive neuroscience"). A more subtle method is to have your baby AI act as an artificial neuroscientist, use its streamlined general-purpose problem-solving algorithms to make a causal model of a generic human brain, and then to somehow extract from that, the criteria which the human brain uses to identify the correct scope of the concept "person". It's similar to the idea of extrapolated volition, except that we're just extrapolating concepts.
It might sound a lot simpler to just get human neuroscientists to solve these questions. Humans may be individually unreliable, but they have lots of cognitive tricks - heuristics - and they are capable of agreeing that something is verifiably true, once one of them does stumble on the truth. The main reason one would even consider the extra complication involved in figuring out how to turn a general-purpose seed AI into an artificial neuroscientist, capable of extracting the essence of the human decision-making cognitive architecture and then reflectively idealizing it according to its own inherent criteria, is shortage of time: one wishes to develop friendly AI before someone else inadvertently develops unfriendly AI. If we stumble into a situation where a powerful self-enhancing algorithm with arbitrary utility function has been discovered, it would be desirable to have, ready to go, a schema for the discovery of a friendly utility function via such computational outsourcing.
Now, jumping ahead to a later stage of the argument, I argue that it is extremely likely that distinctively quantum processes play a fundamental role in conscious cognition, because the model of thought as distributed classical computation actually leads to an outlandish sort of dualism. If we don't concern ourselves with the merits of my argument for the moment, and just ask whether an AI neuroscientist might somehow overlook the existence of this alleged secret ingredient of the mind, in the course of its studies, I do think it's possible. The obvious noninvasive way to form state-machine models of human brains is to repeatedly scan them at maximum resolution using fMRI, and to form state-machine models of the individual voxels on the basis of this data, and then to couple these voxel-models to produce a state-machine model of the whole brain. This is a modeling protocol which assumes that everything which matters is physically localized at the voxel scale or smaller. Essentially we are asking, is it possible to mistake a quantum computer for a classical computer by performing this sort of analysis? The answer is definitely yes if the analytic process intrinsically assumes that the object under study is a classical computer. If I try to fit a set of points with a line, there will always be a line of best fit, even if the fit is absolutely terrible. So yes, one really can describe a protocol for AI neuroscience which would be unable to discover that the brain is quantum in its workings, and which would even produce a specific classical model on the basis of which it could then attempt conceptual and volitional extrapolation.
Clearly you can try to circumvent comparably wrong outcomes, by adding reality checks and second opinions to your protocol for FAI development. At a more down to earth level, these exact mistakes could also be made by human neuroscientists, for the exact same reasons, so it's not as if we're talking about flaws peculiar to a hypothetical "automated neuroscientist". But I don't want to go on about this forever. I think I've made the point that wrong assumptions and lax verification can lead to FAI failure. The example of mistaking a quantum computer for a classical computer may even have a neat illustrative value. But is it plausible that the brain is actually quantum in any significant way? Even more incredibly, is there really a valid apriori argument against functionalism regarding consciousness - the identification of consciousness with a class of computational process?
I have previously posted (here) about the way that an abstracted conception of reality, coming from scientific theory, can motivate denial that some basic appearance corresponds to reality. A perennial example is time. I hope we all agree that there is such a thing as the appearance of time, the appearance of change, the appearance of time flowing... But on this very site, there are many people who believe that reality is actually timeless, and that all these appearances are only appearances; that reality is fundamentally static, but that some of its fixed moments contain an illusion of dynamism.
The case against functionalism with respect to conscious states is a little more subtle, because it's not being said that consciousness is an illusion; it's just being said that consciousness is some sort of property of computational states. I argue first that this requires dualism, at least with our current physical ontology, because conscious states are replete with constituents not present in physical ontology - for example, the "qualia", an exotic name for very straightforward realities like: the shade of green appearing in the banner of this site, the feeling of the wind on your skin, really every sensation or feeling you ever had. In a world made solely of quantum fields in space, there are no such things; there are just particles and arrangements of particles. The truth of this ought to be especially clear for color, but it applies equally to everything else.
In order that this post should not be overlong, I will not argue at length here for the proposition that functionalism implies dualism, but shall proceed to the second stage of the argument, which does not seem to have appeared even in the philosophy literature. If we are going to suppose that minds and their states correspond solely to combinations of mesoscopic information-processing events like chemical and electrical signals in the brain, then there must be a mapping from possible exact microphysical states of the brain, to the corresponding mental states. Supposing we have a mapping from mental states to coarse-grained computational states, we now need a further mapping from computational states to exact microphysical states. There will of course be borderline cases. Functional states are identified by their causal roles, and there will be microphysical states which do not stably and reliably produce one output behavior or the other.
Physicists are used to talking about thermodynamic quantities like pressure and temperature as if they have an independent reality, but objectively they are just nicely behaved averages. The fundamental reality consists of innumerable particles bouncing off each other; one does not need, and one has no evidence for, the existence of a separate entity, "pressure", which exists in parallel to the detailed microphysical reality. The idea is somewhat absurd.
Yet this is analogous to the picture implied by a computational philosophy of mind (such as functionalism) applied to an atomistic physical ontology. We do know that the entities which constitute consciousness - the perceptions, thoughts, memories... which make up an experience - actually exist, and I claim it is also clear that they do not exist in any standard physical ontology. So, unless we get a very different physical ontology, we must resort to dualism. The mental entities become, inescapably, a new category of beings, distinct from those in physics, but systematically correlated with them. Except that, if they are being correlated with coarse-grained neurocomputational states which do not have an exact microphysical definition, only a functional definition, then the mental part of the new combined ontology is fatally vague. It is impossible for fundamental reality to be objectively vague; vagueness is a property of a concept or a definition, a sign that it is incomplete or that it does not need to be exact. But reality itself is necessarily exact - it is something - and so functionalist dualism cannot be true unless the underdetermination of the psychophysical correspondence is replaced by something which says for all possible physical states, exactly what mental states (if any) should also exist. And that inherently runs against the functionalist approach to mind.
Very few people consider themselves functionalists and dualists. Most functionalists think of themselves as materialists, and materialism is a monism. What I have argued is that functionalism, the existence of consciousness, and the existence of microphysical details as the fundamental physical reality, together imply a peculiar form of dualism in which microphysical states which are borderline cases with respect to functional roles must all nonetheless be assigned to precisely one computational state or the other, even if no principle tells you how to perform such an assignment. The dualist will have to suppose that an exact but arbitrary border exists in state space, between the equivalence classes.
This - not just dualism, but a dualism that is necessarily arbitrary in its fine details - is too much for me. If you want to go all Occam-Kolmogorov-Solomonoff about it, you can say that the information needed to specify those boundaries in state space is so great as to render this whole class of theories of consciousness not worth considering. Fortunately there is an alternative.
Here, in addressing this audience, I may need to undo a little of what you may think you know about quantum mechanics. Of course, the local preference is for the Many Worlds interpretation, and we've had that discussion many times. One reason Many Worlds has a grip on the imagination is that it looks easy to imagine. Back when there was just one world, we thought of it as particles arranged in space; now we have many worlds, dizzying in their number and diversity, but each individual world still consists of just particles arranged in space. I'm sure that's how many people think of it.
Among physicists it will be different. Physicists will have some idea of what a wavefunction is, what an operator algebra of observables is, they may even know about path integrals and the various arcane constructions employed in quantum field theory. Possibly they will understand that the Copenhagen interpretation is not about consciousness collapsing an actually existing wavefunction; it is a positivistic rationale for focusing only on measurements and not worrying about what happens in between. And perhaps we can all agree that this is inadequate, as a final description of reality. What I want to say is that Many Worlds serves the same purpose in many physicists' minds, but is equally inadequate, though from the opposite direction. Copenhagen says the observables are real but goes misty about unmeasured reality. Many Worlds says the wavefunction is real, but goes misty about exactly how it connects to observed reality. My most frustrating discussions on this topic are with physicists who are happy to be vague about what a "world" is. It's really not so different to Copenhagen positivism, except that where Copenhagen says "we only ever see measurements, what's the problem?", Many Worlds says "I say there's an independent reality, what else is left to do?". It is very rare for a Many Worlds theorist to seek an exact idea of what a world is, as you see Robin Hanson and maybe Eliezer Yudkowsky doing; in that regard, reading the Sequences on this site will give you an unrepresentative idea of the interpretation's status.
One of the characteristic features of quantum mechanics is entanglement. But both Copenhagen, and a Many Worlds which ontologically privileges the position basis (arrangements of particles in space), still have atomistic ontologies of the sort which will produce the "arbitrary dualism" I just described. Why not seek a quantum ontology in which there are complex natural unities - fundamental objects which aren't simple - in the form of what we would presently call entangled states? That was the motivation for the quantum monadology described in my other really unpopular post. :-) [Edit: Go there for a discussion of "the mind as tensor factor", mentioned at the start of this post.] Instead of saying that physical reality is a series of transitions from one arrangement of particles to the next, say it's a series of transitions from one set of entangled states to the next. Quantum mechanics does not tell us which basis, if any, is ontologically preferred. Reality as a series of transitions between overall wavefunctions which are partly factorized and partly still entangled is a possible ontology; hopefully readers who really are quantum physicists will get the gist of what I'm talking about.
I'm going to double back here and revisit the topic of how the world seems to look. Hopefully we agree, not just that there is an appearance of time flowing, but also an appearance of a self. Here I want to argue just for the bare minimum - that a moment's conscious experience consists of a set of things, events, situations... which are simultaneously "present to" or "in the awareness of" something - a conscious being - you. I'll argue for this because even this bare minimum is not acknowledged by existing materialist attempts to explain consciousness. I was recently directed to this brief talk about the idea that there's no "real you". We are given a picture of a graph whose nodes are memories, dispositions, etc., and we are told that the self is like that graph: nodes can be added, nodes can be removed, it's a purely relational composite without any persistent part. What's missing in that description is that bare minimum notion of a perceiving self. Conscious experience consists of a subject perceiving objects in certain aspects. Philosophers have discussed for centuries how best to characterize the details of this phenomenological ontology; I think the best was Edmund Husserl, and I expect his work to be extremely important in interpreting consciousness in terms of a new physical ontology. But if you can't even notice that there's an observer there, observing all those parts, then you won't get very far.
My favorite slogan for this is due to the other Jaynes, Julian Jaynes. I don't endorse his theory of consciousness at all; but while in a daydream he once said to himself, "Include the knower in the known". That sums it up perfectly. We know there is a "knower", an experiencing subject. We know this, just as well as we know that reality exists and that time passes. The adoption of ontologies in which these aspects of reality are regarded as unreal, as appearances as only, may be motivated by science, but it's false to the most basic facts there are, and one should show a little more imagination about what science will say when it's more advanced.
I think I've said almost all of this before. The high point of the argument is that we should look for a physical ontology in which a self exists and is a natural yet complex unity, rather than a vaguely bounded conglomerate of distinct information-processing events, because the latter leads to one of those unacceptably arbitrary dualisms. If we can find a physical ontology in which the conscious self can be identified directly with a class of object posited by the theory, we can even get away from dualism, because physical theories are mathematical and formal and make few commitments about the "inherent qualities" of things, just about their causal interactions. If we can find a physical object which is absolutely isomorphic to a conscious self, then we can turn the isomorphism into an identity, and the dualism goes away. We can't do that with a functionalist theory of consciousness, because it's a many-to-one mapping between physical and mental, not an isomorphism.
So, I've said it all before; what's new? What have I accomplished during these last sixteen months? Mostly, I learned a lot of physics. I did not originally intend to get into the details of particle physics - I thought I'd just study the ontology of, say, string theory, and then use that to think about the problem. But one thing led to another, and in particular I made progress by taking ideas that were slightly on the fringe, and trying to embed them within an orthodox framework. It was a great way to learn, and some of those fringe ideas may even turn out to be correct. It's now abundantly clear to me that I really could become a career physicist, working specifically on fundamental theory. I might even have to do that, it may be the best option for a day job. But what it means for the investigations detailed in this essay, is that I don't need to skip over any details of the fundamental physics. I'll be concerned with many-body interactions of biopolymer electrons in vivo, not particles in a collider, but an electron is still an electron, an elementary particle, and if I hope to identify the conscious state of the quantum self with certain special states from a many-electron Hilbert space, I should want to understand that Hilbert space in the deepest way available.
My only peer-reviewed publication, from many years ago, picked out pathways in the microtubule which, we speculated, might be suitable for mobile electrons. I had nothing to do with noticing those pathways; my contribution was the speculation about what sort of physical processes such pathways might underpin. Something I did notice, but never wrote about, was the unusual similarity (so I thought) between the microtubule's structure, and a model of quantum computation due to the topologist Michael Freedman: a hexagonal lattice of qubits, in which entanglement is protected against decoherence by being encoded in topological degrees of freedom. It seems clear that performing an ontological analysis of a topologically protected coherent quantum system, in the context of some comprehensive ontology ("interpretation") of quantum mechanics, is a good idea. I'm not claiming to know, by the way, that the microtubule is the locus of quantum consciousness; there are a number of possibilities; but the microtubule has been studied for many years now and there's a big literature of models... a few of which might even have biophysical plausibility.
As for the interpretation of quantum mechanics itself, these developments are highly technical, but revolutionary. A well-known, well-studied quantum field theory turns out to have a bizarre new nonlocal formulation in which collections of particles seem to be replaced by polytopes in twistor space. Methods pioneered via purely mathematical studies of this theory are already being used for real-world calculations in QCD (the theory of quarks and gluons), and I expect this new ontology of "reality as a complex of twistor polytopes" to carry across as well. I don't know which quantum interpretation will win the battle now, but this is new information, of utterly fundamental significance. It is precisely the sort of altered holistic viewpoint that I was groping towards when I spoke about quantum monads constituted by entanglement. So I think things are looking good, just on the pure physics side. The real job remains to show that there's such a thing as quantum neurobiology, and to connect it to something like Husserlian transcendental phenomenology of the self via the new quantum formalism.
It's when we reach a level of understanding like that, that we will truly be ready to tackle the relationship between consciousness and the new world of intelligent autonomous computation. I don't deny the enormous helpfulness of the computational perspective in understanding unconscious "thought" and information processing. And even conscious states are still states, so you can surely make a state-machine model of the causality of a conscious being. It's just that the reality of how consciousness, computation, and fundamental ontology are connected, is bound to be a whole lot deeper than just a stack of virtual machines in the brain. We will have to fight our way to a new perspective which subsumes and transcends the computational picture of reality as a set of causally coupled black-box state machines. It should still be possible to "port" most of the thinking about Friendly AI to this new ontology; but the differences, what's new, are liable to be crucial to success. Fortunately, it seems that new perspectives are still possible; we haven't reached Kantian cognitive closure, with no more ontological progress open to us. On the contrary, there are still lines of investigation that we've hardly begun to follow.
## Problems of the Deutsch-Wallace version of Many Worlds
4 16 December 2011 06:55AM
The subject has already been raised in this thread, but in a clumsy fashion. So here is a fresh new thread, where we can discuss, calmly and objectively, the pros and cons of the "Oxford" version of the Many Worlds interpretation of quantum mechanics.
This version of MWI is distinguished by two propositions. First, there is no definite number of "worlds" or "branches". They have a fuzzy, vague, approximate, definition-dependent existence. Second, the probability law of quantum mechanics (the Born rule) is to be obtained, not by counting the frequencies of events in the multiverse, but by an analysis of rational behavior in the multiverse. Normally, a prescription for rational behavior is obtained by maximizing expected utility, a quantity which is calculated by averaging "probability x utility" for each possible outcome of an action. In the Oxford school's "decision-theoretic" derivation of the Born rule, we somehow start with a ranking of actions that is deemed rational, then we "divide out" by the utilities, and obtain probabilities that were implicit in the original ranking.
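As a toy sketch of that "dividing out" move (hypothetical numbers, and a deliberately circular setup): suppose the agent's rationality ranking is already summarized by an expected-utility functional over bets, with unknown weights. Evaluating it on indicator bets - utility 1 on one outcome, 0 on the rest - makes the implicit probabilities fall straight out. Of course, this only works because the expected-utility form was assumed from the start, which is exactly what makes such derivations contentious.

```python
outcomes = ["up", "down"]
hidden_p = {"up": 0.64, "down": 0.36}  # not directly observable, by hypothesis

def value(bet):
    """The agent's ranking functional over bets, assumed to have
    expected-utility form: V(bet) = sum_i p_i * utility_i."""
    return sum(hidden_p[o] * bet.get(o, 0.0) for o in outcomes)

# "Divide out the utilities": indicator bets recover the probabilities.
recovered = {o: value({o: 1.0}) for o in outcomes}
print(recovered)  # {'up': 0.64, 'down': 0.36}
```

The real Deutsch-Wallace argument is far more elaborate, but the structural question is the same: can the expected-utility form of the ranking be justified without already presupposing probabilities?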
I reject the two propositions. "Worlds" or "branches" can't be vague if they are to correspond to observed reality, because vagueness results from an object being dependent on observer definition, and the local portion of reality does not owe its existence to how we define anything; and the upside-down decision-theoretic derivation, if it ever works, must implicitly smuggle in the premises of probability theory in order to obtain its original rationality ranking.
Some references:
"Decoherence and Ontology: or, How I Learned to Stop Worrying and Love FAPP" by David Wallace. In this paper, Wallace says, for example, that the question "how many branches are there?" "does not... make sense", that the question "how many branches are there in which it is sunny?" is "a question which has no answer", "it is a non-question to ask how many [worlds]", etc.
"Quantum Probability from Decision Theory?" by Barnum et al. This is a rebuttal of the original argument (due to David Deutsch) that the Born rule can be justified by an analysis of multiverse rationality.
## A case study in fooling oneself
-2 15 December 2011 05:25AM
Note: This post assumes that the Oxford version of Many Worlds is wrong, and speculates as to why this isn't obvious. For a discussion of the hypothesis itself, see Problems of the Deutsch-Wallace version of Many Worlds.
smk asks how many worlds are produced in a quantum process where the outcomes have unequal probabilities; Emile says there's no exact answer, just like there's no exact answer for how many ink blots are in the messy picture; Tetronian says this analogy is a great way to demonstrate what a "wrong question" is; Emile has (at this writing) 9 upvotes, and Tetronian has 7.
My thesis is that Emile has instead provided an example of how to dismiss a question and thereby fool oneself; Tetronian provides an example of treating an epistemically destructive technique of dismissal as epistemically virtuous and fruitful; and the upvotes show that this isn't just their problem. [edit: Emile and Tetronian respond.]
I am as tired as anyone of the debate over Many Worlds. I don't expect the general climate of opinion on this site to change except as a result of new intellectual developments in the larger world of physics and philosophy of physics, which is where the question will be decided anyway. But the mission of Less Wrong is supposed to be the refinement of rationality, and so perhaps this "case study" is of interest, not just as another opportunity to argue over the interpretation of quantum mechanics, but as an opportunity to dissect a little bit of irrationality that is not only playing out here and now, but which evidently has a base of support.
The question is not just, what's wrong with the argument, but also, how did it get that base of support? How was a situation created where one person says something irrational (or foolish, or however the problem is best understood), and a lot of other people nod in agreement and say, that's an excellent example of how to think?
On this occasion, my quarrel is not with the Many Worlds interpretation as such; it is with the version of Many Worlds which says there's no actual number of worlds. Elsewhere in the thread, someone says there are uncountably many worlds, and someone else says there are two worlds. At least those are meaningful answers (although the advocate of "two worlds" as the answer then goes on to say that one world is "stronger" than the other, which is meaningless).
But the proposition that there is no definite number of worlds, is as foolish and self-contradictory as any of those other contortions from the history of thought that rationalists and advocates of common sense like to mock or boggle at. At times I have wondered how to place Less Wrong in the history of thought; well, this is one way to do it - it can have its own chapter in the history of intellectual folly; it can be known by its mistakes.
Then again, this "mistake" is not original to Less Wrong. It appears to be one of the defining ideas of the Oxford-based approach to Many Worlds associated with David Deutsch and David Wallace; the other defining idea being the proposal to derive probabilities from rationality, rather than vice versa. (I refer to the attempt to derive the Born rule from arguments about how to behave rationally in the multiverse.) The Oxford version of MWI seems to be very popular among thoughtful non-physicist advocates of MWI - even though I would regard both its defining ideas as nonsense - and it may be that its ideas get a pass here, partly because of their social status. That is, an important faction of LW opinion believes that Many Worlds is the explanation of quantum mechanics, and the Oxford school of MWI has high status and high visibility within the world of MWI advocacy, and so its ideas will receive approbation without much examination or even much understanding, because of the social and psychological mechanisms which incline people to agree with, defend, and laud their favorite authorities, even if they don't really understand what these authorities are saying or why they are saying it.
However, it is undoubtedly the case that many of the LW readers who believe there's no definite number of worlds, believe this because the idea genuinely makes sense to them. They aren't just stringing together words whose meaning isn't known, like a Taliban who recites the Quran without knowing a word of Arabic; they've actually thought about this themselves; they have gone through some subjective process as a result of which they have consciously adopted this opinion. So from the perspective of analyzing how it is that people come to hold absurd-sounding views, this should be good news. It means that we're dealing with a genuine failure to reason properly, as opposed to a simple matter of reciting slogans or affirming allegiance to a view on the basis of something other than thought.
At a guess, the thought process involved is very simple. These people have thought about the wavefunctions that appear in quantum mechanics, at whatever level of technical detail they can muster; they have decided that the components or substructures of these wavefunctions which might be identified as "worlds" or "branches" are clearly approximate entities whose definition is somewhat arbitrary or subject to convention; and so they have concluded that there's no definite number of worlds in the wavefunction. And the failure in their thinking occurs when they don't take the next step and say, is this at all consistent with reality? That is, if a quantum world is something whose existence is fuzzy and which doesn't even have a definite multiplicity - that is, we can't even say if there's one, two, or many of them - if those are the properties of a quantum world, then is it possible for the real world to be one of those? It's the failure to ask that last question, and really think about it, which must be the oversight allowing the nonsense-doctrine of "no definite number of worlds" to gain a foothold in the minds of otherwise rational people.
If this diagnosis is correct, then at some level it's a case of "treating the map as the territory" syndrome. A particular conception of the quantum-mechanical wavefunction is providing the "map" of reality, and the individual thinker is perhaps making correct statements about what's on their map, but they are failing to check the properties of the map against the properties of the territory. In this case, the property of reality that falsifies the map is, the fact that it definitely exists, or perhaps the corollary of that fact, that something which definitely exists definitely exists at least once, and therefore exists with a definite, objective multiplicity.
Trying to go further in the diagnosis, I can identify a few cognitive tendencies which may be contributing. First is the phenomenon of bundled assumptions which have never been made distinct and questioned separately. I suppose that in a few people's heads, there's a rapid movement from "science (or materialism) is correct" to "quantum mechanics is correct" to "Many Worlds is correct" to "the Oxford school of MWI is correct". If you are used to encountering all of those ideas together, it may take a while to realize that they are not linked out of logical necessity, but just contingently, by the narrowness of your own experience.
Second, it may seem that "no definite number of worlds" makes sense to an individual, because when they test their own worldview for semantic coherence, logical consistency, or empirical adequacy, it seems to pass. In the case of "no-collapse" or "no-splitting" versions of Many Worlds, it seems that it often passes the subjective making-sense test, because the individual is actually relying on ingredients borrowed from the Copenhagen interpretation. A semi-technical example would be the coefficients of a reduced density matrix. In the Copenhagen interpretation, they are probabilities. Because they have the mathematical attributes of probabilities (by this I just mean that they lie between 0 and 1), and because they can be obtained by strictly mathematical manipulations of the quantities composing the wavefunction, Many Worlds advocates tend to treat these quantities as inherently being probabilities, and use their "existence" as a way to obtain the Born probability rule from the ontology of "wavefunction yes, wavefunction collapse no". But the fact that something is a real number between 0 and 1 doesn't yet explain how it manages to be a probability. In particular, I would maintain that if you have a multiverse theory, in which all possibilities are actual, then a probability must refer to a frequency. The probability of an event in the multiverse is simply how often it occurs in the multiverse. And clearly, just having the number 0.5 associated with a particular multiverse branch is not yet the same thing as showing that the events in that branch occur half the time.
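A minimal numerical sketch of that semi-technical example, for a Bell pair (illustrative only): the reduced density matrix of one qubit is obtained by purely linear-algebraic manipulation of the wavefunction, and its diagonal entries are real numbers in [0, 1] summing to 1 - mathematically probability-like, whatever one takes that to mean ontologically.

```python
import numpy as np

# Bell state (|00> + |11>) / sqrt(2) for two qubits.
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)

# Full density matrix, reshaped to indices (a, b, a', b'), then
# trace out the second qubit (sum over b = b').
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_reduced = np.trace(rho, axis1=1, axis2=3)

# Diagonal entries: real, in [0, 1], summing to 1.
print(np.real(np.diag(rho_reduced)))  # [0.5 0.5]
```

Nothing in this calculation, by itself, tells you that the branch tagged with 0.5 "occurs half the time"; the numbers acquire that meaning only under an interpretive assumption.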
I don't have a good name for this phenomenon, but we could call it "borrowed support", in which a belief system receives support from considerations which aren't legitimately its own to claim. (Ayn Rand apparently talked about a similar notion of "borrowed concepts".)
Third, there is a possibility among people who have a capacity for highly abstract thought, to adopt an ideology, ontology, or "theory of everything" which is only expressed in those abstract terms, and to then treat that theory as the whole of reality, in a way that reifies the abstractions. This is a highly specific form of treating the map as the territory, peculiar to abstract thinkers. When someone says that reality is made of numbers, or made of computations, this is at work. In the case at hand, we're talking about a theory of physics, but the ontology of that theory is incompatible with the definiteness of one's own existence. My guess is that the main psychological factor at work here is intoxication with the feeling that one understands reality totally and in its essence. The universe has bowed to the imperial ego; one may not literally direct the stars in their courses, but one has known the essence of things. Combine that intoxication, with "borrowed support" and with the simple failure to think hard enough about where on the map the imperial ego itself might be located, and maybe you have a comprehensive explanation of how people manage to believe theories of reality which are flatly inconsistent with the most basic features of subjective experience.
I should also say something about Emile's example of the ink blots. I find it rather superficial to just say "there's no definite number of blots". To say that the number of blots depends on definition is a lot closer to being true, but that undermines the argument, because that opens the possibility that there is a right definition of "world", and many wrong definitions, and that the true number of worlds is just the number of worlds according to the right definition.
Emile's picture can be used for the opposite purpose. All we have to do is to scrutinize, more closely, what it actually is. It's a JPEG that is 314 pixels by 410 pixels in size. Each of those pixels will have an exact color coding. So clearly we can be entirely objective in the way we approach this question; all we have to do is be precise in our concepts, and engage with the genuine details of the object under discussion. Presumably the image is a scan of a physical object, but even in that case, we can be precise - it's made of atoms, they are particular atoms, we can make objective distinctions on the basis of contiguity and bonding between these atoms, and so the question will have an objective answer, if we bother to be sufficiently precise. The same goes for "worlds" or "branches" in a wavefunction. And the truly pernicious thing about this version of Many Worlds is that it prevents such inquiry. The ideology that tolerates vagueness about worlds serves to protect the proposed ontology from necessary scrutiny.
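To make the contrast concrete, here is a small sketch with a made-up grid standing in for the ink blots: different precise definitions of "one blot" (say, 4-connected versus 8-connected ink pixels) give different counts, but once any one definition is fixed, the count is exact and objective.

```python
# Hypothetical 5x5 "ink" grid: 1 = ink, 0 = paper. The two diagonal
# contacts make 4- and 8-connectivity disagree about the blot count.
grid = [
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 1],
    [0, 1, 0, 1, 1],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0],
]

def count_blots(grid, neighbours):
    """Count connected components of ink cells under a given adjacency."""
    seen = set()

    def flood(r, c):
        stack = [(r, c)]
        while stack:
            r, c = stack.pop()
            if (r, c) in seen:
                continue
            seen.add((r, c))
            for dr, dc in neighbours:
                nr, nc = r + dr, c + dc
                if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                        and grid[nr][nc] and (nr, nc) not in seen:
                    stack.append((nr, nc))

    count = 0
    for r, row in enumerate(grid):
        for c, cell in enumerate(row):
            if cell and (r, c) not in seen:
                count += 1
                flood(r, c)
    return count

FOUR = [(-1, 0), (1, 0), (0, -1), (0, 1)]
EIGHT = FOUR + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

print(count_blots(grid, FOUR), count_blots(grid, EIGHT))  # 3 2
```

"How many blots?" is definition-dependent, but not answerless: each sufficiently precise definition yields one exact number. The claim above is that "how many worlds?" should be treated the same way.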
The same may be said, on a broader scale, of the practice of "dissolving a wrong question". That is a gambit which should be used sparingly and cautiously, because it easily serves to instead justify the dismissal of a legitimate question. A community trained to dismiss questions may never even notice the gaping holes in its belief system, because the lines of inquiry which lead towards those holes are already dismissed as invalid, undefined, unnecessary. smk came to this topic fresh, and without a head cluttered with ideas about what questions are legitimate and what questions are illegitimate, and as a result managed to ask something which more knowledgeable people had already prematurely dismissed from their own minds.
## How Many Worlds?
2 14 December 2011 02:51PM
How many universes "branch off" from a "quantum event", and in how many of them is the cat dead vs alive, and what about non-50/50 scenarios, and please answer so that a physics dummy can maybe kind of understand?
(Is it just 1 with the live cat and 1 with the dead one?)
## [LINK] New experiment observes macroscopic quantum entanglement
5 02 December 2011 04:18AM
## [Link] New paper: "The quantum state cannot be interpreted statistically"
9 18 November 2011 06:13PM
From a recent paper that is getting non-trivial attention...
"Quantum states are the key mathematical objects in quantum theory. It is therefore surprising that physicists have been unable to agree on what a quantum state represents. There are at least two opposing schools of thought, each almost as old as quantum theory itself. One is that a pure state is a physical property of system, much like position and momentum in classical mechanics. Another is that even a pure state has only a statistical significance, akin to a probability distribution in statistical mechanics. Here we show that, given only very mild assumptions, the statistical interpretation of the quantum state is inconsistent with the predictions of quantum theory. This result holds even in the presence of small amounts of experimental noise, and is therefore amenable to experimental test using present or near-future technology. If the predictions of quantum theory are confirmed, such a test would show that distinct quantum states must correspond to physically distinct states of reality."
From my understanding, the result works by showing how, if a quantum state is determined only statistically by some true physical state of the universe, then it is possible for us to construct clever quantum measurements that put statistical probability on outcomes for which there is literally zero quantum amplitude, which is a contradiction of Born's rule. The assumptions required are very mild, and if this is confirmed in experiment it would give a lot of justification for a physicalist / realist interpretation of the Many Worlds point of view.
More from the paper:
"We conclude by outlining some consequences of the result. First, one motivation for the statistical view is the obvious parallel between the quantum process of instantaneous wave function collapse, and the (entirely non-mysterious) classical procedure of updating a probability distribution when new information is acquired. If the quantum state is a physical property of a system -- as it must be if one accepts the assumptions above -- then the quantum collapse must correspond to a real physical process. This is especially mysterious when two entangled systems are at separate locations, and measurement of one leads to an instantaneous collapse of the quantum state of the other.
In some versions of quantum theory, on the other hand, there is no collapse of the quantum state. In this case, after a measurement takes place, the joint quantum state of the system and measuring apparatus will contain a component corresponding to each possible macroscopic measurement outcome. This is unproblematic if the quantum state merely reflects a lack of information about which outcome occurred. But if the quantum state is a physical property of the system and apparatus, it is hard to avoid the conclusion that each macroscopically different component has a direct counterpart in reality.
On a related, but more abstract note, the quantum state has the striking property of being an exponentially complicated object. Specifically, the number of real parameters needed to specify a quantum state is exponential in the number of systems n. This has a consequence for classical simulation of quantum systems. If a simulation is constrained by our assumptions -- that is, if it must store in memory a state for a quantum system, with independent preparations assigned uncorrelated states -- then it will need an amount of memory which is exponential in the number of quantum systems.
For these reasons and others, many will continue to hold that the quantum state is not a real object. We have shown that this is only possible if one or more of the assumptions above is dropped. More radical approaches[14] are careful to avoid associating quantum systems with any physical properties at all. The alternative is to seek physically well motivated reasons why the other two assumptions might fail."
On a related note, in one of David Deutsch's original arguments for why Many Worlds was straightforwardly obvious from quantum theory, he mentions Shor's quantum factoring algorithm. Essentially he asks any opponent of Many Worlds to give a real account, not just a parochial calculational account, of why the algorithm works when it is using exponentially more resources than could possibly be classically available to it. The way he put it was: "where was the number factored?"
I was never convinced that regular quantum computation could really be used to convince someone of Many Worlds who did not already believe it, except possibly for bounded-error quantum computation where one must accept the fact that there are different worlds to find one's self in after the computation, namely some of the worlds where the computation had an error due to the algorithm itself (or else one must explain the measurement problem in some different way as per usual). But I think that in light of the paper mentioned above, Deutsch's "where was the number factored" argument may deserve more credence.
Added: Scott Aaronson discusses the paper here (the comments are also interesting).
## That cat: not dead and alive
3 30 August 2011 08:23AM
I've read through the Quantum Physics sequence and feel that I managed to understand most of it. But now it seems to me that the Double Slit and Schrodinger's cat experiments are not described quite correctly. So I'd like to try to re-state them and see if anybody can correct any misunderstandings I likely have.
With the Double Slit experiment we usually hear it said the particle travels through both slits and then we see interference bands. The more precise explanation is that there is a complex-valued amplitude flow corresponding to the particle moving through the left slit and another for the right slit. But if we could manage to magically "freeze time" then we would find ourselves in one position in configuration space where the particle is unambiguously in one position (let's say the left slit). Now any observer will have no way of knowing this at the time, and if they did detect the particle's position in any way it would change the configuration and there would be no interference banding.
But the particle really is going through the left slit right now (as far as we are concerned), simply because that is what it means to be at some point in configuration space. The particle is going through the right slit for other versions of ourselves nearby in configuration space.
The amplitude flow then continues to the point in configuration space where it arrives at the back screen, and it is joined by the amplitude flow via the right slit to the same region of configuration space, causing an interference pattern. So this present moment in time now has more than one past, now we can genuinely say that it did go through both. Both pasts are equally valid. The branching tree of amplitude flow has turned into a graph.
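The two-path amplitude sum just described can be sketched numerically. In this illustration (geometry and wavelength are made-up values, not from the post), each slit contributes a complex amplitude at the screen; summing them before squaring gives the bands, while "detecting the particle's position" corresponds to adding the squared magnitudes instead:

```python
import numpy as np

# Sketch of the two-path picture above: the amplitude at each screen point
# is the SUM of the amplitude that flowed via the left slit and the
# amplitude via the right slit.  All numbers here are illustrative.
wavelength = 1.0
slit_separation = 5.0
screen_distance = 100.0
x = np.linspace(-30, 30, 7)  # positions on the back screen

# Exact path length from each slit (at +/- separation/2) to screen point x
d_left = np.hypot(screen_distance, x + slit_separation / 2)
d_right = np.hypot(screen_distance, x - slit_separation / 2)

# Complex amplitude contributed by each path
amp_left = np.exp(2j * np.pi * d_left / wavelength)
amp_right = np.exp(2j * np.pi * d_right / wavelength)

# Detecting which slit kills the cross term: probabilities add instead
intensity_coherent = np.abs(amp_left + amp_right) ** 2
intensity_which_path = np.abs(amp_left) ** 2 + np.abs(amp_right) ** 2

print(intensity_coherent)    # varies between ~0 and ~4: interference bands
print(intensity_which_path)  # constant 2 everywhere: no banding
```

The cross term between the two pasts is exactly the "amplitude contribution from multiple pasts" the post is pointing at.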
So far so good I hope (or perhaps I'm about to find out I'm completely wrong). Now for the cat.
I read recently that experimenters have managed to keep two clouds of caesium atoms in a coherent state for an hour. So what would this look like if we could scale it up to a cat?
The problem with this experiment is that a cat is a very complex system and the two particular types of states we are interested in (i.e. dead or alive) are very far apart in configuration space. It may help to imagine that we could rearrange configuration space a little to put all the points labelled "alive" on the left and all the dead points on the right of some line. If we want to make the gross simplification that we can treat the cat as a very simple system then this means that the "alive" points are very close to the "dead" points in configuration space. In particular it means that there are significant amplitude flows between the two sets of points, that is, significant flows across the line in both directions. Of course such flows happen all the time, but the key point here is that the directions of the complex flow vectors would be aligned so as to cause a significant change in the magnitude of the final values in configuration space instead of tending to cancel out.
This means that as time proceeds the cat can move from alive to dead to alive to dead again, in the sense that in any point of configuration space that we find ourselves will contain an amplitude contribution both from alive states and from dead states. In other words two different pasts are contributing to the present.
So sometime after the experiment starts we magically stop the clock on the wall of the universe. Since we are at a particular point the cat is either alive or dead, let's say dead. So the cat is not alive and dead at the same time because we find ourselves at a single point in configuration space. There are also other points in the configuration space containing another instance of ourselves along with an alive cat. But since we have not entangled anything else in the universe with the cat/box system as time ticks along the cat would be buzzing around from dead to alive and back to dead again. When we open the box things entangle and we diverge far apart in configuration space, and now the cat remains completely dead or alive, at least for the point in configuration space we find ourselves in.
How to sum up? Cats and photons are never dead or alive or going left or right at the same moment from the point of view of one observer somewhere in configuration space, but the present has an amplitude contribution from multiple pasts.
If you're still reading this then thanks for hanging in there. I know there's some more detail about observations only being from a set of eigenvalues and so forth, but can I get some comments about whether I'm on the right track or way off base?
## Schroedinger's cat is always dead
-14 26 August 2011 05:58PM
Suppose you believe in the Copenhagen interpretation of quantum mechanics. Schroedinger puts his cat in a box, with a device that has a 50% chance of releasing a deathly poisonous gas. He will then open the box, and observe a live or dead cat, collapsing that waveform.
But Schroedinger's cat is lazy, and spends most of its time sleeping. Schroedinger is a pessimist, or else an optimist who hates cats; and so he mistakes a sleeping cat for a dead cat with probability P(M) > 0, but never mistakes a dead cat for a living cat.
So if the cat is dead with probability P(D) >= .5, Schroedinger observes a dead cat with probability P(D) + P(M)(1-P(D)).
If observing a dead cat causes the waveform to collapse such that the cat is dead, then P(D) = P(D) + P(M)(1-P(D)). This is possible only if P(D) = 1.
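The fixed-point claim can be checked numerically. A minimal sketch (P(M) = 0.1 is an illustrative mistake probability, not from the post; any P(M) > 0 behaves the same way):

```python
# Iterate the update P(D) <- P(D) + P(M) * (1 - P(D)) from the post:
# each observation-plus-collapse pushes the probability of death upward.
p_mistake = 0.1  # illustrative P(M): chance of mistaking sleeping for dead
p_dead = 0.5     # initial P(D)

for _ in range(1000):
    p_dead = p_dead + p_mistake * (1 - p_dead)

print(p_dead)  # converges to 1: the only fixed point when P(M) > 0
```

Each step shrinks the gap (1 - P(D)) by a factor of (1 - P(M)), so the only self-consistent value is P(D) = 1, which is the post's reductio.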
## Does quantum mechanics make simulations negligible?
0 13 August 2011 01:53AM
I've written a prior post about how I think that the Everett branching factor of reality dominates that of any plausible simulation, whether the latter is run on a Von Neumann machine, on a quantum machine, or on some hybrid; and thus the probability and utility weight that should be assigned to simulations in general is negligible. I also argued that the fact that we live in an apparently quantum-branching world could be construed as weak anthropic evidence for this idea. My prior post was down-modded into oblivion for reasons that are not relevant here (style, etc.) If I were to replace this text you're reading with a version of that idea which was more fully-argued, but still stylistically-neutral (unlike my prior post), would people be interested?
## Polarized gamma rays and manifest infinity
16 30 July 2011 06:56AM
Most people (not all, but most) are reasonably comfortable with infinity as an ultimate (lack of) limit. For example, cosmological theories that suggest the universe is infinitely large and/or infinitely old, are not strongly disbelieved a priori.
By contrast, most people are fairly uncomfortable with manifest infinity, actual infinite quantities showing up in physical objects. For example, we tend to be skeptical of theories that would allow infinite amounts of matter, energy or computation in a finite volume of spacetime.
## 2011 Buhl Lecture, Scott Aaronson on Quantum Complexity
8 09 July 2011 04:43AM
I was planning to post this in the main area, but my thoughts are significantly less well-formed than I thought they were. Anyway, I hope that interested parties find it nonetheless.
In the Carnegie Mellon 2011 Buhl Lecture, Scott Aaronson gives a remarkably clear and concise review of P, NP, other fundamentals in complexity theory, and their quantum extensions. In particular, beginning around the 46 minute mark, a sequence of examples is given in which the intuition from computability theory would have accurately predicted physical results (and in some cases this actually happened, so it wasn't just hindsight bias).
In previous posts we have learned about Einstein's arrogance and Einstein's speed. This pattern of results flowing from computational complexity to physical predictions seems odd to me in that context. Here we are using physical computers to derive abstractions about the limits of computation, and from there we are successfully able to intuit limits of physical computation (e.g. brains computing abstractions of the fundamental limits of brains computing abstractions...) At what point do we hit the stage where individual scientists can rationally know that results from computational complexity theory are more fundamental than traditional physics? It seems like a paradox wholly different than Einstein rationally knowing (from examining bits of theory-space evidence rather than traditional-experiment-space evidence) that relativity would hold true. In what sort of evidence space can physical brain computation yielding complexity limits count as bits of evidence factoring into expected physical outcomes (such as the exponential smallness of the spectral gap of NP-hard-Hamiltonians from the quantum adiabatic theorem)?
Maybe some contributors more well-versed in complexity theory can steer this in a useful direction.
## Quantum computing for the determined (Link)
16 13 June 2011 05:42PM
21 videos, which cover subjects including the basic model of quantum computing, entanglement, superdense coding, and quantum teleportation.
To work through the videos you need to be comfortable with basic linear algebra, and with assimilating new mathematical terminology. If you’re not, working through the videos will be arduous at best! Apart from that background, the main prerequisite is determination, and the willingness to work more than once over material you don’t fully understand.
In particular, you don’t need a background in quantum mechanics to follow the videos.
The videos are short, from 5-15 minutes, and each video focuses on explaining one main concept from quantum mechanics or quantum computing. In taking this approach I was inspired by the excellent Khan Academy.
Author: Michael Nielsen
## States of knowledge as amplitude configurations
0 08 June 2011 06:38PM
I am reading through the sequence on quantum physics and have had some questions which I am sure have been thought about by far more qualified people. If you have any useful comments or links about these ideas, please share.
Most of the strongest resistance to ideas about rationalism that I encounter comes not from people with religious beliefs per se, but usually from mathematicians or philosophers who want to assert arguments about the limits of knowledge, the fidelity of sensory perception as a means for gaining knowledge, and various (what I consider to be) pathological examples (such as the zombie example). Among other things, people tend to reduce the argument to the existence of proper names a la Wittgenstein and then go on to assert that the meaning of mathematics or mathematical proofs constitutes something which is fundamentally not part of the physical world.
As I am reading the quantum physics sequence (keep in mind that I am not a physicist; I am an applied mathematician and statistician and so the mathematical framework of Hilbert spaces and amplitude configurations makes vastly much more sense to me than billiard balls or waves, yet connecting it to reality is still very hard for me) I am struck by the thought that all thoughts are themselves fundamentally just amplitude configurations, and by extension, all claims about knowledge about things are also statements about amplitude configurations. For example, my view is that the color red does not exist in and of itself but rather that the experience of the color red is a statement about common configurations of particle amplitudes. When I say "that sign is red", one could unpack this into a detailed statement about statistical properties of configurations of particles in my brain.
The same reasoning seems to apply just as well to something like group theory. States of knowledge about the Sylow theorems, just as an example, would be properties of particle amplitude configurations in a brain. The Sylow theorems are not separately existing entities which are of themselves "true" in any sense.
Perhaps I am way off base in thinking this way. Can any philosophers of the mind point me in the right direction to read more about this?
## the Universe, Computability, and the Singularity
-4 05 January 2011 05:19PM
EDIT at Karma -5: Could the next "good citizen" to vote this down leave me a comment as to why it is getting voted down, and if other "good citizens" to pile on after that, either upvote that comment or put another comment giving your different reason?
Original Post:
Questions about the computability of various physical laws recently had me thinking: "well of course every real physical law is computable or else the universe couldn't function." That is to say that in order for the time-evolution of anything in the universe to proceed "correctly," the physical processes themselves must be able to, and in real-time, keep up with the complexity of their actual evolution. This seems to me a proof that every real physical process is computable by SOME sort of real computer, where in the degenerate case that real computer is simply an actual physical model of the process itself: create that model, observe whichever features of its time-evolution you are trying to compute, and there you have your computer.
Then if we have a physical law whose use in predicting time evolution is provably uncomputable, either we know that this physical law is NOT the only law that might be formulated to describe what it is purporting to describe, or that our theory of computation is incomplete. In some sense what I am saying is consistent with the idea that quantum computing can quickly collapse down to plausibly tractable levels the time it takes to compute some things which, as classical computation problems, blow up. This would be a good indication that quantum is an important theory about the universe, that it not only explains a bunch of things that happen in the universe, but also explains how the universe can have those things happen in real-time without making mistakes.
What I am wondering is, where does this kind of consideration break with traditional computability theory? Is traditional computability theory limited to what Turing machines can do, while perhaps it is straightforward to prove that the operation of this Universe requires computation beyond what Turing machines can do? Is traditional computability theory limited to digital representations whereas the degenerate build-it-and-measure-it computer is what has been known as an analog computer? Is there somehow a level or measure of artificiality which must be present to call something a computer, which rules out such brute-force approaches as build-it-and-measure-it?
At least one imagining of the singularity is absorbing all the resources of the universe into some maximal intelligence, the (possibly asymptotic) endpoint of intelligences designing greater intelligences until something makes them stop. But the universe is already just humming along like clockwork, with quantum and possibly even subtler-than-quantum gears turning in real time. What does the singularity add to this picture that isn't already there?
## Quantum Joint Configuration article: need help from physicists
15 22 December 2010 06:32PM
EDIT: 1:19 PM PST 22 December 2010 I completed this post. I didn't realize an uncompleted version was already posted earlier.
I wanted to read the quantum sequence because I've been intrigued by the nature of measurement throughout my physics career. I was happy to see that articles such as joint configuration use beams of photons and half and fully silvered mirrors to make its points. I spent years in graduate school working with a two-path interferometer with one moving mirror which we used to make spectrometric measurements on materials and detectors. I studied the quantization of the electromagnetic field, reading and rereading books such as Yariv's Quantum Electronics and Marcuse's Principles of Quantum Electronics. I developed with my friend David Woody a photodetector theory of extremely sensitive heterodyne mixers which explained the mysterious noise floor of these devices in terms of the shot noise from detecting the stream of photons which are the "Local Oscillator" of that mixer.
My point being that I AM a physicist, and I am even a physicist who has worked with the kinds of configurations shown in this blog post, both experimentally and theoretically. I did all this work 20 years ago and have been away from any kind of Quantum optics stuff for 15 years, but I don't think that is what is holding me back here.
So when I read and reread the joint configuration blog post, I am concerned that it makes absolutely no sense to me. I am hoping that someone out there DOES understand this article and can help me understand it. Someone who understands the more traditional kinds of interferometer configurations such as that described for example here and could help put this joint configuration blog post in terms that relate it to this more usual interferometer situation.
I'd be happy to be referred to this discussion if it has already taken place somewhere. Or I'd be happy to try it in comments to this discussion post. Or I'd be happy to talk to someone on the phone or in private email; if you are that person email me at mwengler at gmail dot com.
To give you an idea of the kinds of things I think would help:
1) How might you build that experiment? Two photons coming in from right angles could be two radio sources at the same frequency and amplitude but possibly different phase as they hit the mirror. In that case, we get a stream of photons to detector 1 proportional to sin(phi+pi/4)^2 and a stream of photons to detector 2 proportional to cos(phi+pi/4)^2 where phi is the phase difference of the two waves as they hit the mirror, and I have not attempted to get the sign of the pi/4 term right to match the exact picture. Are they two thermal sources? In which case we get random phases at the mirror and the photons split pretty randomly between detector 1 and detector 2, but there are no 2-photon correlations, it is just single photon statistics.
2) The half-silvered mirror is a linear device: two photons passing through it do not interact with each other. So any statistical effect correlating the two photons (that is, they must either both go to detector 1 or both go to detector 2, but we will never see one go to 1 and the other go to 2) must be due to something going on in the source of the photons. Tell me what the source of these photons is that gives this gedanken effect.
3) The two-photon aspect of the statistical prediction of this seems at least vaguely EPR-ish. But in EPR the correlations of two photons come about because both photons originate from a single process, if I recall correctly. Is this intending to look EPRish, but somehow leaving out some necessary features of the source of the two photons to get the correlation involved?
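As a numerical restatement of the classical-wave picture in point 1 above (taking the phases and the pi/4 offset as given, without checking its sign, just as the post does):

```python
import numpy as np

# Two coherent sources with phase difference phi at the half-silvered
# mirror: flux splits as sin^2(phi + pi/4) vs cos^2(phi + pi/4), per
# point 1 above.  Whatever phi is, the two streams sum to the total flux.
for phi in np.linspace(0, 2 * np.pi, 9):
    to_det1 = np.sin(phi + np.pi / 4) ** 2
    to_det2 = np.cos(phi + np.pi / 4) ** 2
    assert abs(to_det1 + to_det2 - 1.0) < 1e-12  # flux is conserved
    print(f"phi = {phi:.2f}: det 1 gets {to_det1:.3f}, det 2 gets {to_det2:.3f}")
```

This classical picture only ever redistributes flux between the detectors; it never produces the both-photons-to-the-same-detector correlation the blog post describes, which is exactly the puzzle being raised here.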
I remain quite puzzled and look forward to anything anybody can tell me to relate the example given here to anything else in quantum optics or interferometers that I might already have some knowledge of.
Thanks,
Mike
## Help: Is there a quick and dirty way to explain quantum immortality?
2 20 October 2010 03:00AM
I had an incredibly frustrating conversation this morning trying to explain the idea of quantum immortality to someone whose understanding of MWI begins and ends at pop sci fi movies. I think I've identified the main issue that I wasn't covering in enough depth (continuity of identity between near-identical realities) but I was wondering whether anyone has ever faced this problem before, and whether anyone has (or knows where to find) a canned 5 minute explanation of it.
## Deep Structure Determinism
1 10 October 2010 06:54PM
Sort of a response to: Collapse Postulate
Abstract: There are phenomena in mathematics where certain structures are distributed "at random;" that is, statistical statements can be made and probabilities can be used to predict the outcomes of certain totally deterministic calculations. These calculations have a deep underlying structure which leads a whole class of problems to behave in the same way statistically, in a way that appears random, while being entirely deterministic. If quantum probabilities worked in this way, it would not require collapse or superposition.
This is a post about physics, and I am not a physicist. I will reference a few technical details from my (extremely limited) research in mathematical physics, but they are not necessary to the fundamental concept. I am sure that I have seen similar ideas somewhere in the comments before, but searching the site for "random + determinism" didn't turn much up so if anyone recognizes it I would like to see other posts on the subject. However my primary purpose here is to expose the name "Deep Structure Determinism" that jasonmcdowell used for it when I explained it to him on the ride back from the Berkeley Meetup yesterday.
Again I am not a physicist; it could be that there is a one or two sentence explanation for why this is a useless theory--of course that won't stop the name "Deep Structure Determinism" from being aesthetically pleasing and appropriate.
For my undergraduate thesis in mathematics, I collected numerical evidence for a generalization of the Sato-Tate Conjecture. The conjecture states, roughly, that if you take the right set of polynomials, compute the number of solutions to them over finite fields, and scale by a consistent factor, these results will have a probability distribution that is precisely a semicircle.
The reason that this is the case has something to do with the solutions being symmetric (in the way that y=x^2 if and only if y=(-x)^2 is a symmetry of the first equation) and their group of symmetries being a circle. And stepping back one step, the conjecture more properly states that the numbers of solutions will be roots of a certain polynomial which will be the minimal polynomial of a random matrix in SU2.
That is at least as far as I follow the mathematics, if not further. However, it's far enough for me to stop and do a double take.
A "random matrix?" First, what does it mean for a matrix to be random? And given that I am writing up a totally deterministic process to feed into a computer, how can you say that the matrix is random?
A sequence of matrices is called "random" if when you integrate of that sequence, your integral converges to integrating over an entire group of matrices. Because matrix groups are often smooth manifolds they are designed to be integrated over, and this ends up being sensible. However a more practical characterization, and one that I used in the writeup for my thesis, is that if you take a histogram of the points you are measuring, the histogram's shape should converge to the shape of the group--that is, if you're looking at the matrices that determine a circle, your histogram should look more and more like a semicircle as you do more computing. That is, you can have a probability distribution over the matrix space for where your matrix is likely to show up.
The actual computation that I did involved computing solutions to a polynomial equation--a trivial and highly deterministic procedure. I then scaled them, and stuck them in place. If I had not known that these numbers were each coming from a specific equation I would have said that they were random; they jumped around through the possibilities, but they were concentrated around the areas of higher probability.
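The histogram-converges-to-a-semicircle behavior described above can be illustrated with genuinely random SU2 matrices (a Monte Carlo stand-in for the deterministic elliptic-curve computation, not the thesis calculation itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Haar-random SU(2): a unit quaternion (a, b, c, d) maps to the matrix
# [[a + bi, c + di], [-c + di, a - bi]], whose trace is 2a.  Uniform
# points on the 3-sphere come from normalizing 4 independent Gaussians.
q = rng.normal(size=(100_000, 4))
q /= np.linalg.norm(q, axis=1, keepdims=True)
traces = 2 * q[:, 0]

# The histogram of traces approaches the semicircle density
# (1 / (2*pi)) * sqrt(4 - t^2) on [-2, 2].
counts, edges = np.histogram(traces, bins=40, range=(-2, 2), density=True)
centers = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(4 - centers ** 2) / (2 * np.pi)

print(np.max(np.abs(counts - semicircle)))  # small: histogram ~ semicircle
```

A fully deterministic sequence of matrices whose histogram converged in the same way would satisfy the "practical characterization" above while containing no randomness at all, which is the point of Deep Structure Determinism.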
So bringing this back to quantum physics: I am given to understand that quantum mechanics involves a lot of random matrices. These random matrices give the impression of being "random" in that it seems like there are lots of possibilities, and one must get "chosen" at the end of the day. One simple way to deal with this is to postulate many worlds, wherein no one choice has a special status.
However my experience with random matrices suggests that there could just be some series of matrices, which satisfies the definition of being random, but which is inherently determined (in the way that the Jacobian of a given elliptic curve is "determined.") If all quantum random matrices were selected from this list, it would leave us with the subjective experience of randomness, and given that this sort of computation may not be compressible, the expectation of dealing with these variables as though they are random forever. It would also leave us in a purely deterministic world, which does not branch, which could easily be linear, unitary, differentiable, local, symmetric, and slower-than-light.
# Chemical Kinetics IIT JEE Study Material
Chemical kinetics is a branch of physical chemistry that deals with the rates of chemical reactions and the mechanisms by which they take place, including the effects of pressure, temperature, concentration, etc. The rate of a chemical reaction is defined as the change in concentration of reactants or products in a specific time interval. The reaction rate is measured in mol $\mathbf{L^{-1}}$ $\mathbf{s^{-1}}$ or mol $\mathbf{L^{-1}}$ $\mathbf{min^{-1}}$.
The Factors affecting the reaction rates:
1. Reaction temperature
2. Concentration of reactants
3. Presence of catalyst
4. If a catalyst or a reactant is solid then surface area also affects the reaction rates.
Chemical Kinetics IIT JEE Important Topics
A. Rates of Chemical Reactions
B. Order of reactions
C. Rate constant
D. First order reactions
E. Temperature dependence of rate constant (Arrhenius Equation)
The General Form of a Chemical Reaction:
$\mathbf{a\;A\;+\;b\;B\; \rightarrow \;c\;C\;+\;d\;D}$
The Rate of disappearance of A = $\mathbf{-\;\frac{d[A]}{dt}}$ and the Rate of disappearance of B = $\mathbf{-\;\frac{d[B]}{dt}}$
The Rate of appearance of C = $\mathbf{\frac{d[C]}{dt}}$ and the Rate of appearance of D = $\mathbf{\frac{d[D]}{dt}}$
Therefore, the rate of General Reaction:
$\mathbf{-\;\frac{1}{a}\;\frac{d[A]}{dt}\;=\;-\;\frac{1}{b}\;\frac{d[B]}{dt}\;=\;\frac{1}{c}\;\frac{d[C]}{dt}\;=\;\frac{1}{d}\;\frac{d[D]}{dt}}$
The +ve sign indicates that the concentrations of compounds C and D increase with time, and the -ve sign indicates that the concentrations of compounds A and B decrease with time.
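As a quick numerical illustration of these stoichiometric rate relations (the reaction 2A + B → 3C and all concentrations below are made-up example values):

```python
# Illustrative reaction 2A + B -> 3C, so a = 2, b = 1, c = 3.
# If [A] drops by 0.30 mol/L in 10 s, every other rate follows from
# rate = -(1/a) d[A]/dt = -(1/b) d[B]/dt = (1/c) d[C]/dt.
a, b, c = 2, 1, 3
dA_dt = -0.30 / 10.0   # mol L^-1 s^-1, disappearance of A

rate = -dA_dt / a      # the single reaction rate
dB_dt = -b * rate      # B disappears half as fast as A
dC_dt = c * rate       # C appears 1.5x as fast as A disappears

print(rate, dB_dt, dC_dt)  # approximately 0.015, -0.015, 0.045
```

Dividing each species' rate by its stoichiometric coefficient is what makes "the rate of the reaction" a single well-defined number.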
Rate Constant:
The rate law or the rate equation is the expression that relates the rate of any reaction with the concentration of the reactants.
$\mathbf{Rate\; \propto \;[A]^{a}.[B]^{b}\;\;or\;\;Rate\;=\;k\;[A]^{a}.[B]^{b}}$
Here, k = constant of proportionality known as the Rate Constant. The value of k is independent of the initial concentrations of the reactants and dependent on the temperature. At any fixed temperature, k is a constant characteristic of the reaction. Greater values of k indicate faster reaction rates whereas small values of k indicate slow reaction rates.
Order of Reaction:
If the rate of reaction = $\mathbf{\;k\;[A]^{a}\;\;[B]^{b}\;\;[C]^{c}}$
Then, a + b + c = order of the reaction
And, the order with respect to A, B, and C are a, b, and c respectively.
Zero Order Reaction:
In zero order reactions, the rate of reaction is independent of the concentration of the reactants.
$\mathbf{A\; \rightarrow \;Products\;\;\;and\;\;\;Rate\;=\;k\;[A]^{0}\;=\;k\;mol\;L^{-1}\;s^{-1}}$
The Time required for the completion of reaction:
$\mathbf{t\;=\;\frac{[A]_{0}}{k}\;\;and\;\;t_{\frac{1}{2}}\;=\;\frac{0.5\;[A]_{0}}{k}}$
Unit of rate constant (k) is: mol $\mathbf{L^{-1}}$ $\mathbf{time^{-1}}$
Examples of Zero Order Reaction:
1. $\mathbf{2\;HI\;(g)\;\;\xrightarrow{\text{Au surface}}\;\;H_{2}\;(g)\;+\;I_{2}\;(g)}$
2. $\mathbf{2\;NH_{3}\;(g)\;\;\xrightarrow{\text{Mo or W surface}}\;\;N_{2}\;+\;3\;H_{2}}$
First Order Reaction:
In first order reactions, the rate of reaction is proportional to the concentration of one reactant only.
$\mathbf{A\;\;\; \rightarrow \;\;\;Products}$
$\mathbf{Rate\;=\;k_{1}\;[A]\;\;or\;\;\frac{dx}{dt}\;=\;k_{1}\;(a\;-\;x)}$
A. Integrated 1st order rate equation is: $\mathbf{k_{1}\;=\;\frac{2.303}{t}\;log\;\left [ \frac{a}{a\;-\;x} \right ]}$
B. Exponential form of 1st order equation is: $\mathbf{C_{t}\;=\;C_{0}\;e^{\;-\;k_{1}\;t}}$
C. Unit of rate constant (k) is: $\mathbf{Time^{-1}}$
D. Average Life = $\mathbf{\frac{1}{k_{1}}}$ and Half Life = $\mathbf{\frac{0.693}{k_{1}}}$
Examples of First Order Reaction:
1. Mineral acid-catalyzed hydrolysis of esters.
2. $\mathbf{C_{12}H_{22}O_{11}\;+\;H_{2}O\;\;\xrightarrow[\text{(Inversion)}]{H^{+}\;\text{cat. hydrolysis}}\;\;C_{6}H_{12}O_{6}\;(Glucose)\;+\;C_{6}H_{12}O_{6}\;(Fructose)}$
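As a quick numeric illustration of the first-order relations in (A)-(D) — the rate constant and initial concentration below are invented illustrative values, not taken from the text:

```python
import math

k1 = 0.05   # illustrative first-order rate constant, s^-1 (assumed value)
c0 = 1.0    # illustrative initial concentration a, mol L^-1 (assumed value)

# Exponential form: C_t = C_0 * exp(-k1 * t); evaluate at t = 1/k1 (one average life)
t = 1 / k1
ct = c0 * math.exp(-k1 * t)
print(ct)             # ~0.368, i.e. C_0 / e after one average life

print(0.693 / k1)     # half-life from section D, ~13.86 s
print(1 / k1)         # average life, ~20 s
```

Note that for a first-order reaction both lifetimes depend only on $k_{1}$, not on the initial concentration.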
Second Order Reaction:
Case 1: When the concentrations of both the reactants are equal or two molecules of the same reactant are involved
A. Differential rate equation: $\mathbf{\frac{dx}{dt}\;=\;k_{2}\;(a\;-\;x)^{2}}$
B. Integrated rate equation: $\mathbf{k_{2}\;t\;=\;\frac{1}{a\;-\;x}\;-\;\frac{1}{a}}$
Case 2: When the initial concentrations of the two reactants are different
A. Differential rate equation: $\mathbf{\frac{dx}{dt}\;=\;k_{2}\;(a\;-\;x)\;(b\;-\;x)}$
B. Integrated rate equation: $\mathbf{k_{2}\;=\;\frac{2.303}{t\;(a\;-\;b)}\;\;log_{10}\;\frac{b\;(a\;-\;x)}{a\;(b\;-\;x)}}$
Unit of rate constant (k) is: L $\mathbf{mol^{-1}}$ $\mathbf{time^{-1}}$
Examples of Second Order Reaction:
1. $\mathbf{2\;O_{3}\;\; \rightarrow \;\;3\;O_{2}}$
2. Hydrogenation of ethene:
$\mathbf{C_{2}H_{4}\;+\;H_{2}\;\;\xrightarrow{100\;^{\circ}C}\;\;C_{2}H_{6}}$
Higher Order Reaction:
A. $\mathbf{A\;\; \rightarrow \;\;Product}$
B. $\mathbf{k_{n}\;t\;=\;\frac{1}{n\;-\;1}\;\left [ \frac{1}{\left ( a\;-\;x \right )^{n\;-\;1}}\;-\;\frac{1}{a^{\;n\;-\;1}} \right ]}$ [Where, n ≠ 1 and n = order]
C. $\mathbf{t_{\frac{1}{2}}\;=\;\frac{1}{k_{n}\;(n\;-\;1)}\;\left [ \frac{2^{n-1}\;-\;1}{a^{n\;-\;1}} \right ]}$
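The general half-life formula in (C) contains the earlier cases: n = 2 recovers the second-order result $\frac{1}{k\,a}$, and n = 0 recovers the zero-order result $t_{1/2} = \frac{0.5\,[A]_{0}}{k}$ given earlier. A small sketch (the values of k and a are illustrative only):

```python
def half_life(k, a0, n):
    """t_1/2 for an nth-order reaction (n != 1), from the formula above:
    t_1/2 = (2**(n-1) - 1) / (k * (n - 1) * a0**(n-1))."""
    return (2 ** (n - 1) - 1) / (k * (n - 1) * a0 ** (n - 1))

# n = 2 reduces to the second-order half-life 1/(k*a0):
print(half_life(k=0.02, a0=1.0, n=2))   # ~50
# n = 0 reduces to the zero-order half-life 0.5*a0/k:
print(half_life(k=0.02, a0=1.0, n=0))   # ~25
```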
# Nuclear Chemistry Test 4
Total Questions:50 Total Time: 75 Min
Remaining:
## Questions 1 of 50
Question:Heavy water freezes at
$${0^0}C$$
$${3.8^0}C$$
$${38^0}C$$
$$- {0.38^0}C$$
## Questions 2 of 50
Question:To determine the masses of the isotopes of an element which of the following techniques is useful
The acceleration of charged atoms by an electric field and their subsequent deflection by a variable magnetic field
The spectroscopic examination of the light emitted by vaporised
elements subjected to electric discharge
The photographing of the diffraction patterns which arise when X-rays are passed through crystals
The bombardment of metal foil with alpha particles
## Questions 3 of 50
Question:The radioisotope tritium $$(_1^3H)$$ has a half-life of 12.3 years. If the initial amount of tritium is 32 mg, how many milligrams of it would remain after 49.2 years
8 mg
1 mg
2 mg
4 mg
## Questions 4 of 50
Question:When a radioactive substance is subjected to vacuum, the rate of disintegration per second
Increases considerably
Increases only if the products are gaseous
Is not affected
Suffers a slight decrease
## Questions 5 of 50
Question:A radio isotope will not emit
Gamma and alpha rays simultaneously
Gamma rays only
Alpha and beta rays simultaneously
Beta and gamma rays simultaneously
## Questions 6 of 50
Question:$${D_2}O$$ is used in
Industry
Nuclear reactor
Medicine
Insecticide
## Questions 7 of 50
Question:India conducted an underground nuclear test at
Tarapur
Narora
Pokhran
Pushkar
## Questions 8 of 50
$$Tim{e^{{\rm{-1}}}}$$
Time
$$Mole-tim{e^{-1}}$$
$$Time-mol{e^{-1}}$$
## Questions 9 of 50
Question:In a method of absolute dating of archaeological findings and fossils, a radioactive element is used. It is
$$_{92}{U^{235}}$$
$$_6{C^{14}}$$
$$_6{C^{12}}$$
$$_{20}C{a^{40}}$$
## Questions 10 of 50
Question:The radioactivity isotope $$_{27}^{60}Co$$ which is used in the treatment of cancer can be made by (n, p) reaction. For this reaction the target nucleus is
$$_{28}^{60}Ni$$
$$_{27}^{60}Co$$
$$_{28}^{59}Ni$$
$$_{27}^{59}Co$$
## Questions 11 of 50
Question:Fusion bomb involves
Combination of lighter nuclei into bigger nucleus
Destruction of heavy nucleus into smaller nuclei
Combustion of oxygen
Explosion of TNT
## Questions 12 of 50
Question:The number of neutrons in the parent nucleus which gives $${N^{14}}$$ on $$\beta$$-emission and the parent nucleus is
$$8,\,{C^{14}}$$
$$6,\,{C^{12}}$$
$$4,\,{C^{13}}$$
None of these
## Questions 13 of 50
Question:After the emission of $$\alpha$$-particle from the atom $$_{92}{X^{238}}$$, the number of neutrons in the atom will be
138
140
144
150
## Questions 14 of 50
Question:A nuclide of an alkaline earth metal undergoes radioactive decay by emission of the $$\alpha-$$particles in succession. The group of the periodic table to which the resulting daughter element would belong is
Gr.14
Gr.16
Gr.4
Gr.6
## Questions 15 of 50
Question:Which one of the following is not correct
$$_3L{i^7}+{\,_1}{H^1}\to{\,_4}B{e^7}+{\,_0}{n^1}$$
$$_{21}S{c^{45}}+{\,_0}{n^1}\to{\,_{20}}C{a^{45}}+{\,_0}{n^1}$$
$$_{33}A{s^{75}}+{\,_2}H{e^4}\to{\,_{35}}B{r^{78}}+{\,_0}{n^1}$$
$$_{83}B{i^{209}}+{\,_1}{H^2}\to{\,_{84}}P{o^{210}}+{\,_0}{n^1}$$
## Questions 16 of 50
Question:Atomic number after a $$\beta$$-emission from a nucleus having atomic number 40, will be
36
39
41
44
## Questions 17 of 50
Question:A certain nuclide has a half-life period of 30 minutes. If a sample containing 600 atoms is allowed to decay for 90 minutes, how many atoms will remain
200 atoms
450 atoms
75 atoms
500 atoms
## Questions 18 of 50
Question:If half-life of a certain radioactive nucleus is $$1000\,s,$$ the disintegration constant is
$$6.93\times{10^2}{s^{-1}}$$
$$6.93\times{10^{-4}}s$$
$$6.93\times{10^{-4}}{s^{-1}}$$
$$6.93\times{10^3}s$$
## Questions 19 of 50
Question:Radioactivity of neptunium stops when it is converted to
Bi
Rn
Th
Pb
## Questions 20 of 50
Question:Substances which have identical chemical properties but differ in atomic weights are called
Isothermals
Isotopes
Isentropus
Elementary particles
## Questions 21 of 50
Question:Tritium is an isotope of
Hydrogen
Titanium
Tantalum
Tellurium
## Questions 22 of 50
Question:An isotope of ‘parent’ is produced, when its nucleus loses
One $$\alpha$$-particle
One $$\beta$$-particle
One $$\alpha$$ and two $$\beta$$-particles
One $$\beta$$ and two $$\alpha$$- particles
## Questions 23 of 50
Question:Which of the following isotopes is likely to be most stable
$$_{30}Z{n^{71}}$$
$$_{30}Z{n^{66}}$$
$$_{30}Z{n^{64}}$$
None of these
## Questions 24 of 50
Question:Addition of two neutrons in an atom $$A$$ would
Change the chemical nature of $$A$$
Produce an isobar of $$A$$
Produce an isotope of A
Produce another element
## Questions 25 of 50
Question:Atomic weight of the isotope of hydrogen which contains 2 neutrons in the nucleus would be
2
3
1
4
## Questions 26 of 50
Question:In chlorine gas, ratio of $$C{l^{35}}$$ and $$C{l^{37}}$$is
1/3
3/1
1/1
1/4
## Questions 27 of 50
Question:An ordinary oxygen contains
Only $$O-16$$ isotopes
Only $$O-17$$ isotopes
A mixture of $$O-16$$ and $$O-18$$ isotopes
A mixture of O – 16, $$O-17$$ and $$O-18$$ isotopes
## Questions 28 of 50
Question:Which can be used for carrying out nuclear reaction
Uranium – 238
Neptunium – 239
Thorium – 232
Plutonium – 239
## Questions 29 of 50
Question:On comparing chemical reactivity of $${C^{12}}$$ and $${C^{14}}$$, it is revealed that
$${C^{12}}$$ is more reactive
$${C^{14}}$$ is more reactive
Both are inactive
Both are equally active
## Questions 30 of 50
Question:In the reaction $$_{93}N{p^{239}}{\to_{94}}P{u^{239}}$$ + (?), the missing particle is
Proton
Positron
Electron
Neutron
## Questions 31 of 50
Question:According to the nuclear reaction $$_4Be{+_2}H{e^4}{\to_6}{C^{12}}{+_0}n{^1}$$, mass number of $$(Be)$$ atom is
4
9
7
6
## Questions 32 of 50
Question:A particle having the same charge and 200 times greater mass than that of electron is
Positron
Proton
Neutrino
Meson
## Questions 33 of 50
Question:The positron is
$$_{-1}{e^0}$$
$$_{+1}{e^0}$$
$$_1{H^1}$$
$$_0{n^1}$$
## Questions 34 of 50
Question:Nuclear reactivity of Na and $$N{a^+}$$ is same because both have
Same electron and proton
Same proton and same neutron
Different electron and proton
Different proton and neutron
## Questions 35 of 50
Question:Which of the following is the heaviest metal
Hg
Pb
Ra
U
## Questions 36 of 50
Question:Alpha rays consist of a stream of
$${H^+}$$
$$H{e^{+2}}$$
Only electrons
Only neutrons
## Questions 37 of 50
Question:Which is the correct statement
$$\beta$$-rays are always negatively charged particles
$$\alpha$$-rays are always negatively charged particles
$$\gamma$$-rays can be deflected in magnetic field
## Questions 38 of 50
Question:There exists on $$\gamma$$-rays
Positive charge
Negative charge
No charge
Sometimes positive charge, sometimes negative charge
## Questions 39 of 50
Question:Which is not emitted by radioactive substance
$$\alpha$$-rays
$$\beta$$-rays
Positron
Proton
## Questions 40 of 50
Question:The atomic mass of an element is 12.00710 amu. If there are 6 neutrons in the nucleus of the atom of the element, the binding energy per nucleon of the nucleus will be
7.64 MeV
76.4 MeV
764 MeV
0.764 MeV
## Questions 41 of 50
Question:Half-life period of a metal is 20 days. What fraction of metal does remain after 80 days
1
1/16
1/4
1/8
## Questions 42 of 50
Question:During a negative $$\beta$$-decay
An atomic electron is ejected
An electron which is already present within the nucleus is ejected
A neutron in the nucleus decays emitting an electron
A part of the binding energy of the nucleus is converted into an electron
## Questions 43 of 50
Question:The decay constant of a radioactive sample is $$'\lambda'$$. The half-life and mean life of the sample are respectively
$$\frac{1}{\lambda},\,\frac{{\ln\,2}}{\lambda}$$
$$\frac{{\ln\,2}}{\lambda},\,\frac{1}{\lambda}$$
$$\lambda\,\ln\,2,\,\frac{1}{\lambda}$$
$$\frac{\lambda}{{\ln\,2}},\,\frac{1}{\lambda}$$
## Questions 44 of 50
Question:Half-life of $$10gm$$ of radioactive substance is 10 days. The half-life of $$20gm$$ is
10 days
20 days
25 days
Infinite
## Questions 45 of 50
Question:$$8gm$$ of the radioactive isotope, cesium-137 were collected on February 1 and kept in a sealed tube. On July 1, it was found that only $$0.25gm$$ of it remained. So the half-life period of the isotope is
37.5 days
30 days
25 days
50 days
## Questions 46 of 50
Question:The half-life of $$_{92}{U^{238}}$$ is $$4.5\times{10^9}$$ years. After how many years, the amount of $$_{92}{U^{238}}$$ will be reduced to half of its present amount
$$9.0\times{10^9}$$ years
$$13.5\times{10^9}$$ years
$$4.5\times{10^9}$$ years
$$4.5\times{10^{4.5}}$$ years
## Questions 47 of 50
Question:Radium has atomic weight 226 and a half-life of 1600 years. The number of disintegrations produced per second from $$1gm$$ are
$$4.8\times{10^{10}}$$
$$9.2\times{10^6}$$
$$3.7\times{10^{10}}$$
Zero
## Questions 48 of 50
Question:The activity of radio isotope changes with
Temperature
Pressure
Chemical environment
None of these
## Questions 49 of 50
Question:A certain nuclide has a half-life of 25 minutes. If one starts with 100 g of it, how much of it will remain at the end of 100 minutes
1.0 g
4.0 g
6.25 g
12.50 g
## Questions 50 of 50
Question:If 8.0 g of a radioactive substance has a half-life of 10 hrs., the half life of 2.0 g of the same substance is
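Many of the numeric questions above reduce to the decay law $$N = N_0 \left(\tfrac{1}{2}\right)^{t/T}$$. A short sketch checking questions 3, 17 and 49 (and note, for questions 44 and 50, that the half-life itself does not depend on the amount of substance):

```python
def remaining(n0, t_half, t):
    # Radioactive decay: N = N0 * (1/2)**(t / T_half)
    return n0 * 0.5 ** (t / t_half)

# Q3: 32 mg tritium (T = 12.3 y) after 49.2 y = 4 half-lives -> 2 mg
print(remaining(32, 12.3, 49.2))
# Q17: 600 atoms (T = 30 min) after 90 min = 3 half-lives -> 75 atoms
print(remaining(600, 30, 90))
# Q49: 100 g (T = 25 min) after 100 min = 4 half-lives -> 6.25 g
print(remaining(100, 25, 100))
```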
Problem 1.
A new electricity retailer starts up with a call centre consisting of, initially, just one customer service agent, to whom every incoming call is routed. Callers wait on hold in a first-come-first-served queue to speak to the agent. Once they get through, the durations of the ensuing conversations will be modelled as independent random variables having the exponential distribution with mean 9 minutes. Consider a day on which customer calls arrive at rate $0.1$ per minute, enough to create a small queue.
(a) We’ll model the customer arrivals as a Poisson process. Explain why this is a reasonable thing to do. Also, in what respects might the Poisson model be not so realistic?
(b) Find the mean time required for a customer to contact the company, including time spent waiting on hold as well as service time, according to this model.
(c) When there are four or more customers in the system (including the one being served), any further callers hear a message advising them that call volumes are unusually high today, and they can expect to be waiting for a while. What fraction of callers will hear this message? (You can assume that callers never give up – they always wait on hold until they get through.)
(d) Use the queueing-simulation web page to perform a simulation of this system. Make an estimate, with $95 \%$ confidence interval, of the mean time that a customer spends in the system. Compare with your answer to part (1b).
(e) In its planning for future growth, the company wants to model a day on which the customer arrival rate is $0.2$ per minute, and there are two employees doing customer service. Make point estimates (no confidence intervals required) of the fraction of customers who will arrive at a time when four or more other customers are in the system if (i) waiting customers form a single hold queue; (ii) each server has their own hold queue, and arriving customers join the shorter of the two queues.
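As a rough check on parts (b) and (c), assuming the standard M/M/1 model (Poisson arrivals at 0.1 per minute, exponential service with mean 9 minutes, one server):

```python
lam = 0.1        # arrival rate, customers per minute
mu = 1 / 9       # service rate, customers per minute (mean call 9 min)
rho = lam / mu   # traffic intensity, 0.9

# (b) Mean time in system (hold + service) for M/M/1: W = 1 / (mu - lam)
W = 1 / (mu - lam)
print(W)         # ~90 minutes

# (c) By PASTA, the fraction of callers who arrive to find >= 4 customers
# already in the system is P(N >= 4) = rho**4
print(rho ** 4)  # ~0.656
```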
Problem 2.
JJ takes Christmas seriously: not only does he like to dress up as Santa and give away presents to any small children within reach, he uses OR to plan ahead for the occasion. During the year, he plans to stock up on suitable small toys by buying them on TradeMe as and when they become available. He believes he can make one such purchase every two weeks on average. If he doesn’t acquire enough presents this way, he can buy more just before 25 December, paying the full retail price.
(a) JJ models the number of TradeMe purchases during the year as a random variable $T$ having a Poisson distribution with mean 26. How do you think he came up with this distribution (and its parameter value)?
$\mathrm{JJ}$ is unsure how many presents he will need at Christmas (it’s a year in advance after all, and much may happen in that time), but he decides to model this quantity as a random variable $S$ having a uniform distribution on $[10,30]$ and independent of $T$. (JJ regards this as a reasonable modelling approximation, even though the actual number of presents required is an integer and $S$ has a continuous distribution.) So, the total number of presents he will buy is $X=\max (S, T)$, the greater of $S$ and $T$.
(b) Write some $\mathrm{R}$ code for generating random variates according to the distribution of $X$.
(c) Use your method from (2b) to generate a sample of 10000 random variates. Show them on a histogram.
(d) Use your sample from (2c) to estimate, with confidence intervals, (i) the mean of the distribution; (ii) $P(X<25)$.
(e) Calculate the exact value of $P(X<25)$ (the $\mathrm{R}$ function ppois is helpful). Does your confidence interval in 2d(ii) contain the true value?
(f) JJ estimates that TradeMe presents will cost \$12 each on average, while presents bought at retail will cost \$19 each. Use your simulation to estimate the expected total amount he spends on presents. A confidence interval is not required.
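The assignment asks for R; purely as an illustration, the simulation of parts (2b)-(2d) can be sketched in Python. The function name, seed, and sample size below are choices of this sketch, not part of the assignment:

```python
import random

def rand_x():
    """One variate of X = max(S, T): S ~ Uniform[10, 30], T ~ Poisson(26)."""
    s = random.uniform(10, 30)
    # Poisson(26) by counting exponential(rate 26) inter-arrival gaps
    # that fit inside a unit interval
    t, clock = 0, random.expovariate(26)
    while clock < 1:
        t += 1
        clock += random.expovariate(26)
    return max(s, t)

random.seed(1)
sample = [rand_x() for _ in range(10_000)]
mean_x = sum(sample) / len(sample)            # point estimate of E[X]
p_lt_25 = sum(x < 25 for x in sample) / len(sample)  # estimate of P(X < 25)
print(mean_x, p_lt_25)
```

Since max(S, T) ≥ T pointwise, the sample mean should come out somewhat above 26.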
Problem 3.
Big Al’s tyre shop sees rising demand for cold-weather tyres each April and May as winter approaches. We will model Big Al’s customer arrivals as an inhomogeneous Poisson process with rate function $\lambda(t)=4+0.1 t$ per day for $0 \leq t \leq 60$, where $t$ is in days and $t=0$ corresponds to the first day of April. Each customer wants to buy either a single tyre (with probability $0.7$ ) or a complete set of four tyres (with probability $0.3$ ); these random demands are independent of everything else. Big Al’s current policy is to order 100 cold-weather tyres from a distributor whenever the number of tyres he has left in stock falls to 30 or less; the distributor always delivers the order 2 days after it is placed.
(a) Big Al reckons (correctly) that the expected number of customers for these tyres he’ll see during the 60-day period beginning on 1 April is about 420. Explain how he might have calculated this number.
(b) According to this model, what is the expected total customer demand for these tyres over the 60 -day period?
Construct a simulation, with 5000 runs, of the tyre sales and reordering process using the sim.inventory function. Assume that the shop has 100 tyres in stock initially. (Having initial stock equal to the reorder quantity is the default assumption made by sim.inventory.) You may find it useful to turn off plotting with show.plot=FALSE. Hand in
(c) a plot showing the number of tyres in stock over time, for one simulation run;
(d) a histogram estimating the distribution of the unmet demand fraction: the fraction of total tyre demand that Big Al cannot convert into sales, due to the shop being out of stock when a customer turns up;
(e) an estimate, including a $95 \%$ confidence interval, of the expected unmet demand fraction;
(f) the $R$ code for your simulation.
Now suppose that on 31 March, the distributor notifies Big Al that the delivery time will henceforth be three days instead of two.
(g) What (if anything) should Big Al do in response? Use additional simulations to support your answer.
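For parts (a) and (b), the expected-value calculations can be checked directly from the model as stated:

```python
# (a) Expected number of customers over [0, 60] days:
#     integral of (4 + 0.1*t) dt from 0 to 60 = 4*60 + 0.05*60**2 = 240 + 180
expected_customers = 4 * 60 + 0.05 * 60 ** 2
print(expected_customers)   # ~420, matching Big Al's figure

# (b) Mean tyres per customer: 0.7*1 + 0.3*4 = 1.9,
#     so expected total tyre demand = 420 * 1.9
expected_demand = expected_customers * (0.7 * 1 + 0.3 * 4)
print(expected_demand)      # ~798 tyres
```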
# How to include an input field in HTML?
The HTML <input> tag is used within a form to declare an input element − a control that allows the user to input data.
The following are the attributes −
| Attribute | Value | Description |
| --- | --- | --- |
| accept | content types | Specifies a comma-separated list of content types that the server accepts |
| align | left, right, top, middle, bottom | Deprecated − defines the alignment of content |
| alt | text | Specifies text to be used in case the browser/user agent can't render the input control |
| autocomplete | on, off | Specifies enabling or disabling of autocomplete in the <input> element |
| autofocus | autofocus | Specifies that the <input> element should automatically get focus when the page loads |
| checked | checked | If type = "radio" or type = "checkbox", it will already be selected when the page loads |
| disabled | disabled | Disables the input control. The control won't accept changes from the user, cannot receive focus, and will be skipped when tabbing |
| form | form_id | Specifies one or more forms |
| formaction | URL | Specifies the URL of the file that will process the input control when the form is submitted |
| formenctype | application/x-www-form-urlencoded, multipart/form-data, text/plain | Specifies how the form-data should be encoded when submitting it to the server |
| formmethod | post, get | Defines the HTTP method for sending data to the action URL |
| formnovalidate | formnovalidate | Defines that form elements should not be validated when submitted |
| formtarget | _blank, _self, _parent, _top | Specifies the target where the response received after submitting the form is displayed |
| height | pixels | Specifies the height |
| list | datalist_id | Specifies the <datalist> element that contains pre-defined options for an <input> element |
| max | number | Specifies the maximum value |
| maxlength | number | Defines the maximum number of characters allowed in a text field |
| min | number | Specifies the minimum value |
| multiple | multiple | Specifies that a user can enter multiple values |
| name | text | Assigns a name to the input control |
| pattern | regexp | Specifies a regular expression that an <input> element's value is checked against |
| placeholder | text | Specifies a short hint that describes the expected value |
| readonly | readonly | Sets the input control to read-only. It won't allow the user to change the value; the control can still receive focus and is included when tabbing through the form controls |
| required | required | Specifies that an input field must be filled out before submitting the form |
| size | number | Specifies the width of the control. If type = "text" or type = "password" this refers to the width in characters; otherwise it's in pixels |
| src | URL | Defines the URL of the image to display. Used only for type = "image" |
| step | number | Specifies the legal number intervals for an input field |
| type | button, checkbox, color, date, datetime, datetime-local, email, file, hidden, image, month, range, reset, search, submit, tel, text, time, url, week | Specifies the type of control |
| value | text | Specifies the initial value for the control. If type = "checkbox" or type = "radio" this attribute is required |
| width | pixels | Specifies the width |
## Example
You can try to run the following code to implement the <input> element in HTML −
```html
<!DOCTYPE html>
<html>
   <head>
      <title>HTML input Tag</title>
   </head>
   <body>
      <form action = "/cgi-bin/hello_get.cgi" method = "get">
         First name:
         <input type = "text" name = "first_name" value = "" maxlength = "100" />
         <br />
         Last name:
         <input type = "text" name = "last_name" value = "" maxlength = "100" />
         <input type = "submit" value = "Submit" />
      </form>
   </body>
</html>
```
Updated on 03-Mar-2020 07:24:01
making a single equation larger inside \begin{align} … \end{align}
Hello you wonderful TeX friends!
I have an issue making a single equation that bigger.
I'm restricted to using the align environment for my equations. I've used this answer to make inline math bigger, and I looked at this, but the align environment is a bit tricky here. I would however like to stick with it. Any help or suggestions would be greatly appreciated.
code,
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align}
2+2 = 4 \\
2+3 = 5 % I only want to make this single line bigger
\end{align}
% I'm aware I can something like this,
\begin{align}
2+2 = 4
\end{align}
\vspace{-15mm}
{\LARGE
\begin{align}
2+3 = 5
\end{align}
}
% but that enlarges the equation number,
% and I don't want that, along withing
\end{document}
• i'm not in a position to test anything, but amsart goes to some effort to keep the equation number the same size, regardless of the size applied to the content of the display. you may be able to steal the method from there. – barbara beeton Jul 30 '16 at 11:31
• Your LaTeX code is confusing. For instance, no use is made of alignment points inside the align environments, and the first align environment is missing a line-break directive between the rows. Is this deliberate? – Mico Jul 31 '16 at 11:33
• @Mico, thanks for the heads up, I've updated the code. – Eric Fail Jul 31 '16 at 12:17
Although the example code uses align there are no alignment points specified, which means that you can use a simpler setup such as gather
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{gather}
2+2 = 4 \\
% I only want to make this single line bigger
\mbox{\LARGE$\displaystyle 2+3 = 5$}
\end{gather}
\end{document}
• Thanks! I am however restricted to use align. – Eric Fail Oct 31 '16 at 9:48
• @EricFail (you could just use the above with align) but it makes no sense, (sorry:-) saying you have to use align even when there is no alignment is like saying you have to use color to make black text or \textsubscript to make a superscript. Of course any of these things are possible but why???? – David Carlisle Oct 31 '16 at 9:54
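As the answerer's comment suggests, the same \mbox wrapper can be dropped into align unchanged (an untested sketch; the equation numbers keep their normal size because only the boxed cell content is enlarged):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align}
2+2 = 4 \\
\mbox{\LARGE$\displaystyle 2+3 = 5$}
\end{align}
\end{document}
```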
32 Answered Questions for the topic Feet
02/10/21
#### Square feet math
A rectangular desk's length is 3 times its width. If the perimeter of the rectangle is 16 feet, what is the area in square feet?
02/09/21
#### I have no idea how to find the exact area and approximate area of a circle...
The diameter is 13 ft. 1) The exact area is? Then it asks feet, square feet, or cubic feet. It also says type an exact answer in terms of pi. Use integers or decimals for any numbers in the... more
02/08/21
#### Find the exact circumference of the circle and approximate the circumference using 3.14 as an approximation for pi (full circle)
The line goes through the whole circle and says 21 feet. The exact area is blank (feet, square feet, or cubic feet) (exact number in terms of pi. Integers or decimals for any numbers in the... more
04/08/20
#### Geometry/math problem
How much of the field is being watered? It's a rectangle with 92 circles inside; the diameter is d = 50 ft. Take the area of the rectangle (550 ft × 350 ft = 192,500) minus the area of the circles, then divide.
09/13/18
#### Find the actual length and width of the garden.
The plans for a garden show the dimensions as 8 yards long and 5 yards wide. The actual garden is 5 feet longer and 2 feet narrower than the plans show. Find the actual length and width of the... more
09/07/18
#### how high does the ladder reach on the building in feet?
A ladder of height 11 feet leans against a building so that the angle between the ground and the ladder is t∘, where 0<t<90. In terms of t, how high does the ladder reach on the building in... more
08/01/18
#### A bike wheel has a diameter of 24 inches how many feet will it travel in 7 revolutions
How many feet will a 24 inch bike tire travel in 7 revolutions
02/11/18
#### The perimeter of a rectangle is 360 feet, and the length of the rectangle is thrice the width.
Find the dimensions of the rectangle.
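A worked sketch (not part of the original post): with length = 3 × width, the perimeter equation gives 8 × width = 360.

```python
perimeter = 360
width = perimeter / (2 * (3 + 1))   # 2*(3w + w) = 8w = 360  ->  w = 45 ft
length = 3 * width                  # 135 ft
print(width, length)
assert 2 * (length + width) == perimeter  # sanity check
```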
08/07/17
#### Kristen is flying a kite...
Kristen is flying a kite. The length of the kite string is55 feet and she is positioned 33 feet away from beneaththe kite. About how high is the kite?A) 47 ft B) 45 ft C) 44 ft D) 40 ft I think... more
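The poster's guess of C) 44 ft checks out if we treat the taut kite string as the hypotenuse of a right triangle (the standard assumption for this kind of problem):

```python
import math

# hypotenuse = 55 ft (string), horizontal leg = 33 ft
height = math.sqrt(55**2 - 33**2)   # sqrt(3025 - 1089) = sqrt(1936)
print(height)  # 44.0 -> answer C
```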
07/18/17
#### Need help with real world math problems?
For problems 7 – 8, use the Falling Object Model (Object is dropped not thrown) h = -16t2 + s where h = height, t = time (seconds) and s = initial height. In each problem below, remember, h = 0... more
07/17/17
#### I need help with a word problem
The braking distance d (in feet) of a car is divided into two components. One part depends onreaction time. The number of feet for reaction time is about the same as the speed of the carin miles... more
12/05/16
#### How long does he have until the ball reaches the ground?
Draco steals and throws Neville's red ball straight up at 78 feet per second from about 10 feet above the ground. The ball's height in feet above the ground after t seconds is given by... more
12/02/16
#### find the formula for maximum height and find a sin function
the path traveled by an object (neglecting air resistance) that is projected by an initial height of h0 feet, an initial velocity of v0 feet per second and an initial angle theta is given by... more
11/29/16
#### What are the lengths of the sides of the mirror and the painting?
A square mirror has sides measuring 2 feet less than the sides of a square painting. If the difference between their areas is 32 square feet, find the lengths of the sides of the mirror and painting. more
11/29/16
#### A picture window has a length of 9 feet and a height of 8 feet, with a semicircular cap on each end.
Window Space. A picture window has a length of 9 feet and a height of 8 feet, with a semicircular cap on each end.How much metal trim is needed for the perimeter of the entire window? ___ft How... more
11/28/16
#### What are its dimensions?
Someone installs 96 feet of electric fencing around a rectangular headquarters. If the headquarters covers 540 square feet, what are its dimensions?
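A sketch (not from the original thread): with perimeter 96 ft and area 540 sq ft, the length and width are the two roots of t² − 48t + 540 = 0.

```python
import math

half_perimeter = 96 / 2                  # l + w = 48
area = 540                               # l * w = 540
disc = half_perimeter**2 - 4 * area      # 2304 - 2160 = 144
length = (half_perimeter + math.sqrt(disc)) / 2   # 30 ft
width = (half_perimeter - math.sqrt(disc)) / 2    # 18 ft
print(length, width)
```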
09/12/16
#### Help ME!!!!!!!!!!!!!!
Maggie has a ribbon 27 feet long. What is the length of ribbon in yards? equation: ???? ?? Answer: ???? ??
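For reference (no answer appears in this dump): 3 feet make a yard, so

```python
feet = 27
yards = feet / 3
print(yards)  # 9.0 yards
```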
09/09/16
#### How many feet is needed?
A baseboard molding in a room is being replaced; the room has a rectangular floor with dimensions of 12 feet by 15 feet. The room also has a door that is 3 feet wide. No molding will be placed in... more
Feet
03/30/16
#### a rectangular courtyard measures 20 feet by 15 feet and contains an 8.5-foot square flower garden.
The remaining space is covered by grass. What is the area of the courtyard that is covered by grass
Feet Measure
03/21/16
#### What's the height of the tree?
Use similar triangles to solve. A person who is 5 feet tall is standing 117 feet from the base of a tree, and the tree casts a 126 foot shadow. The person's shadow is 9 feet in length. What is the height... more
12/04/15
#### Help me pretty please! I'm confused
To measure the height of the Washington Monument, a student 6 ft tall measures his shadow to be 4.75 ft. At the same time of day, he measured the shadow of the Washington Monument to be 438 ft... more
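A similar-triangles sketch (the original answer isn't included): the monument's height relates to its shadow as the student's height relates to his.

```python
student_height = 6.0      # ft
student_shadow = 4.75     # ft
monument_shadow = 438.0   # ft

monument_height = monument_shadow * (student_height / student_shadow)
print(round(monument_height, 2))  # about 553.26 ft
```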
11/19/15
#### What would the dimensions of the hexagonal base be?
A farmer plans to build a regular hexagonal corn crib with sides 8 feet high. If he wants the crib to hold up to 1,000 bushels of corn, what would the dimensions of the hexagonal base be? (1,000... more
|
2021-05-07 14:43:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5547232031822205, "perplexity": 1284.4268302780713}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988793.99/warc/CC-MAIN-20210507120655-20210507150655-00591.warc.gz"}
|
https://socratic.org/questions/how-do-you-find-the-amplitude-period-phase-shift-given-y-3sinpix-5cospix
|
# How do you find the amplitude, period, phase shift given y=3sinpix-5cospix?
Oct 17, 2016
Amplitude is $\sqrt{34}$, period is $2$ and phase shift is $\frac{1}{\pi} {\tan}^{- 1} \left(\frac{5}{3}\right)$.
#### Explanation:
We have $y = 3 \sin \pi x - 5 \cos \pi x$ (Note ${3}^{2} + {5}^{2} = 34$)
= $\sqrt{34} \left(\frac{3}{\sqrt{34}} \sin \pi x - \frac{5}{\sqrt{34}} \cos \pi x\right)$
= $\sqrt{34} \left(\sin \pi x \cos \alpha - \cos \pi x \sin \alpha\right)$
= $\sqrt{34} \sin \left(\pi x - \alpha\right)$,
where, as $\sin \alpha = \frac{5}{\sqrt{34}}$ and $\cos \alpha = \frac{3}{\sqrt{34}}$,
$\alpha = {\tan}^{- 1} \left(\frac{5}{3}\right)$
Now, as in $y = r \sin \left(p x + q\right)$
amplitude is $r$, period is $\frac{2 \pi}{p}$ and phase shift is $- \frac{q}{p}$.
In $y = 3 \sin \pi x - 5 \cos \pi x = \sqrt{34} \sin \left(\pi x - \alpha\right)$
amplitude is $\sqrt{34}$, period is $\frac{2 \pi}{\pi} = 2$ and phase shift is $\frac{\alpha}{\pi} = \frac{1}{\pi} {\tan}^{- 1} \left(\frac{5}{3}\right)$.
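A quick numerical check (not part of the original answer): since sin α = 5/√34 and cos α = 3/√34 are both positive, α = arctan(5/3), and the identity can be verified directly:

```python
import math

# y = 3*sin(pi*x) - 5*cos(pi*x) should equal sqrt(34)*sin(pi*x - alpha)
# with alpha = atan(5/3), since sin(alpha) = 5/sqrt(34), cos(alpha) = 3/sqrt(34).
alpha = math.atan(5 / 3)
amplitude = math.sqrt(34)

for x in [0.0, 0.3, 1.7, 2.5]:
    lhs = 3 * math.sin(math.pi * x) - 5 * math.cos(math.pi * x)
    rhs = amplitude * math.sin(math.pi * x - alpha)
    assert abs(lhs - rhs) < 1e-9

# Period check: the function repeats every 2*pi/pi = 2 units.
assert abs((3 * math.sin(math.pi * 0.4) - 5 * math.cos(math.pi * 0.4))
           - (3 * math.sin(math.pi * 2.4) - 5 * math.cos(math.pi * 2.4))) < 1e-9
```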
|
2021-04-13 07:28:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 19, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9792181849479675, "perplexity": 1236.016393622174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072175.30/warc/CC-MAIN-20210413062409-20210413092409-00583.warc.gz"}
|
https://pub.uni-bielefeld.de/publication/2913497
|
# Global gradient estimates for the $p(\cdot)$-Laplacian
Diening L, Schwarzacher S (2014)
Nonlinear Analysis. Theory, Methods & Applications. An International Multidisciplinary Journal 106: 70-85.
No fulltext has been uploaded. References only!
Journal Article | Published | English
Author
Department
Abstract
We consider Calder\'on-Zygmund type estimates for the non-homogeneous $p(\cdot)$-Laplacian system $-\text{div}(|D u|^{p(\cdot)-2} Du) = -\text{div}(|G|^{p(\cdot)-2} G),$ where $p$ is a variable exponent. We show that $|G|^{p(\cdot)} \in L^q(\mathbb{R}^n)$ implies $|D u|^{p(\cdot)} \in L^q(\mathbb{R}^n)$ for any $q \geq 1$. We also prove local estimates independent of the size of the domain and introduce new techniques to variable analysis.
### Cite this
Diening L, Schwarzacher S. Global gradient estimates for the $p(\cdot)$-Laplacian. Nonlinear Analysis. Theory, Methods & Applications. An International Multidisciplinary Journal. 2014;106:70-85.
Diening, L., & Schwarzacher, S. (2014). Global gradient estimates for the $p(\cdot)$-Laplacian. Nonlinear Analysis. Theory, Methods & Applications. An International Multidisciplinary Journal, 106, 70-85. doi:10.1016/j.na.2014.04.006
arXiv 1312.5570
|
2017-09-26 02:10:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20583578944206238, "perplexity": 9827.474601279742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818693940.84/warc/CC-MAIN-20170926013935-20170926033935-00166.warc.gz"}
|
https://www.educative.io/blog/level-up-python-skills?eid=5082902844932096
|
Home/Blog/Languages/Level up your Python skills with these 6 challenges
# Level up your Python skills with these 6 challenges
Oct 28, 2019 - 7 min read
Amanda Fawcett
The best way to learn Python is to practice, practice, practice.
That’s why we’re sharing this article: so you can test out your basic Python skills with these six challenges.
These exercises are useful for everyone, especially if you’re a beginner with basic knowledge of Python concepts.
The solutions are offered on the tab to the right of the challenge. There is a hint for each challenge if you get stuck.
Here are the questions we’ll explore today:
## Challenge #1: Test your basic skills
Your challenge is to write a Python program that prints the following messages. Each of the three inputs should print on a new line.
Hello World
Let's Learn Python
Sincerely, (your name here)
Try it out yourself before looking at the solution!
# write your code here
### Explanation of Challenge #1
First, we use the print statement, which will display the result. We type our text within the parentheses ( ). We have to surround our text with quotation marks since it is a string.
To start our second line of text, we hit enter and repeat this process on line 2 and 3 because each call to print will move the output to a new line. If you’re dealing with numerical input, you won’t need to add quotation marks.
Quick tip: For strings, you can use either double quotation marks (" ") or single (' '). Double quotation marks should be used if your sentence or word makes use of an apostrophe. In our example, "Let's learn Python", if you were to use single quotes you’d receive an error.
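The blog's interactive solution tab isn't reproduced in this dump; a minimal solution consistent with the explanation above is (the name on the last line is a placeholder):

```python
print("Hello World")
print("Let's Learn Python")          # double quotes allow the apostrophe
print("Sincerely, (your name here)")  # substitute your own name
```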
## Challenge #2: Test your knowledge of data types and variables
Your challenge is to write a Python program that calculates and prints the gravitational force between the Earth and the Sun using the grav_force variable.
The formula for gravitational force is as follows:
$F = \frac{GMm}{r^2}$
F is the total gravitation force. G is the gravitational constant. M and m are the masses to be compared, and r is the distance between those masses.
The values you will need to calculate the gravitation force of the Earth and Sun are offered below.
• G = 6.67 x 10^-11
• M_Sun = 2.0 x 10^30
• m_Earth = 6.0 x 10^24
• r = 1.5 x 10^11
Try it out yourself before looking at the answer.
# write your code here
### Explanation of Challenge #2
This challenge is actually simpler than it seems! Don't let the big numbers concern you. We use our arithmetic operators to perform all the operations. The * operator multiplies variables, the / operator divides variables, and the ** operator performs exponentiation.
First, we define each of our values (G, M, m, r) as variables. We then write the equation in Python syntax, store the result in the grav_force variable, and ask the program to print the answer. The parentheses are actually optional: we just used them to separate the top and bottom of the fraction.
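The solution tab isn't included in this text; a minimal sketch using the given constants (which works out to roughly 3.56 × 10^22 N) is:

```python
G = 6.67e-11   # gravitational constant
M = 2.0e30     # mass of the Sun (kg)
m = 6.0e24     # mass of the Earth (kg)
r = 1.5e11     # Earth-Sun distance (m)

grav_force = (G * M * m) / (r ** 2)
print(grav_force)  # roughly 3.56e22 newtons
```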
## Challenge #3: Test your knowledge of conditional statements
Your challenge is to write a Python program that calculates the discounted price of an object using the if-elif-else statement.
Your conditions are as follows:
• If the price is 300 or above, it will be discounted by 30%
• If the price falls between 200 and 300, it will be discounted by 20%
• If the price falls between 100 and 200, it will be discounted by 10%
• If the price is less than 100, it will be discounted by 5%
• There is no discount for negative prices
Your inputs:
price = 350
Try it out yourself before looking at the answer.
# write your code here
### Explanation of Challenge #3
To complete this challenge, we first define the price and then all our conditions using Python syntax. You only need to specify the lowest number of each condition since the higher number is accounted for in the previous condition. We calculate a discounted price with price * (1 - discount).
We then ask the program to print our discounted price.
## Challenge #4: Test your knowledge of functions.
Your challenge is to write a Python program using the rep_cat function, which takes two integers (x and y) and converts them into strings. The string value of x should repeat 8 times, and the string value of y should repeat 5 times. The y string must then be concatenated to the x string and returned as a single piece of data.
Your inputs:
x = 7
y = 2
Try it out yourself before looking at the solution.
# write your code here
### Explanation of Challenge #4
First, we convert the integers to strings using the str( ) method. We then can use the * operator to replicate the strings the required number of times. To link the strings together, we used the + operator, and finally, that new string is returned using the return statement.
We use print to display the final statement.
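A sketch following the explanation above (the solution tab isn't included in this dump):

```python
def rep_cat(x, y):
    # Convert to strings, repeat x 8 times and y 5 times, then concatenate.
    return str(x) * 8 + str(y) * 5

print(rep_cat(7, 2))  # 7777777722222
```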
## Challenge #5: Test your knowledge of loops
Your challenge is to write the Fibonacci function so that it takes a positive number, n, and returns the n-th number in the Fibonacci sequence using loops.
The Fibonacci sequence is a famous mathematical formula where each number in the sequence is the sum of the two numbers before it. The sequence goes as follows: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, etc.
Your input:
n = 10
Try it out yourself before looking at the solution.
# write your code here
### Explanation of Challenge #5
This challenge requires conditional statements. If n is less than 1, it will return -1, and if n is 1 or 2, it returns the first or second value. Note that the first two values are set, as they will always be fixed with this sequence.
We used a while loop to complete the challenge, but a for loop would also get the same result. We use the count variable and start at 3 because we already know the first two values. The two previous terms, second and fib_n become first and second with every iteration of the code.
We then ask the program to print our specified value, n.
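One possible implementation following the explanation above (the 1-based indexing, with fibonacci(1) returning 0, is an assumption since the original solution isn't shown):

```python
def fibonacci(n):
    if n < 1:
        return -1      # invalid input
    if n == 1:
        return 0       # first Fibonacci number
    if n == 2:
        return 1       # second Fibonacci number
    first, second = 0, 1
    count = 3          # the first two values are already fixed
    while count <= n:
        fib_n = first + second
        first, second = second, fib_n
        count += 1
    return fib_n

print(fibonacci(10))  # 34
```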
## Challenge #6: Test your knowledge of data structures
Write a Python program that separates the highs and lows of a list of numbers (num_list) and then returns a list of the number of lows and highs, in that order. You will use the count_low_high() function. These are your conditions:
• If a number is more than 50 or divisible by 3, it is considered high
• If these conditions are not met, the number is considered low
• If the list is empty, it will return none
Your inputs:
num_list = [77, 9, 95, 2, 51, 29, 12, 136]
Try it out yourself before looking at the solution.
# write your code here
### Explanation to Challenge #6
To complete this challenge, you can use the filter() function, which filters the high numbers into one list and the low numbers into another according to the sorting conditions we set. We can then use the len() function, which counts the elements in both lists. It isn't necessary to use lambdas, but they simplify the code: the lambda keyword is a shortcut for declaring small anonymous functions.
We then input our num_list and ask the program to print our new list.
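A sketch consistent with the explanation above (the official solution isn't reproduced in this text):

```python
def count_low_high(num_list):
    if not num_list:
        return None  # empty list
    highs = list(filter(lambda n: n > 50 or n % 3 == 0, num_list))
    lows = list(filter(lambda n: not (n > 50 or n % 3 == 0), num_list))
    return [len(lows), len(highs)]  # lows first, then highs

num_list = [77, 9, 95, 2, 51, 29, 12, 136]
print(count_low_high(num_list))  # [2, 6]
```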
## What to learn next
Well done! You completed all six Python challenges! You’re now a more seasoned and practiced developer. Don’t be discouraged if you got stuck on some of the challenges. The best way to learn is to identify places for improvement and study.
Remember that everyone learns at their own pace and style, so try not to compare your progress to others.
If you want to keep learning, check out our free Learn Python from Scratch course that walks you through all of these concepts in more detail.
Happy learning!
|
2023-03-27 17:39:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19674323499202728, "perplexity": 967.4969977633785}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948673.1/warc/CC-MAIN-20230327154814-20230327184814-00696.warc.gz"}
|
https://papers.neurips.cc/paper/2016/file/291597a100aadd814d197af4f4bab3a7-Reviews.html
|
NIPS 2016
Mon Dec 5th through Sun the 11th, 2016 at Centre Convencions Internacional Barcelona
Paper ID: 642 Proximal Stochastic Methods for Nonsmooth Nonconvex Finite-Sum Optimization
### Reviewer 1
#### Summary
A good paper on randomized incremental gradient methods for non-convex problems.
#### Qualitative Assessment
The authors did an excellent job in extending randomized incremental gradient methods to non-convex composite optimization. Their key contribution was to show that the size of mini-batch should be dependent on the number of terms in order to gain an O(1/n^{1/3}) factor in the complexity bound. The authors should point out to the readers that the SGD method for non-convex optimization was designed for solving stochastic optimization problems possibly with continuous random variables and/or for solving the generalization (rather than empirical) risk minimization in machine learning. Therefore, the numerical results seem to be a bit biased towards SAGA or SVRG since the error measure was empirical. To have a fair comparison, the authors should at least make this point clear in both the introduction and numerical experiment parts
#### Confidence in this Review
2-Confident (read it all; understood it all reasonably well)
### Reviewer 2
#### Summary
The paper studies stochastic algortihms for minimizing a regularized finite sum problem, where the regularizer is nonsmoth but convex; and the functions in the finite sum are smooth but possibly nonconvex. This problem is not well studied and hence understood from a complexity point of view. Previous known results give complexity bounds (to a stationary point defined using the notion of the proximal mapping) for a method with increasing minibatch sizes. The present paper sets out to give the first results which do not require growing minibatches. Linear convergence results are obtained under a Polyak-Łojasiewicz condition. Minibatch proximal versions of the SAGA and SVRG methods are proposed and studied.
#### Qualitative Assessment
The paper is a pleasure to read – excellent writing. The ideas are well explained, the paper proceeds smoothly. The paper is well set in the context of existing results and literature (one small issue is the omission to mention that a minibatch version of Prox SVRG was analysed before in the case of smooth losses – and is known as the mS2GD method; see line 124). The complexity (in terms of number of stochastic gradient evaluations; IFO) for both is shown to be O(n + n^{2/3}/eps), which is better than the O(1/eps^2) rate of ProxSGD and the O(n/eps) rate of ProxGD. The complexity is also improved for PL functions. The new results are a good addition to our knowledge of complexity of stochastic methods for an important class of nonconvex problems relevant to ML. Some previous results are included as special cases of this approach (e.g., [27], [34]). Experiments with non-negative PCA show these methods work about the same as or much better than ProxSGD. Q: Are any lower bounds known for the IFO complexity for this problem? Remark: The word “fast” in the title evokes the use of Nesterov’s “acceleration” – but this is not the case. It is tempting to call a method fast, but if this is to be justified, either extensive computational experiments should be performed against all or most competing methods; or the word should have a special meaning (such as Nesterov’s acceleration). I suggest the word be dropped from the title and replaced by “Complexity of”. Small issues: a) Line 40: to -> to be b) Table 1 contains complexity results for proxSGD under PL assumption. Where have these been proved (citation is missing)? Confusingly, the caption says that no specific complexity results exist. Do they exist or not? c) Line 151: For the -> For d) Line 154: is a -> be a e) Line 236: worser -> worse f) References: [9] : capitalize P and L in PL; [22] appeared in ICML 2015; [25] – capitalize journal name; [26] appeared in Math Prog 2016; [33] appeared; [34] – write SVRG and not svrg.
----- post-rebuttal comments ----- I have read the rebuttal. I have the same opinion of the paper as before and keep my scores.
#### Confidence in this Review
3-Expert (read the paper in detail, know the area, quite certain of my opinion)
### Reviewer 3
#### Summary
This paper analyzes two types of stochastic procedures for solving a very important and general class of optimization problems. In particular, the objective function is a superposition of a non-smooth convex function and another non-convex smooth function. The paper focuses on two important quantities: (1) PO complexity, and (2) IFO complexity. The authors show that the algorithms can provably converge to some stationary points with the desired PO and IFO complexities, in the presence of constant mini-batches. The paper is very well-written and is a pleasure to read. I believe that developing fast stochastic procedures is an important contribution for solving many nonconvex problems. The only part that I think can be improved is to provide more interpretations regarding the particular parametric choices used in the theorems (like m and b). It would be nice to discuss whether a broader range of parameter choices can also lead to similar performance, either theoretically or practically.
#### Qualitative Assessment
1. Theorem 2 chooses b = n^{2/3} and m = n^{1/3} and obtains intriguing performance guarantees. But these two choices of parameters seem a bit mysterious to me. Can the authors provide some intuition / interpretation for such choices of b and m? Can we find a wider range of b and m that can also lead to the desired IFO and PO complexities? 2. It would be good to define or explain the parameters T and b in Algorithm 1. 3. Explain the parameter "epoch length" for the benefits of those readers who are not familiar with this subject. In general, how does it affect the performance? 4. For ProxSAGA, can the authors provide some brief interpretation regarding the analytical difficulty of using just one set (as opposed to two independent sets I_t and J_t)? Which version (one set vs. two sets) performs better in practice? 5. Similar to Comment 1, why does Theorem 4 set b=n^{2/3}? Any particular reason? And is the result sensitive to this particular choice?
#### Confidence in this Review
1-Less confident (might not have understood significant parts)
### Reviewer 4
#### Summary
The authors analyse the rate of convergence of stochastic algorithms with constant minibatch size to a stationary point, in the case where the function is non-smooth and non-convex. They also analyse the convergence rate when the function satisfies the PL inequality (which can be compared to strong convexity in the convex case).
#### Qualitative Assessment
Overall, the article is well written and clear. They explain clearly what the contributions of their work are. However, even if the theoretical analysis is complete, the impact of the results as well as the numerical experiments are not convincing. The authors show the convergence of stochastic algorithms using constant-size minibatches, but the (optimal) size must be of the order of n^{2/3}, which is quite big and comparable to the size of the deterministic gradient method. Moreover, the comparison (deterministic vs stochastic) is not done in the numerical experiments section. Also, the algorithms are unchanged; they just adapted the size of the minibatch to have theoretical guarantees (but we cannot deny that this case is more realistic than having a size-varying minibatch). Also, in the experiments section the authors use a batch size of 1, but they proved that the size n^{2/3} should be better. Moreover, the starting point is too close to the optimal point (f(x0)-f(x*) ~= 10^{-3}, which is an acceptable accuracy in machine learning). They should start from less accurate starting points (especially since the authors insist they proved non-asymptotic rates of convergence, unlike previous works). A few other remarks: - Parameter $m$ not defined - The used quantities, e.g. $m$, $T$, etc., should be explained (in an intuitive way if possible). - The title is way too general ("nonsmooth non-convex optimization") and should be changed (for example, by including the fact that it uses the proximal operator or by mentioning the sum of smooth functions). - The captions in the figures are too small. In conclusion: - The article is very clear and the contributions are well presented. - The theoretical analysis is complete and the theorems are easily understandable. - The ideas are not original, since people already use constant minibatches for practical problems. However, it gives guarantees about the performance of this method.
- The theoretical analysis shows convergence for constant-size minibatches, but the required size is much too large for practical algorithms. - The numerical analysis is not consistent with the theoretical results (it uses a minibatch size of 1 instead of n^{2/3}, and a starting point too close to the optimal point). - Since the batch size is big, there is no comparison with the deterministic gradient method.
#### Confidence in this Review
1-Less confident (might not have understood significant parts)
### Reviewer 5
#### Summary
This paper analyzed the convergence of proximal-SVRG and proximal-SAGA for non-convex problems. Their analysis was then extended to Polyak-Lojasiewicz functions. Last, some numerical experiments were conducted on a non-negative PCA problem.
#### Qualitative Assessment
This paper is well written. Theoretical analysis and numerical experiments are solid. However, I personally think that providing a convergence bound for the proximal versions of some existing methods is not enough, considering NIPS is so competitive now. Though I did not check the proof line by line, I feel the analysis mainly follows [22] and [23], so there is not too much new. This is why I give a low score concerning novelty.
#### Confidence in this Review
3-Expert (read the paper in detail, know the area, quite certain of my opinion)
### Reviewer 6
#### Summary
This paper tackles the challenging problem of minimizing nonconvex, nonsmooth, finite-sum objective functions. The authors adapted the analysis of two recent SGD-based algorithms with variance reduction, namely SVRG and SAGA, to the search for stationary points in the nonconvex case. They provide a theoretical choice for the mini-batch size to ensure faster convergence. They also considered a particular case where the convergence is linear, and gave experimental results.
#### Qualitative Assessment
The article is overall interesting. The introduction is clear and sums up all the recent results proved in the domain. It also gives insight about the differences between the convex setting and the nonconvex setting. My main comment is for Corollary 2 and Corollary 4 about the IFO complexity. Its computation is done too fast and it hides an error. Let's focus on lines 158 to 167. I'm using the notations of Algorithm 1. - First comment: the IFO complexity of one inner loop equals b*m + n. But you have said in Theorem 2 that b = n^{2/3} and m = n^{1/3}, so that the product b*m = n. Then the total IFO complexity should read O( n^{2/3} / \epsilon ). - Second comment: I'm ok with the part O( n^{2/3} / \epsilon ) in the IFO complexity. However, for the second part, the complexity writes S*n=(T/m)*n, and you said on line 165 that since "T is a multiple of m", you can write O( (T/m)*n ) = O( n ), which is not true. T/m may still be large; imagine the case where T = m * n. Keeping the dependency on n and \epsilon, the second term reads O( 1 / \epsilon * n^{-1/3} * n ) = O( n^{2/3} / \epsilon ), which equals the former. These comments don't alter the quality of the bounds proved by the authors. But this wrong complexity occurs many times in the article. Also, I've noticed a self-reference on line 95: "We build upon...".
#### Confidence in this Review
2-Confident (read it all; understood it all reasonably well)
|
2021-09-26 02:48:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7221433520317078, "perplexity": 897.0539905532213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057796.87/warc/CC-MAIN-20210926022920-20210926052920-00633.warc.gz"}
|
http://math.stackexchange.com/questions/436290/why-does-6x-%e2%89%a1-210-x-pmod11-when-0%e2%89%a4x%e2%89%a410
|
# Why does $6^x ≡ 2^{10-x} \pmod{11}$ when $0≤x≤10$?
I was messing around with my calculator earlier today. I graphed the function $6^x \pmod{11}$, noticed a pattern, and "discovered" the following:
$$6^x ≡ 2^{10-x} \pmod{11}$$
This works whenever $x$ is an integer between $0$ and $10$, inclusive. Likewise, these also seem to work:
$$4^x ≡ 3^{10-x} \pmod{11}$$ $$5^x ≡ 9^{10-x} \pmod{11}$$ $$7^x ≡ 8^{10-x} \pmod{11}$$ $$10^x ≡ 1^{10-x} \pmod{11}$$
I have two main questions:
• What causes the pairs $(1,10)$, $(2,6)$, $(3,4)$, $(5,9)$, and $(7,8)$?
• Why do relationships like this exist in the first place?
Because $6\cdot 2 = 12 \equiv 1 \pmod{11}$, and $11$ is prime, so that $a^{11-1} \equiv 1 \pmod{11}$ for all $a$ not divisible by $11$. Your pair $(1,\,10)$ doesn't actually work: $10^3 \equiv 10 \pmod{11}$. – Daniel Fischer Jul 4 '13 at 18:24
With the pair $(1,10)$, I must have accidentally tested only even exponents. – PhiNotPi Jul 4 '13 at 18:40
First note that, for any integer $x$, $$6^x\equiv 2^{10-x}\bmod 11 \;\iff\; 12^x=6^x\cdot 2^x\equiv 2^{10}\bmod 11.$$ Then note that $12\equiv 1\bmod 11$, so that $12^x\equiv 1\bmod 11$ for any $x$, and also that $2^{10}\equiv 1\bmod 11$, which one can compute directly: $$2^{10}=1024=(11\cdot 93)+1\equiv 1\bmod 11$$ or just appeal to Fermat's little theorem.
We can generalize your observation as follows: for any positive integer $n$, and for any integers $a$ and $b$ such that $ab\equiv 1\bmod n$, we have $$a^x\equiv b^{\varphi(n)-x}\bmod n$$ for all $0\leq x\leq \varphi(n)$ (and indeed, for all integers $x$, because we can make sense of the multiplicative inverse of $a$ and $b$ modulo $n$).
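The observation is easy to verify directly with Python's three-argument `pow`, which computes modular exponentiation:

```python
p = 11
pairs = [(6, 2), (4, 3), (5, 9), (7, 8)]       # each pair multiplies to 1 mod 11
for a, b in pairs:
    assert a * b % p == 1                      # a and b are inverses mod 11
    for x in range(p):                         # 0 <= x <= 10
        assert pow(a, x, p) == pow(b, (p - 1) - x, p)

# The pair (10, 1) fails for odd exponents, since 10 = -1 (mod 11):
assert pow(10, 3, p) == 10 != pow(1, 7, p)
```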
|
2015-07-08 00:34:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9794853925704956, "perplexity": 168.53982882075985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375635143.91/warc/CC-MAIN-20150627032715-00261-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://www.theevolvingclassroom.com/youtube-best-workout-videos/
|
# 5 Workout Videos to Keep You Fit at Home
Here we have curated 5 of the best YouTube workout videos that are easy to follow and offer variety.
|
2021-04-20 13:09:14
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9910322427749634, "perplexity": 8764.623320063143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039398307.76/warc/CC-MAIN-20210420122023-20210420152023-00117.warc.gz"}
|
https://www.aimsciences.org/article/doi/10.3934/jimo.2014.10.883
|
# American Institute of Mathematical Sciences
July 2014, 10(3): 883-903. doi: 10.3934/jimo.2014.10.883
## A nonlinear conjugate gradient method for a special class of matrix optimization problems
1 Department of Mathematics and Computer Science, Faculty of Science, Alexandria University, Moharam Bey 21511, Alexandria, Egypt
Received August 2012 Revised July 2013 Published November 2013
In this article, a nonlinear conjugate gradient method is studied and analyzed for finding the local solutions of two matrix optimization problems resulting from the decentralized static output feedback problem for continuous and discrete-time systems. The global convergence of the proposed method is established. Several numerical examples of decentralized static output feedback are presented that demonstrate the applicability of the considered approach.
Citation: El-Sayed M.E. Mostafa. A nonlinear conjugate gradient method for a special class of matrix optimization problems. Journal of Industrial & Management Optimization, 2014, 10 (3) : 883-903. doi: 10.3934/jimo.2014.10.883
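For readers unfamiliar with the method class, the generic nonlinear conjugate gradient iteration can be sketched as below (Polak-Ribière with Armijo backtracking and restarts, on a toy smooth problem; this is only a sketch of the generic method, not the paper's matrix-valued algorithm for feedback-gain design):

```python
# Generic Polak-Ribiere nonlinear CG with Armijo backtracking and restarts.
def ncg(f, grad, x, iters=500, tol=1e-12):
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        gd = sum(gi * di for gi, di in zip(g, d))
        if gd >= 0:                               # not a descent direction:
            d = [-gi for gi in g]                 # restart with steepest descent
            gd = -sum(gi * gi for gi in g)
        t, fx = 1.0, f(x)
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * gd:
            t *= 0.5                              # Armijo sufficient decrease
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        if sum(gi * gi for gi in g_new) < tol:
            break
        beta = max(0.0, sum(gn * (gn - go) for gn, go in zip(g_new, g))
                        / sum(go * go for go in g))   # PR+ beta (restart at 0)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

f = lambda v: (v[0] - 1) ** 2 + 10 * (v[1] + 2) ** 2   # minimum at (1, -2)
grad = lambda v: [2 * (v[0] - 1), 20 * (v[1] + 2)]
x = ncg(f, grad, [0.0, 0.0])
assert abs(x[0] - 1) < 1e-3 and abs(x[1] + 2) < 1e-3
```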
##### References:
[1] A. G. Aghdam, E. J. Davison and R. Becerril-Arreola, Structural modification of systems using discretization and generalized sampled-data hold functions,, Automatica, 42 (2006), 1935. doi: 10.1016/j.automatica.2006.06.005. Google Scholar [2] M. Aldeen and J. F. Marsh, Decentralized observer-based control scheme for interconnected dynamical systems with unknown inputs,, IEEE Proc. Control Theory Appl., 146 (1999), 349. Google Scholar [3] Z. Artstein, Linear systems with delayed controls: A reduction,, IEEE Transactions on Automatic Control, 27 (1982), 869. doi: 10.1109/TAC.1982.1103023. Google Scholar [4] Y.-Y. Cao and J. Lam, A computational method for simultaneous LQ optimal control design via piecewise constant output feedback,, IEEE Transaction on Systems, 31 (2001), 836. Google Scholar [5] Z.-F. Dai, Two modified HS type conjugate gradient methods for unconstrained optimization problems,, Nonlinear Analysis, 74 (2011), 927. doi: 10.1016/j.na.2010.09.046. Google Scholar [6] Z. Gong, Decentralized robust control of uncertain interconnected systems with prescribed degree of exponential convergence,, IEEE Transaction on Automatic Control, 40 (1995), 704. doi: 10.1109/9.376105. Google Scholar [7] W. W. Hager and H. Zhang, A survey of nonlinear conjugate gradient methods,, Pacific Journal of Optimization, 2 (2006), 35. Google Scholar [8] M. Ikeda, Decentralized control of large scale systems,, in Three Decades of Mathematical System Theory, (1989), 219. doi: 10.1007/BFb0008464. Google Scholar [9] M. S. Mahmoud, M. F. Hassan and S. J. Saleh, Decentralized structures for a stream water quality control problems,, Optimal Control Applications & Methods, 6 (1985), 167. doi: 10.1002/oca.4660060209. Google Scholar [10] D. Jiang and J. B. Moore, A gradient flow approach to decentralized output feedback optimal control,, Systems & Control Letters, 27 (1996), 223. doi: 10.1016/0167-6911(96)80519-6. Google Scholar [11] K. H. Lee, J. H. Lee and W. H. 
Kwon, Sufficient LMI conditions for $H_\infty$ output feedback stabilization of linear discrete-time systems,, IEEE Transactions on Automatic Control, 51 (2006), 675. doi: 10.1109/TAC.2006.872766. Google Scholar [12] F. Leibfritz, COMPlib: COnstraint Matrix-Optimization Problem library-A Collection of Test Examples for Nonlinear Semi-Definite Programs, Control System Design and Related Problems,, Technical Report, (2004). Google Scholar [13] T. Liu, Z.-P. Jiang and D. J. Hill, Decentralized output-feedback control of large-scale nonlinear systems with sensor noise,, Automatica J. IFAC, 48 (2012), 2560. doi: 10.1016/j.automatica.2012.06.054. Google Scholar [14] W. Q. Liu and V. Sreeram, New algorithm for computing LQ suboptimal output feedback gains of decentralized control systems,, Journal of Optimization Theory and Applications, 93 (1997), 597. doi: 10.1023/A:1022647230641. Google Scholar [15] P. M. Mäkilä and H. T. Toivonen, Computational methods for parametric LQ problems-a survey,, IEEE Transactions on Automatic Control, 32 (1987), 658. doi: 10.1109/TAC.1987.1104686. Google Scholar [16] E. M. E. Mostafa, A trust region method for solving the decentralized static output feedback design problem,, Journal of Applied Mathematics & Computing, 18 (2005), 1. doi: 10.1007/BF02936553. Google Scholar [17] E. M. E. Mostafa, Computational design of optimal discrete-time output feedback controllers,, Journal of the Operations Research Society of Japan, 51 (2008), 15. Google Scholar [18] E. M. E. Mostafa, On the computation of optimal static output feedback controllers for discrete-time systems,, Numerical Functional Analysis and Optimization, 33 (2012), 591. doi: 10.1080/01630563.2012.661381. Google Scholar [19] E. M. E. Mostafa, A conjugate gradient method for discrete-time output feedback control design,, Journal of Computational Mathematics, 30 (2012), 279. doi: 10.4208/jcm.1109-m3364. Google Scholar [20] E. M. E. 
Mostafa, Nonlinear conjugate gradient method for continuous time output feedback design,, Journal of Applied Mathematics and Computing, 40 (2012), 529. doi: 10.1007/s12190-012-0574-8. Google Scholar [21] W. Nakamura, Y. Narushima and H. Yabe, Nonlinear conjugate gradient methods with sufficient descent properties for unconstrained optimization,, Journal of Industrial and Management Optimization, 9 (2013), 595. doi: 10.3934/jimo.2013.9.595. Google Scholar [22] P. R. Pagilla and Y. Zhu, A decentralized output feedback controller for a class of large-scale interconnected nonlinear systems,, ASME J. Dynam. Syst. Meas. Control, 127 (2004), 167. doi: 10.1115/1.1870047. Google Scholar [23] T. Rautert and E. W. Sachs, Computational design of optimal output feedback controllers,, SIAM Journal on Optimization, 7 (1997), 837. doi: 10.1137/S1052623495290441. Google Scholar [24] M. Saif and Y. Guan, Decentralized state estimation in large-scale interconnected dynamical systems,, Automatica J. IFAC, 28 (1992), 215. doi: 10.1016/0005-1098(92)90024-A. Google Scholar [25] D. D. Šiljak, Decentralized Control of Complex Systems,, Mathematics in Science and Engineering, (1991). Google Scholar [26] D.D. Šiljak and D. M. Stipanović, Robust stabilization of nonlinear systems: The LMI approach,, Math. Problems Eng., 6 (2000), 461. doi: 10.1155/S1024123X00001435. Google Scholar [27] V. L. Syrmos, C. T. Abdallah, P. Dorato and K. Grigoriadis, Static output feedback-a survey,, Automatica J. IFAC, 33 (1997), 125. doi: 10.1016/S0005-1098(96)00141-0. Google Scholar [28] S. Tong, Y. Li and T. Wang, Adaptive fuzzy decentralized output feedback control for stochastic nonlinear large-scale systems using DSC technique,, International Journal of Robust and Nonlinear Control, 23 (2013), 381. doi: 10.1002/rnc.1834. Google Scholar [29] R. J. Veilette, J. V. Medanić and W. R. Perkins, Design of reliable control systems,, IEEE Transaction on Automatic Control, 37 (1992), 290. doi: 10.1109/9.119629. 
Google Scholar [30] Z. Wei G. Li and L. Qi, New nonlinear conjugate gradient formulas for large-scale unconstrained optimization problems,, Applied Mathematics and Computation, 179 (2006), 407. doi: 10.1016/j.amc.2005.11.150. Google Scholar [31] G. Yu, L. Guan and Z. Wei, Globally convergent Polak-Ribière-Polyak conjugate gradient methods under a modified Wolfe line search,, Applied Mathematics and Computation, 215 (2009), 3082. doi: 10.1016/j.amc.2009.09.063. Google Scholar [32] G. Zhai, M. Ikeda and Y. Fujisaki, Decentralized Hinf controller design: A matrix inequality approach using a homotopy method,, Automatica J. IFAC, 37 (2001), 565. doi: 10.1016/S0005-1098(00)00190-4. Google Scholar [33] L. Zhang, W. Zhou and D. Li, Some descent three-term conjugate gradient methods and their global convergence,, Optimization Methods and Software, 22 (2007), 697. doi: 10.1080/10556780701223293. Google Scholar
|
2019-08-26 03:56:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6344352960586548, "perplexity": 5791.985709394323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330962.67/warc/CC-MAIN-20190826022215-20190826044215-00339.warc.gz"}
|
https://wiki.bnl.gov/rhicspin/Run_11_target_usage
|
# Run 11 target usage
Last modified by Dmitri Smirnov on 10-07-2012
For latest results and plots go to http://www.phy.bnl.gov/cnipol/summary/
For target thickness and other parameters go to Polarimetry/pC#Targets
A report on a target stress testing study is also available: doc
Approximate number of sweep measurements performed with each target
| Target | 1 H | 1 V | 2 H | 2 V | 3 H | 3 V | 4 H | 4 V | 5 H | 5 V | 6 H | 6 V | All H | All V |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Blue-1 Upstream | 81 | 297 | 1 | 1 | 333 | 6 | 74 | 0 | 61 | 0 | 224 | 0 | 777 | 304 |
| Yellow-1 Downstream | 249 | 16 | 4 | 4 | 56 | 15 | 98 | 7 | 113 | 1 | 65 | 1 | 585 | 44 |
| Blue-2 Downstream | 175 | 49 | 3 | 4 | 41 | 118 | 1 | 75 | 6 | 19 | 1 | 219 | 227 | 484 |
| Yellow-2 Upstream | 74 | 355 | 6 | 8 | 52 | 178 | 1 | 5 | 41 | 8 | 17 | 116 | 191 | 670 |
Blue-1 Upstream: only 250 GeV measurements are shown. Measurements with vertical targets are shown with down-pointing triangles $\bigtriangledown$, whereas measurements with horizontal targets are shown with up-pointing triangles $\bigtriangleup$.
Blue-1 Upstream: only 24 GeV measurements are shown. Measurements with vertical targets are shown with down-pointing triangles $\bigtriangledown$, whereas measurements with horizontal targets are shown with up-pointing triangles $\bigtriangleup$.
Blue-1 Upstream: only 100 GeV measurements are shown. Measurements with vertical targets are shown with down-pointing triangles $\bigtriangledown$, whereas measurements with horizontal targets are shown with up-pointing triangles $\bigtriangleup$.
# Summary from online
The following information is provided by Haixin.
```Blue1
H1 15081-15251 02/05 12:15PM - 02/28 12:43PM
H3 2/26 15111-15240 3/7 5:33PM-2/26 7:13AM
H5 3/29 15343-15368 3/25 9:14pm- 3/29 6:03PM
H2 3/29 15368-15368 3/29 9:10PM
H4 4/4 15368-15393 3/29 11:07PM-4/4 12:38PM
H6 15394-15475 4/4 12:44PM-4/19 10:01AM
V1 14807-15342 1/13 12:20AM-3/25 6:00PM
V3 14890-14891 only used for emit. measurements.
V5
V2
V4
V6
Blue2
V1 2/21 14807-15208 1/13 12:42AM- 2/21 5:53PM
V3 3/29 15210-15366 2/21 10:51PM-3/29 2:37AM lost in ramp measurement
V5 3/31 15366-15370 3/29 2:47AM-3/30 4:57AM
V2 3/30 15371 lost w/o use
V4 4/5 15373-15399 3/31 6:45AM-4/5 1:39PM
V6 15399-15475 4/5 4:51PM- 4/19 9:37AM
H1 3/23 14803-15325 1/10 10:48AM- 3/23 5:46AM
H3 15331-15342 3/23 01:11AM-3/25 6:03PM
H5 used on 3/29, too loose
H2
H4
H6
Yellow1
H1 2/27 14890-15246 1/22 1:17PM- 2/27 9:06AM
H3 3/3 15249-15267 2/27 4:56PM-3/3 10:30PM
H5 3/25 15267-15338 3/3 10:36PM-3/25 2:28AM
H2 3/25 lost w/o use 15338 3/25
H4 3/27 15359-15410 3/27 11:27AM- 4/7 12:41PM
H6 15378-15475 4/1 12:40AM- 4/18 10:27PM
V1 3/26 15221-15350 2/23 6:59PM-3/26 6:48PM
V3 3/26 15221 lost w/o use
V5 3/26 15353 lost w/o use
V2 3/26 15353 lost w/o use
V4 3/26 15356 lost w/o use only; used 2/13 for three measurements
V6 15166 2/13 6:54PM used once
Yellow2
V1 3/2 15015-15257 1/30 10:40AM-3/2 4:37PM
V3 3/30 15261-15369 3/3 4:06AM- 3/30 3:24AM
V5 3/30 15369-15370 3/30 3:41AM-3/30 8:05AM
V2 3/31 lost w/o use
V4 3/31 lost w/o use
V6 4/8 15375-15419 3/31 10:02AM- 4/8 1:44AM
H1 4/12 15342-15357 (3/25-3/27) 15424-15443 4/8 7:10PM-4/12 3:54PM
H3 4/16 15443-15466 4/12 4:05PM- 4/16 5:12AM
H5 15466-15473 4/13 7:58AM- 4/18 11:03AM
H2
H4
H6 15221 used in APEX 2/23 6:33PM-9:31PM
```
# Loose Targets
This information is provided by Billy Christie:
```B1V2 (10-187) Prerun comments: No comment Postrun comments: Very Loose
B1V3 (10-141) Prerun comments: Little Loose Postrun comments: Very Loose
B2H3 (10-200) Prerun comments: No comment Postrun comments: Very Loose
B1V1 (10-168) Prerun comments: Little Loose Postrun comments: N/a (the target did not survive the run)
```
There are only a few measurements with B1V2 and B1V3; no good sweep measurements are available with these two targets.
B2H3 indeed may have been giving a non-Gaussian intensity profile in sweep measurements [1] [2] [3].
B1V1 was giving nice Gaussian intensity profiles most of the time. [4] [5]
|
2022-05-27 21:49:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4662932753562927, "perplexity": 3085.8687485965293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663006341.98/warc/CC-MAIN-20220527205437-20220527235437-00446.warc.gz"}
|
https://koaning.io/til/2021-10-29-learning-to-place/
|
# TIL: Learning to Place
Classification as a Heavy-Tail Regressor
Vincent Warmerdam koaning.io
2021-10-29
I haven’t benchmarked this idea, but it sounds like it might work.
## Heavy Tails
Let’s say that you want to have a regression algorithm on a dataset that has a large skew. Many values may be zero, but there’s a very long tail too. How might we go about regressing this?
We could … turn it into a classification problem instead.
## Classifier
Let’s say that we have an ordered dataset. Let’s say that item 1 has the smallest regression value and item $$n$$ has the highest value. That means that;
$y_1 \leq y_2 \leq ... \leq y_{n-1} \leq y_n$ Let’s now say we have a new datapoint $$y_{new}$$. Maybe we don’t need to perform regression. Maybe we just need to care about if $$y_{new} \leq y_1$$. If it is, we just predict $$y_{new} = y_1$$. If it’s not, we try $$y_1 \leq y_{new} \leq y_2$$. If that’s not it, we can try $$y_2 \leq y_{new} \leq y_3$$
This turns the problem on it’s head. We’re no longer worrying about how heavy the tail could be. Instead we’re wondering where in the order of our training data our new datapoint is. That means that we can use classification!
Given that we’ve trained a classifier that can be used to detect order, we can now use it as a heuristic to order new data.
https://www.physicsforums.com/threads/exterior-calculus-what-about-symmetric-tensors.339545/
# Exterior calculus: what about symmetric tensors?
1. Sep 22, 2009
### mrentropy
Hi all,
Quick question I haven't been able to find the answer to anywhere:
Can I use exterior calculus for symmetric tensors?
I'm familiar with the exterior calculus approach to things like Stokes's theorem and Gauss's law, but that's vector stuff. It seems to me the only tensors in exterior calculus are anti-symmetric tensors. This is fine. I understand the wedge product, so this makes sense.
The problem is my tensors aren't anti-symmetric, they're symmetric. I do lots of things with rank-2 symmetric tensor fields in flat space. Perfectly pedestrian things like the viscous stress tensor in a fluid, the stress in a solid, and the Maxwell stress; all of this is nonrelativistic BTW.
So is exterior calculus totally useless for what I do, or am I missing something?
Thanks,
Peter
2. Sep 22, 2009
### tiny-tim
Hi Peter!
Yup!
3. Sep 22, 2009
### lurflurf
There is an analogous theory, but exterior calculus will not itself be very helpful.
4. Sep 23, 2009
### Phrak
Are you sure? Say I only care about the elements of the symmetric tensors in an equation. Is it possible to recast a tensor equation involving symmetric tensors into one involving skew symmetric tensors?
5. Sep 23, 2009
### jambaugh
Here is the context in which you can "appreciate" exterior and other products of vectors and tensors.
An N-dimensional vector is an element of the fundamental representation of the Lie group GL(N) of invertible linear operators acting on a vector space. (Think of the set of NxN invertible matrices). This group GL(N) is the group of automorphisms of the vector space i.e. the group of transformations preserving linear combinations.
Now if you define an abstract product $\otimes$ acting on vectors, and products of vectors and products of products of vectors etc, so that it generates an algebra then only hold it to the conditions that it
a.) be bilinear i.e. the product respects the procedure of taking linear combinations
$$X\otimes (aY + bZ) = a(X\otimes Y) + b(X\otimes Z)$$
and likewise for $(aY + bZ) \otimes X$; and
b.) that it be associative,
$$(X\otimes Y)\otimes Z \equiv X\otimes(Y\otimes Z)$$
then you have what is called the free associative product on a linear space. I know this is pretty abstract but bear with me. Here is the thing: the "free product" for the algebra generated by elements of a vector space is the tensor product. What is more, it is a way to generate other representations of the group GL(N) acting on the space.
Now the point of this is, the free product generates higher order objects from the vectors and these are the tensors. Also these tensors are other objects upon which the automorphism group GL(N) acts. (Since the tensor product respects linear combinations and so does GL(N) the action of GL(N) on tensor products of vectors still respects linearity.) In fact one way to define the vectors as a fundamental representation is that any finite dimensional representation of GL(N) can be found as a sub-representation (invariant subspace) of one of the tensor representations obtained by this free product. But in general these tensors are not irreducible in that the action of GL(N) on a space of tensors can be broken down into separate actions on subspaces of specific types of tensors. For example GL(N) will map anti-symmetric tensors to anti-symmetric tensors, and symmetric tensors to symmetric tensors. There is a whole big subject of representation theory for this group which delves into enumerating the irreducible parts.
OK. Here is the punch line. Any other reasonable/useful products we define on a set of objects can generally be expressed by giving the free product modulo a set of identities. This is how you define for example the "wedge product" or outer product of vectors and anti-symmetric tensors. You impose the anti-symmetry property. You can likewise impose a symmetry property defining a product yielding symmetric tensors. In particular these two products end up generating (from the vectors) classes of irreducible tensor representations. (Think of the identities as filtering out all but one irreducible part of the generally reducible tensor representations.)
There are other weirder products you can define but these two are special in that they generate uniform classes of irreducible representations. This partly because the defining identities themselves make no basis dependent references to the spaces and so are themselves unaffected by the automorphism group GL(N).
You are already familiar with the antisymmetric (wedge) products generating the totally antisymmetric tensors but --though you may not realize it-- you are even more familiar with the totally symmetric product. This total symmetry translates to commutativity and so the totally symmetric tensors on an N dimension space equates to the set of polynomials of N variables. Identify the degree k term coefficients of a polynomial with the coefficients of a rank k totally symmetric tensor. (The variables themselves correspond to the basis elements.)
OK. Now that I've taken you through China just to get to the store up the street, I hope by the tour I have given you a feel for the context by which the following emerges. The totally symmetric correspondent of the exterior calculus is just the standard calculus of analytic functions of N variables. (Recall that analytic functions have power series expansions and thus you can think of them as infinite degree polynomials or infinite sums of totally symmetric tensors.)
But Wait There's More! OK the generating object in all this was the Vector Space which is basically defined by linearity, i.e. linear combinations of vectors yields vectors. It had the automorphism group, the group of general linear transformations GL(N). You can impose additional structure on the space e.g. give it a metric. I could go into indefinite metrics such as we have in Minkowski space-time but lets stick to Euclidean spaces for now. The additional structure reduces the number of transformations which preserve it so you get a smaller group of automorphism, the orthogonal group O(N). (If you allow indefinite metrics you get O(p,n) p+n=N).
I bring this up because the metric corresponds to an inner product on the space. Also given more structure you get less automorphisms and more possible products preserving them. In particular you can extend the outer product (Grassmann product) to include also an inner product term since the inner product is also invariant under the smaller group. This is the Clifford product.
In the way that outer (Grassmann) and symmetric (commutative) products naturally express the linear structure, Clifford products and Clifford algebras naturally incorporate the additional metric structure.
OK I'm done now. I know I've laid a bunch of abstract stuff which is difficult to absorb in one reading but treat it as a source for references to further study.
Ultimately I'm describing the category structure of vector spaces. You have a category of objects and the automorphism structure. You can then look at the free associative products preserving structure and how the automorphisms map under it. Then impose identities and see what you get. This is how you generate the "calculus" over this category.
6. Sep 23, 2009
### jambaugh
OK, I got a bit overzealous. Could you give an example problem to see what you'd like to "recast"?
7. Sep 23, 2009
### mrentropy
Sure: the Navier-Stokes equations, starting with writing out explicitly the viscous stress tensor in a Newtonian fluid. Actually what I'd really like is compressible N-S with MHD, but I think if I see straight up NS I could figure out how to do the rest. :)
I just stepped in the door so haven't yet had the chance to read all the detailed replies. Thanks in advance.
8. Sep 23, 2009
### jambaugh
I'm sorry I'm not quite clear on the actual mathematics problem. You're trying to express the viscous stress tensor? You're trying to find a particular one for a given solution? You're trying to solve Navier-Stokes with certain boundary conditions?
9. Sep 23, 2009
### jambaugh
See if this may help...
One can use "dyadics" to represent vectors and tensors in terms of a standard basis:
$$\mathbf{r} = x\mathbf{i} + y\mathbf{j} + z\mathbf{k}$$
$$\mathbf{u}\wedge \mathbf{v} = (u_1 v_2 - u_2 v_1)\mathbf{i}\wedge\mathbf{j} + \cdots$$
Think of the outer (wedge) product of vectors as the commutator of the tensor product:
$$\mathbf{i}\wedge \mathbf{j} = \mathbf{i}\otimes\mathbf{j} - \mathbf{j}\otimes\mathbf{i}$$
Except be sure you totally antisymmetrize multiple products instead of just taking straight commutator. The commutator product is not associative but if you totally antisymmetrize completely you recover associativity.
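Not in the original post, but this antisymmetrization is easy to sanity-check numerically with numpy (the two vectors are arbitrary example values):

```python
import numpy as np

# The wedge product of two vectors as the antisymmetrized tensor
# product from the post: u ^ v = u (x) v - v (x) u.
def wedge(u, v):
    return np.outer(u, v) - np.outer(v, u)

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])

A = wedge(u, v)
print(A[0, 1])                 # u1*v2 - u2*v1 = 1.0 (coefficient of i^j)
print(np.allclose(A, -A.T))    # True: the result is totally antisymmetric
```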
To express symmetric tensors you can define a similar totally symmetrized product. In the typical dyadic formulation the tensor product is written just as adjacency ij but here I'm including it explicitly. You can then use ij to be the totally symmetric product i.e. commuting product. Thus you can express a totally symmetric rank 2 tensor as:
$$\mathbf{S} = S_{xx} i^2 + S_{yy}j^2 + S_{zz}k^2 + S_{xy}ij + S_{xz}ik + S_{yz}jk$$
Now if you want to take say a dot product you'll need to expand this symmetric product of distinct basis vectors (but not of powers to avoid factor of two issues).
$$\mathbf{i}\cdot ij = \mathbf{i}\cdot( \mathbf{i}\otimes \mathbf{j} + \mathbf{j}\otimes \mathbf{i}) = \mathbf{j}+ 0$$
In terms of the calculus just use the normal:
$$\nabla = \partial_x \mathbf{i} + \partial_y \mathbf{j} + \partial_z\mathbf{k}$$.
But be very careful about mixing wedge and symmetric products. Expand and then apply.
If you are working on paper you may want to always put symmetrized products of basis vectors in parentheses and shorten the tensor product to just adjacency for conciseness:
$$(\mathbf{i}\mathbf{j})= \mathbf{ij} + \mathbf{ji} \equiv \mathbf{i}\otimes\mathbf{j} + \mathbf{j}\otimes\mathbf{i}$$
Another convention is to put totally antisymmetrized products in square brackets:
$$[uv] = uv - vu$$
$$[uvw] = uvw - uwv -vuw -wvu + vwu + wuv$$
Again the symmetric tensors are equivalent to the polynomials (rank = degree) on the basis treated as the variables. So expanding symmetrized products is relatively easy. You'll also be able to apply calculus in the standard fashion. But be careful to explicitly identify which type of dyadic product each derivative operator acts through: e.g. the straight gradient is a tensor product, so expand symmetric or outer products in terms of tensor products first. Wedge products applied to anti-symmetric tensors yield anti-symmetric tensors, so totally antisymmetrize.
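The symmetric-dyadic rules above can be checked concretely in numpy (a small illustration, not part of the original post; the vector r is an arbitrary example):

```python
import numpy as np

# Symmetric dyadic as in the post: (ij) = i(x)j + j(x)i, plus the
# dot-product rule i . (ij) = j -- contracting a basis vector with a
# symmetric dyadic is just a matrix-vector product.
i_, j_, k_ = np.eye(3)

S = np.outer(i_, j_) + np.outer(j_, i_)      # the symmetrized product (ij)
print(i_ @ S)                                # [0. 1. 0.] = j, as claimed

# Symmetric rank-2 tensors <-> degree-2 polynomials: the quadratic form
# r @ S @ r is the polynomial 2*S_xy*x*y evaluated at r = (x, y, z).
r = np.array([2.0, 3.0, 5.0])
print(r @ S @ r)                             # 2 * 1 * 2 * 3 = 12.0
```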
As far as applying Stokes type formulas there may be something you can do with symmetrized forms. Let me see what I can work out.
10. Sep 23, 2009
### mrentropy
You're right, and I'll definitely do that.
OK so the reason for all of this is in a nutshell is that I'm trying to figure out if it's worth learning exterior calculus, specifically for doing numerical simulations (discrete exterior calculus, aka DEC), for fluid dynamics simulations. I haven't found any that do viscous flow, just Euler flow, the difference being that in the latter case you don't have to solve a stress tensor.
(BTW I first learned Stokes's and Gauss's from the exterior calc perspective so it's not totally foreign to me.)
I've found it helpful to derive correct finite difference / finite element discretizations using DEC approaches for simple things like Poisson's equation, but nothing sophisticated like Navier-Stokes.
So it would be nice to know how to express the Stokes viscous term - and then the full Navier-Stokes - in the language of exterior calculus.
The Stokes viscous term is that the viscous stress tensor is:
$\sigma'_{\alpha \beta} = \eta (v_{\alpha;\beta} + v_{\beta;\alpha} - \frac{2}{3}g_{\alpha\beta}v^\gamma_{\ \ ;\gamma}) + \xi g_{\alpha\beta}v^\gamma_{\ \ ;\gamma}$
I've taken this from Landau & Lifshitz and put it in covariant form instead of Cartesian index notation, but anyway there you go. (I use semicolons to denote covariant differentiation: indices that appear following a ";" are differentiated upon.)
$v_{\alpha}$ is the fluid velocity, and $v_{\alpha;\beta}$ is its covariant derivative. Obviously $\sigma'_{\alpha \beta}$ is symmetric. Oh yes and $\eta$ and $\xi$ are constants (first and second viscosities).
From this one then forms the full stress tensor
$\Pi_{\alpha\beta} = P g_{\alpha\beta} + \rho v_\alpha v_\beta - \sigma'_{\alpha \beta}$
where $P$ is the pressure and $\rho$ is the fluid density. Then the NS equations (i.e. momentum conservation) come from taking the divergence of this - i.e. d/dt of momentum density (LHS) is equal to the divergence of the momentum flux tensor $\Pi_{\alpha \beta}$ (which is a symmetric tensor), on the RHS:
$\frac{\partial}{\partial t} \left( \rho v_\alpha \right) = - \Pi_{\alpha}{}^{\gamma}{}_{;\gamma}$
Of course that's all pretty complicated but I'd be happy for starters just to have the foggiest notion how to write the viscous stress tensor for an incompressible fluid, which is pretty simple. In that case,
$\sigma'_{\alpha \beta} = \eta \left( v_{\alpha;\beta} + v_{\beta;\alpha} \right)$
since $v^\gamma_{\ \ ;\gamma} = 0$.
Anyway that's what I'm after. Hope that makes sense. Thanks so much!
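For the incompressible Cartesian case, here is a minimal numpy finite-difference sketch (not from the thread; the shear field v = (y, 0) is an assumed test flow chosen so that the answer is known exactly: σ_xy = η, all other components zero):

```python
import numpy as np

# sigma_ab = eta * (d_a v_b + d_b v_a) on a 2-D grid, central differences.
eta = 0.7
n = 32
h = 1.0 / (n - 1)
y = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(y, y, indexing="ij")     # x along axis 0, y along axis 1

v = np.stack([Y, np.zeros_like(Y)])         # v_x = y, v_y = 0

# grad[a, b] = d v_b / d x_a
grad = np.stack([np.stack(np.gradient(v[b], h, h)) for b in range(2)], axis=1)
sigma = eta * (grad + grad.transpose(1, 0, 2, 3))   # symmetrize

print(np.allclose(sigma[0, 1], eta))   # True: sigma_xy = eta everywhere
print(np.allclose(sigma[0, 0], 0.0))   # True: no normal stresses
```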
11. Sep 23, 2009
### mrentropy
Oh yes and $g_{\alpha \beta}$ is the metric, but you knew that.
Maybe a simpler example to start is this: The Euler equation (setting the density $\rho$ equal to 1) can be written:
$\partial_t {\vec v} + {\vec v} \cdot \nabla {\vec v} = - \nabla P$
Nevermind how you solve for $P$ at the moment. Anyway the second term on the LHS is equivalent to setting a stress tensor equal to
$\overleftrightarrow{\Pi} = {\vec v} {\vec v}$
and then taking its divergence,
$\partial_t {\vec v} = - \nabla \cdot \overleftrightarrow{\Pi} - \nabla P$
assuming that we're incompressible (since density is constant) so $\nabla \cdot {\vec v} = 0$.
Any ideas how one would write this in the language of exterior calculus?
(There's supposed to be a double-headed arrow over the $\Pi$. I can rewrite in Cartesian index notation:
$\Pi_{ij} = v_i v_j$
$\partial_t v_i = -\nabla_k \Pi_{ki} -\nabla_i P$
Sorry to bounce back and forth in the notation.)
You can also add the pressure into the stress tensor definition but nevermind that for the moment.
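The claim that the advective term equals the divergence of Π for incompressible flow can be verified numerically (an added check, not part of the original post; the Taylor–Green-style field is an assumption chosen because it is exactly divergence-free):

```python
import numpy as np

# Verify (v . grad)v = div(v v) when div v = 0, on a periodic grid.
n = 256
h = 2 * np.pi / n
x = np.arange(n) * h
X, Y = np.meshgrid(x, x, indexing="ij")
v = np.stack([np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y)])

def ddx(f, axis):                       # periodic central difference
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * h)

div_v = ddx(v[0], 0) + ddx(v[1], 1)
adv = np.stack([sum(v[k] * ddx(v[i], k) for k in range(2)) for i in range(2)])
div_Pi = np.stack([sum(ddx(v[k] * v[i], k) for k in range(2)) for i in range(2)])

print(np.max(np.abs(div_v)) < 1e-12)          # True: incompressible
print(np.max(np.abs(div_Pi - adv)) < 1e-2)    # True, up to O(h^2) error
```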
P.S. jambaugh, I just now noticed your second post. This makes much more sense to me than the first (I'm sure it was valid, I just don't yet understand it). I think taking a quick look at what you wrote I might be able to figure it out. It's definitely a big help. I'll give it a shot and let you know if/what I come up with.
Last edited: Sep 23, 2009
12. Sep 23, 2009
### Phrak
I don't believe I have a solid example of a tensor equation that doesn't appear to be antisymmetric, but is.
One candidate is Maxwell's equations. Are these antisymmetric equations in three dimensions? I haven't considered it, actually.
In any case, in 4 dimensions we have $d{*}dA = -J$, where the scalars found in Maxwell's equations are elements of antisymmetric tensors. A is the 1-form 4-potential. I must say, this is not the sort of thing I had in mind when asking about 'recasting', but I suppose it's worth considering.
The vacuum wave equation is another one we might consider.
The d'Alembertian equations $\Box E_i = 0$ and $\Box B_i = 0$ don't look antisymmetric. But in forms, they are: $d{*}d{*}F = 0$.
Personally, I would be more interested in knowing whether the scalar elements of an equation involving symmetric tensors, such as those mrentropy had in mind, could be found within an equation involving antisymmetric tensors without invoking higher dimensions.
Last edited: Sep 23, 2009
13. Sep 24, 2009
### jambaugh
You don't need exterior calculus as you have it all in the general tensor calculus and you seem to be familiar with index notation. I don't think you'll find any magic bullets in the exterior calculus but --yes-- I'd say it is worth learning. Find a good book on Differential Geometry and it will cover both. In any relevant application there will be times when using the exterior calculus is helpful but it is a subset of the more general tensor calculus and you need to be able to fall back on that.
The principal arena where exterior calculus is useful is in defining differential forms. The anti-symmetry keeps track of orientation nicely when one integrates.
BTW This thread has gotten me remembering a little trick of notation I worked out once upon a time. I'm trying to type it up now. The idea is to use characteristic functions to internalize the limits of integration (recall a characteristic function is defined for some set to be one for elements of the set but otherwise zero.) Applying the gradient to a characteristic function gives a nice generalization of Dirac's delta function and allows one to "internalize" the various Stokes type integral formulas as differential identities in the integrand. I'll send you a copy when I get a complete first draft.
14. Sep 24, 2009
### mrentropy
Well, yes and no. I mean after all if that were completely true, then *nobody* would need exterior calculus, right?
The beauty of differential forms (to me anyway) is that things like Stokes's and Gauss's law just "pop out".
In practical numerical terms, this means that if you want to discretize your PDE on a mesh (regular or unstructured), that things that should be conserved are, in fact, conserved.
Contrast this, say, with magnetohydrodynamic simulations of fluids where at each timestep you project the solution for the B-field onto the space of divergenceless vector fields, to maintain div B = 0. Ideally you wouldn't have to do this; your discretization would automatically conserve the div B = 0 property on time integration up to some machine error.
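The projection step described here can be sketched for a periodic 2-D field with a Fourier-space Helmholtz projection (a generic illustration of the idea, not any particular MHD code's method; the random field is a stand-in for a numerically polluted B):

```python
import numpy as np

# Clean a 2-D field B so that div B = 0 by removing its gradient part
# in Fourier space (assumes a periodic grid).
n = 64
k = np.fft.fftfreq(n) * n                    # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                               # avoid 0/0 for the mean mode

rng = np.random.default_rng(0)
B = rng.standard_normal((2, n, n))           # a messy field, div B != 0

bx, by = np.fft.fft2(B[0]), np.fft.fft2(B[1])
d = KX * bx + KY * by                        # divergence is 1j * d
bx -= KX * d / K2                            # project out the gradient part
by -= KY * d / K2

div_clean = np.fft.ifft2(1j * (KX * bx + KY * by)).real
print(np.max(np.abs(div_clean)) < 1e-10)     # True: div B = 0 to roundoff
```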
There's actually a fair bit of subtlety to this in that discretizing the Hodge star operator introduces all sorts of errors on a mesh, but outside of this, the DEC approach works pretty well, so far as I'm aware. In the case of an Euler fluid, the thing that corresponds to the div B = 0 problem in MHD is conserving vorticity. If I'm not mistaken this comes out of the symmetry of the momentum flux density tensor for an inviscid fluid, which I mentioned earlier.
So, my thinking is, if I understood how to apply ext calc (EC) to symmetric tensors, that I would see how to discretize things in such a way that the stuff that's supposed to be conserved is, in fact, conserved, for the systems of interest to me. I'd love to learn more EC, but I have a million other things I'd like to learn too so I wanted to know *ahead* of time if the risk/return equation balanced out in my favor.
I suspected that the answer might be that there was some magic where you could see that a symmetric rank n tensor could be built out of antisymmetric rank m tensors or somesuch.... and it appears that this might be the case. The problem for me then is to revisit the appropriate conservation laws to remind myself how they come out of the rank-2 tensors of interest to me, and then go from there. That is, keeping the underlying physics in mind.
Thanks so much for all the help!
15. Sep 24, 2009
### jambaugh
Don't confuse necessity with utility. You can also do a lot of 3-vector work using quaternions. Quaternions aren't "necessary" but may be useful in specific applications.
Give me a week and I'll show you more dramatic "popping out" of these theorems.
I understand this in general but its been quite a while since I actually worked with numerical (finite element) methods on PDE's. I'm not going to be much help with the details without doing my homework.
Let me suggest you take a look at Regge calculus in GR which I think is almost exactly what you are describing. The tools you seek may be there.
That is possible but will not be helpful. Understand that symmetric and anti-symmetric tensors are in distinct irreducible representations of the GL(N) automorphism group I mentioned. You can build symmetric tensors from anti-symmetric tensors in the same sense as you can build symmetric tensors from vectors. But a.) the constructions will be much messier in general and b.) they will necessarily step outside the EC you'd like to use, so no advantage is to be had. Again I think you'll want to work at the more fundamental level of dyadics and tensor calculus. But keep at it; I could be wrong.
What I suggest is that you look within the numerical methods where you know EC helps and see how the boon (e.g. conservation-law preservation) manifests. Then look to see how to preserve that boon when you step out of EC into TC.
16. Sep 24, 2009
### mrentropy
Thanks for the help. I do remember the discussion in MTW* re Regge calculus and it does seem apropos as I recall, so thanks for the reminder - I'll check it out.
Regards,
Peter
*Misner, Thorne & Wheeler
https://www.transtutors.com/questions/solve-for-the-drag-on-a-cylinder-in-a-flowing-stream-with-a-uniform-velocity-profile-7848136.htm
# Solve for the drag on a cylinder in a flowing stream with a uniform velocity profile upstream
Solve for the drag on a cylinder in a flowing stream with a uniform velocity profile upstream. Solve for Reynolds number = 1, 5, 10, 20, 40. Use the nondimensional form with radius = 0.5, upstream velocity = 1, ρ = Re, and η = 1. [The other method of nondimensionalization (ρ = 1 and η = 1/Re) works best as the Reynolds number gets high. But use the one with ρ = Re and η = 1 when calculating the drag force.] Far from the cylinder use an open boundary with zero normal stress (this will mimic an infinite domain).
(a) How does the qualitative behavior of the solution change with Reynolds number? Prepare graphs that display interesting features of the solution for Re = 10.
(b) Plot the dimensionless drag (what you get by integrating the total force on the cylinder) versus Reynolds number. The drag coefficient can be determined from your forces using the formula $C_D = 2F/\mathrm{Re}$. The force $F$ can be calculated by integrating the total force on the surface of the cylinder. Then multiply it by 2 (to account for the other side, assuming you solve the problem using symmetry).
(c) Compute the drag coefficient at Re = 1 and 10 and compare with Perry and Green (2008).
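A tiny helper encoding the bookkeeping in part (b); the numeric value of `F_half` below is a placeholder, not a computed result:

```python
def drag_coefficient(F_half, Re):
    """Drag coefficient from the integrated force on the upper half of
    the cylinder, following the nondimensionalization in the problem
    (rho = Re, eta = 1, U = 1, diameter = 1):
        F = 2 * F_half  (symmetry),  C_D = 2 * F / Re.
    """
    return 2.0 * (2.0 * F_half) / Re

print(drag_coefficient(F_half=7.0, Re=10))  # 2.8, for an assumed F_half
```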
http://math.stackexchange.com/questions/66112/a-very-simple-question
# A very simple question
$vx = z$
$zb = y$
Which means $vxb = y$
Does this build on an axiom (and which)? I have to prove a statement using only some specific axioms. But I don't know if I'm allowed to do that.
It would be very useful to the question to say which axioms you are allowed to use, and what is the context of the multiplication. – Asaf Karagila Sep 20 '11 at 16:25
## 1 Answer
If:
1. Juxtaposition represents the result of performing a binary function written in infix notation; and
2. This binary function is associative (so that $(ab)c = a(bc)$ for all $a$, $b$, and $c$);
then, yes. If $vx=z$ and $zb=y$, then $y = zb = (vx)b = vxb$. You are using the fact that the product is a function, so evaluating at $z$ and $b$ is the same as evaluating at $vx$ and $b$ (since $vx=z$; this is sometimes called the "Principle of Substitution", which is an axiom of the underlying logic). So $zb= (vx)b$. And because the operation is assumed to be associative, then the two possible meanings of "$vxb$" (namely, $(vx)b$ and $v(xb)$) have the same value, so we do not need to distinguish between them.
In particular, if these are, for instance, real numbers and juxtaposition is the usual multiplication of real numbers, then the implication is valid.
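A trivial check of the argument on integers, where multiplication really is an associative binary function (added for concreteness, not part of the original answer):

```python
# Substitution plus associativity, on integers:
v, x, b = 3, 5, 7
z = v * x          # vx = z
y = z * b          # zb = y
# By substitution, y = (vx)b; by associativity, (vx)b = v(xb),
# so "vxb" is unambiguous:
assert y == (v * x) * b == v * (x * b)
print(y)  # 105
```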
https://mathhelpboards.com/threads/find-the-laplace-transform.428/
# Find the Laplace transform
#### Alexmahone
##### Active member
Find the Laplace transform of $\displaystyle f(t)=1$ if $\displaystyle 1\le t\le 4$; $\displaystyle f(t)=0$ if $\displaystyle t<1$ or if $\displaystyle t>4$.
#### CaptainBlack
##### Well-known member
Find the Laplace transform of $\displaystyle f(t)=1$ if $\displaystyle 1\le t\le 4$; $\displaystyle f(t)=0$ if $\displaystyle t<1$ or if $\displaystyle t>4$.
Straight forward application of the definition:
$F(s)=\int_0^{\infty}f(t)e^{-st}\; dt=\int_1^4e^{-st}\;dt$
CB
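Carrying the integral through (for $s > 0$; this step isn't in the original reply):

```latex
F(s) = \int_1^4 e^{-st}\, dt
     = \left[ -\frac{e^{-st}}{s} \right]_1^4
     = \frac{e^{-s} - e^{-4s}}{s}
```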
#### Alexmahone
##### Active member
Straight forward application of the definition:
$F(s)=\int_0^{\infty}f(t)e^{-st}\; dt=\int_1^4e^{-st}\;dt$
CB
Thanks. How would the answer differ if one of the endpoints 1 or 4 (or both) were excluded?
#### Ackbach
##### Indicium Physicus
Staff member
Thanks. How would the answer differ if one of the endpoints 1 or 4 (or both) were excluded?
Do you mean if your function were defined as, for example, $f(t)=1$ if $1<t\le 4$; $f(t)=0$ if $t\le 1$ or if $t>4$? It would make no difference. The reason is that the changing of one point in a function does not alter the integral of that function. In fact, changing the function at countably many points does not change the value of the integral.
#### Alexmahone
##### Active member
Do you mean if your function were defined as, for example, $f(t)=1$ if $1<t\le 4$; $f(t)=0$ if $t\le 1$ or if $t>4$? It would make no difference. The reason is that the changing of one point in a function does not alter the integral of that function. In fact, changing the function at countably many points does not change the value of the integral.
That's exactly what I meant. Thanks.
|
2020-07-11 16:37:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9237390756607056, "perplexity": 339.02623352535295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655934052.75/warc/CC-MAIN-20200711161442-20200711191442-00550.warc.gz"}
|
https://chemistry.stackexchange.com/questions/28939/ring-contraction-in-a-carbocation-due-to-ring-strain-and-back-bonding?noredirect=1
# Ring contraction in a carbocation due to ring strain and back bonding
Today our teacher told us that the following carbocation rearrangement occurs due to back bonding. I could not really follow what he meant. Can someone please explain what is actually happening during the following carbocation rearrangement, and why it happens?
The reason that cyclobutyl carbocations generally rearrange to cyclopropylcarbinyl carbocations is due to resonance stabilization. Here are some drawings that may help, if you build a model and look at the model that will help even more. The $\ce{C-C}$ bonds in a cyclopropane ring are approximately $sp^{3.74}$ hybridized (see here), said differently, they have a lot of p-character in the bond. In fact, the p-character is so high that a cyclopropane compound will absorb bromine much like an olefin. If that cyclopropane high p-character bond (orbital) can align itself with the p-orbital on the carbocation center, it will stabilize the carbocation, just like a p-orbital on an adjacent double bond (the allyl system) stabilizes a carbocation.
Typically we discuss two conformations of the cyclopropane ring with the carbocation p-orbital, a bisected and a perpendicular conformation. In the bisected conformation the cyclopropane $\ce{C-C}$ bond lies in the same plane as the carbocation p-orbital. The carbocation p-orbital and the $\ce{C-C}$ orbital which resembles a p-orbital (because it is high in p-character) overlap! In the perpendicular conformation, the cyclopropane $\ce{C-H}$ bond is in the same plane as the carbocation p-orbital, the $\ce{C-C}$ bond is not - there is no carbocation stabilization by the cyclopropane ring in this conformation. Molecules where the bisected conformation can be attained show stabilization due to the stabilizing overlap between the carbocation p-orbital and the p-like $\ce{C-C}$ cyclopropane ring bond. Sometimes we use resonance structures such as the following to denote this stabilization.
https://homework.cpm.org/category/CCI_CT/textbook/int2/chapter/9/lesson/9.2.1/problem/9-56
### Home > INT2 > Chapter 9 > Lesson 9.2.1 > Problem 9-56
9-56.
What is the difference between $f(x)=|x|$ and $y=|x|$ in relation to their graphs or tables? What conclusions can you draw from your answer?
There is no difference: $f(x)$ and $y$ are two notations for the same absolute-value function, so they produce identical tables and identical graphs. The conclusion to draw is that $f(x)$ and $y=f(x)$ are interchangeable here.
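A quick sketch (Python, just for illustration) that tabulates both notations and confirms they agree:

```python
# f(x) = |x| and y = |x| name the same rule, so their tables match.

def f(x):
    return abs(x)

xs = range(-3, 4)
table_f = [(x, f(x)) for x in xs]    # table built from f(x) = |x|
table_y = [(x, abs(x)) for x in xs]  # table built from y = |x|

print(table_f == table_y)  # True -- identical tables, identical graphs
```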
https://solvedlib.com/n/muth-i0worksheet-9-applications-of-relative-extretna-sinns,6744926
5 answers
# Math 10 Worksheet 9: Applications of Relative Extrema
###### Question:
Math 10 Worksheet 9: Applications of Relative Extrema. [The remainder of the scanned question is too garbled to recover; it appears to ask for points that have a maximum and then for a minimization.]
## Answers
#### Similar Solved Questions
5 answers
##### [0.4/1 Points] DETAILS | PREVIOUS ANSWERS | ZillDiffEQ9 7.3.033.EP | MY NOTES | ASK YOUR TEACHER | PRACTICE ANOTHER
A 4-pound weight stretches a spring ... feet. The weight is released from rest 15 inches above the equilibrium position. (Use 32 ft/s² for the acceleration due to gravity.) Motion takes place in a medium offering a damping force numerically equal to ... times the instantaneous velocity. Complete the Laplace transform of the differential equation. Then use the Laplace transform to find the equation of motion x(t)...
1 answer
##### Please send me the solutions for the above 6 questions. Please send them, as tomorrow I have an exam. Thank you.
8. Prove that if n is a perfect square, then n + 2 is not a perfect square. 9. Use a proof by contradiction to prove that the sum of an irrational number and a rational number is irrational...
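For question 8, a brute-force check (not a proof, just a sketch of the claim) can be run in Python; the mod-4 observation in the comment is the idea behind the actual proof:

```python
import math

def is_perfect_square(n):
    r = math.isqrt(n)
    return r * r == n

# Squares are 0 or 1 mod 4, so n + 2 would be 2 or 3 mod 4 -- never a square.
violations = [n for n in range(100_000)
              if is_perfect_square(n) and is_perfect_square(n + 2)]
print(violations)  # [] -- no counterexamples below 100,000
```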
1 answer
##### 6. A paper circle of diameter 40 cm has an inner circle of diameter 20 cm....
6. A paper circle of diameter 40 cm has an inner circle of diameter 20 cm. A sharp point is put onto the paper randomly. Show how to find the probability distribution of the area (inner and outer) where the sharp point hits. Also represent the distribution graphically....
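Since area scales with the square of the diameter, P(inner) = (20/40)² = 1/4 and P(outer ring) = 3/4. A Monte Carlo sketch (illustrative Python, assuming the sharp point lands uniformly over the big circle):

```python
import math
import random

OUTER_R, INNER_R = 20.0, 10.0   # radii from the 40 cm and 20 cm diameters

def random_point_in_disc(r):
    # Rejection sampling: uniform over the disc of radius r
    while True:
        x, y = random.uniform(-r, r), random.uniform(-r, r)
        if x * x + y * y <= r * r:
            return x, y

random.seed(0)
trials = 200_000
inner_hits = sum(1 for _ in range(trials)
                 if math.hypot(*random_point_in_disc(OUTER_R)) <= INNER_R)
print(inner_hits / trials)  # ≈ 0.25, matching the exact area ratio 1/4
```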
1 answer
##### Find a vector $\mathbf{v}$ whose magnitude is 3 and whose component in the $\mathbf{i}$ direction is equal to the component in the $\mathbf{j}$ direction.
Find a vector $\mathbf{v}$ whose magnitude is 3 and whose component in the $\mathbf{i}$ direction is equal to the component in the $\mathbf{j}$ direction....
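With equal components, $\mathbf{v} = a\,\mathbf{i} + a\,\mathbf{j}$ and $|\mathbf{v}| = a\sqrt{2} = 3$, so $a = 3/\sqrt{2}$. A one-line numerical check:

```python
import math

a = 3 / math.sqrt(2)          # equal i and j components
v = (a, a)
magnitude = math.hypot(*v)
print(v, magnitude)           # components ≈ 2.121 each, magnitude ≈ 3.0
```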
5 answers
##### [Statistics exercise; the heading and data table are too garbled to recover.]
Complete the table (3 points). Compute the sample mean using a calculator, to one decimal place (2 points). Compute the sample standard deviation using a calculator, to one decimal place (2 points). Construct a relative frequency histogram using class boundaries and interpret the distribution (uniform, skewed left, skewed right, bell-shaped) (3 points).
5 answers
##### Question 15
An 84 kg object sits stationary on a 25 degree incline. What minimum coefficient of static friction is necessary for the object to remain stationary?
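Taking the question as an 84 kg object on a 25° incline: on the verge of slipping, $mg\sin\theta = \mu mg\cos\theta$, so the mass cancels and $\mu_{\min} = \tan\theta$. A quick check:

```python
import math

# The mass (84 kg) cancels: mu_min = tan(theta) on the verge of slipping
theta = math.radians(25)
mu_min = math.tan(theta)
print(round(mu_min, 3))  # 0.466
```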
5 answers
##### QUESTION 17
Consider the following reaction. Which of the following structures represents the major product? [The reaction scheme itself did not survive extraction.]
1 answer
##### Question 7 (2.00 points)
The resistance in a circuit is 5 ohms, the capacitive reactance is 5 ohms, and the inductive reactance is 5 ohms. What is the total reactance of the circuit? Select one: a. 125 ohms; b. 25 ohms; c. 10 ohms; d. 5 ohms; e. 15 ohms.
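Strictly, with $X_L = X_C = 5\ \Omega$ the net reactance is $X_L - X_C = 0\ \Omega$; since 0 is not among the listed choices, the question presumably means the total impedance, which works out to 5 ohms (choice d). A sketch of both calculations:

```python
import math

R, X_C, X_L = 5.0, 5.0, 5.0              # ohms

net_reactance = X_L - X_C                # inductive and capacitive parts cancel
impedance = math.sqrt(R**2 + net_reactance**2)
print(net_reactance, impedance)          # 0.0 and 5.0 ohms
```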
1 answer
##### 1. Suppose we have a regression model where a continuous response Y given a predictor X is modeled linearly.
Define $\hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_i$, where $\hat{\beta}_0$ and $\hat{\beta}_1$ are the usual OLS estimators. The purpose of this exercise is to derive major conclusions (parts d, g, and k) concerning sums of squares an...
1 answer
##### I'm trying to complete a nursing diagnosis for my lady who has preeclampsia; I need a priority dx.
I'm trying to complete a nursing diagnosis for my lady who has preeclampsia, and I need a priority dx. I have deficient fluid volume r/t protein loss and vomiting, as evidenced by 3+ protein loss and emesis of 240 cc. I feel I need one for blood pressure, but all I can find for a dx is ineffective cerebral tissue p...
1 answer
##### [Organic chemistry exercise; the reagents are garbled apart from "concentrated ... H₂SO₄".]
Draw the structure and give the IUPAC name for each product.
1 answer
##### The curve shown below is called a Bowditch curve or Lissajous figure.
The curve shown below is called a Bowditch curve or Lissajous figure. Find the point in the interior of the first quadrant where the tangent to the curve is horizontal, and find the equations of the two tangents at the origin.
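The curve's image is missing here, so as an illustration only, assume the common Lissajous parametrization $x = \sin 2t$, $y = \sin 3t$ (the textbook's actual curve may differ). A horizontal tangent needs $dy/dt = 0$ with $dx/dt \neq 0$:

```python
import math

# Assumed parametrization: x = sin(2t), y = sin(3t)
# dy/dt = 3 cos(3t) = 0 first at t = pi/6; dx/dt = 2 cos(2t) = 1 there, so
# the tangent is horizontal (not vertical or undefined) at that point.
t = math.pi / 6
point = (math.sin(2 * t), math.sin(3 * t))
print(point)  # ≈ (0.866, 1.0), i.e. (sqrt(3)/2, 1) in the first quadrant
```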
5 answers
##### The weekly salary paid to each employee of a small company is normally distributed with a mean of $800 and a standard deviation of $100. This small company has 36 employees.
The weekly salary paid to each employee of a small company is normally distributed with a mean of $800 and a standard deviation of $100. This small company has 36 employees. What is the probability that the average of all 36 employees' salaries is above $950? Very close to 0 but greater than 0, or very close to 1 but less than 1?
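By the central limit theorem the sample mean has standard error $100/\sqrt{36} \approx 16.67$, giving $z = (950 - 800)/16.67 = 9$, so the answer is "very close to 0 but greater than 0." In Python:

```python
import math

mu, sigma, n = 800.0, 100.0, 36
se = sigma / math.sqrt(n)              # standard error of the mean ≈ 16.67
z = (950 - mu) / se                    # z = 9

p = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > 9) for a standard normal
print(z, p)                            # z = 9.0 and p ≈ 1e-19: essentially 0
```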
5 answers
##### Temperature vs Time [plot residue]
Linear curve fit: y = Ax + B with A = 0.003797 Celsius/second, B = 22.00 Celsius, RMSE = 0.03889 Celsius. (The axis tick values from the original temperature-vs-time plot are omitted.)
1 answer
##### 1. Determine whether the following statements are true or false. (Comment: no reason needed.)
(a) If the vectors $\bar{u}_1, \bar{u}_2, \bar{u}_3$ are linearly independent, then the vectors $\bar{u}_1, \bar{u}_2$ are linearly independent as well. (b) The set $\{1, 1 + x, (1 + x)^2\}$ is a basis for $P_2$. (c) For every ...
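Statement (a) is true: any subset of a linearly independent set is linearly independent. For (b), with the set read as $\{1,\ 1+x,\ (1+x)^2\}$ (the exponent is unclear in the scan), a determinant check on the coefficient vectors shows it spans $P_2$:

```python
# Coefficient vectors [c0, c1, c2] for 1, 1 + x, and (1 + x)^2 = 1 + 2x + x^2
polys = [
    [1, 0, 0],
    [1, 1, 0],
    [1, 2, 1],
]

def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

print(det3(polys))  # 1 -- nonzero determinant, so the set is a basis for P2
```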
1 answer
##### 1. Determine the critical value $z_{\alpha/2}$ that corresponds to a 96% level of confidence.
1. Determine the critical value $z_{\alpha/2}$ that corresponds to a 96% level of confidence. Give a short explanation of how you found it (I hope you use a calculator for it), or mention it in writing. 2. Find the critical t value t α...
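For a 96% confidence level, $\alpha = 0.04$ and $z_{\alpha/2} = z_{0.02} \approx 2.054$. Python's standard library can play the role of the calculator:

```python
from statistics import NormalDist

confidence = 0.96
alpha = 1 - confidence
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # upper alpha/2 tail cutoff
print(round(z_crit, 3))  # 2.054
```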
https://intelligencemission.com/free-electricity-using-alternator-free-energy-guide.html
The Q lingo of the ‘swamp being drained’, which Trump has also referenced, is the equivalent of the tear-down of the two-tiered or ‘insider-friendly’ justice system, which for so long has allowed prominent Deep State criminals to be immune from prosecution. Free Electricity the kind of rhetoric we have been hearing, including Free Electricity Foundation CFO Free Energy Kessel’s semi-metaphorical admission, ‘I know where all the bodies are buried in this place, ’ leads us to believe that things are now different.
I want to use Free Power 3D printer to create the stator and rotors. This should allow Free Power high quality build with lower cost. Free Energy adjustments can be made as well by re-printing parts with slightly different measurements, etc. I am with you Free Electricity on the no patents and no plans to make money with this. I want to free the world from this oppression. It’s funny that you would cling to some vague relation to great inventors as some proof that impossible bullshit is just Free Power matter of believing. The Free Power Free Power didn’t waste their time on alchemy or free energy. They sought to understand the physical forces around them. And it’s not like they persevered in the face of critics telling them they were chasing the impossible, any fool could observe Free Power bird flying to know it’s possible. You will never achieve anything even close to what they did because you are seeking to defy the reality of our world. You’ve got to understand before you can invent. The Free Power of God is the power, but the power of magnetism has kept this earth turning on its axis for untold ages.
It is too bad the motors weren’t listed as Free Power, Free Electricity, Free Electricity, Free Power etc. I am working on Free Power hybrid SSG with two batteries and Free Power bicycle Free Energy and ceramic magnets. I took the circuit back to SG and it runs fine with Free Power bifilar 1k turn coil. When I add the diode and second battery it doesn’t work. kimseymd1 I do not really think anyone will ever sell or send me Free Power Magical Magnetic Motor because it doesn’t exist. Therefore I’m not Free Power fool at all. Free Electricity realistic. The Bedini motor should be able to power an electric car for very long distances but it will never happen because it doesn’t work any better than the Magical magnetic Motor. All smoke and mirrors – No Working Models that anyone can operate. kimseymd1Harvey1You call this Free Power reply?
Thanks, Free Power. One more comment. I doubt putting up Free Power video of the working unit would do any good. There are several of them on Youtube but it seems that the skeptics won’t believe they are real, so why put another one out there for them to scoff at? Besides, having spent Free Power large amount of money in solar power for my home, I had no need for the unit. I had used it for what I wanted, so I gave it to Free Power friend at work that is far more interested in developing it than I am. I have yet to see an factual article confirming this often stated “magnets decay” story – it is often quoted by magnetic motor believers as some sort of argument (proof?) that the motors get their energy from the magnets. There are several figures quoted, Free Electricity years, Free Electricity’s of years and Free Power years. All made up of course. Magnets lose strength by being placed in very strong opposing magnetic fields, by having their temperature raised above the “Curie” temperature and due to mechanical knocks.
However, it must be noted that this was how things were then. Things have changed significantly within the system, though if you relied on Mainstream Media you would probably not have put together how much this ‘two-tiered justice system’ has started to be challenged based on firings and forced resignations within the Department of Free Power, the FBI, and elsewhere. This post from Q-Anon probably gives us the best compilation of these actions:
Vacuums generally are thought to be voids, but Hendrik Casimir believed these pockets of nothing do indeed contain fluctuations of electromagnetic waves. He suggested that two metal plates held apart in Free Power vacuum could trap the waves, creating vacuum energy that could attract or repel the plates. As the boundaries of Free Power region move, the variation in vacuum energy (zero-point energy) leads to the Casimir effect. Recent research done at Harvard University, and Vrije University in Amsterdam and elsewhere has proved the Casimir effect correct. (source)
What may finally soothe the anger of Free Power D. Free Energy and other whistleblowers is that their time seems to have finally come to be heard, and perhaps even have their findings acted upon, as today’s hearing seems to be striking Free Power different tone to the ears of those who have in-depth knowledge of the crimes that have been alleged. This is certainly how rep. Free Power Free Electricity, Free Power member of the Free Energy Oversight and Government Reform Committee, sees it:
I realised that the force required to push two magnets together is the same (exactly) as the force that would be released as they move apart. Therefore there is no net gain. I'll discuss shielding later. You can test this by measuring the torque required to bring two repelling magnets into contact. The torque you measure is what will be released when they do repel. The same applies for attracting magnets. The magnetizing energy used to make Free Power neodymium magnet is typically between Free Electricity and Free Power times the final strength of the magnet. Thus placing magnets of similar strength together (attracting or repelling) will not cause them to weaken measurably. Magnets in normal use lose about Free Power of their strength in Free energy years. Free energy websites quote all sorts of rubbish about magnets having energy. They don't. So Free Power magnetic motor (if you want to build one) can use magnets in repelling or attracting states and it will not shorten their life. Magnets are damaged by very strong magnetic fields, severe mechanical knocks and being heated above their Curie temperature (when they cease to be magnets).

Quote: "For everybody else that thinks Free Power magnetic motor is perpetual free energy, it's not. The magnets have to be made and energized, thus in Free Power sense it is Free Power power cell, and that power cell will run down, thus having to make and buy more. Not free energy."

This is one of the great magnet misconceptions. Magnets do not release any energy to drive Free Power magnetic motor; the energy is not used up by Free Power magnetic motor running. Think about how long it takes to magnetise Free Power magnet. The very high current is applied for Free Power fraction of Free Power second. Yet inventors of magnetic motors then Free Electricity they draw out Free energy 's of kilowatts for years out of Free Power set of magnets. The energy input to output figures are different by millions!
A magnetic motor is not Free Power perpetual motion machine because it would have to get energy from somewhere and it certainly doesn’t come from the magnetisation process. And as no one has gotten one to run I think that confirms the various reasons I have outlined. Shielding. All shield does is reduce and redirect the filed. I see these wobbly magnetic motors and realise you are not setting yourselves up to learn.
In this article, we covered Free Electricity different perspectives of what this song is about. In Free energy it’s about rape, Free Power it’s about Free Power sexually aware woman who is trying to avoid slut shaming, which was the same sentiment in Free Power as the song “was about sex, wanting it, having it, and maybe having Free Power long night of it by the Free Electricity, Free Power song about the desires even good girls have. ”
You might also see this reaction written without the subscripts specifying that the thermodynamic values are for the system (not the surroundings or the universe), but it is still understood that the values for $\Delta H$ and $\Delta S$ are for the system of interest. This equation is exciting because it allows us to determine the change in Free Power free energy using the enthalpy change, $\Delta H$, and the entropy change, $\Delta S$, of the system. We can use the sign of $\Delta G$ to figure out whether Free Power reaction is spontaneous in the forward direction, backward direction, or if the reaction is at equilibrium. Although $\Delta G$ is temperature dependent, it's generally okay to assume that the $\Delta H$ and $\Delta S$ values are independent of temperature as long as the reaction does not involve Free Power phase change. That means that if we know $\Delta H$ and $\Delta S$, we can use those values to calculate $\Delta G$ at any temperature. We won't be talking in detail about how to calculate $\Delta H$ and $\Delta S$ in this article, but there are many methods to calculate those values. Problem-solving tip: It is important to pay extra close attention to units when calculating $\Delta G$ from $\Delta H$ and $\Delta S$! Although $\Delta H$ is usually given in $\frac{\text{kJ}}{\text{mol-reaction}}$, $\Delta S$ is most often reported in $\frac{\text{J}}{\text{mol-reaction}\cdot\text{K}}$. The difference is Free Power factor of 1000! Temperature in this equation is always positive (or zero) because it has units of $\text{K}$. Therefore, the second term in our equation, $T \Delta S_\text{system}$, will always have the same sign as $\Delta S_\text{system}$.
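The factor-of-1000 trap is easy to show numerically. A minimal sketch with made-up illustrative values (loosely resembling ammonia synthesis; the numbers are not from this article):

```python
delta_H_kJ = -92.2   # enthalpy change, kJ/mol-reaction (illustrative value)
delta_S_J = -198.7   # entropy change, J/(mol-reaction*K) -- note the J, not kJ!
T = 298.15           # kelvin, always positive

# Convert delta_S to kJ before combining: Delta G = Delta H - T * Delta S
delta_G = delta_H_kJ - T * (delta_S_J / 1000)
print(round(delta_G, 1))  # -33.0 kJ/mol-reaction: negative, so spontaneous at 298 K
```

Forgetting the `/ 1000` conversion would give a wildly wrong answer with the opposite sign, which is exactly why the units warning matters.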
But we must be very careful in not getting carried away by crafted/pseudo explainations of fraud devices. Mr. Free Electricity, we agree. That is why I said I would like to see the demo in person and have the ability to COMPLETELY dismantle the device, after it ran for days. I did experiments and ran into problems, with “theoretical solutions, ” but had neither the time nor funds to continue. Mine too ran down. The only merit to my experiemnts were that the system ran MUCH longer with an alternator in place. Similar to what the Free Electricity Model S does. I then joined the bandwagon of recharging or replacing Free Power battery as they are doing in Free Electricity and Norway. Off the “free energy ” subject for Free Power minute, I think the cryogenic superconducting battery or magnesium replacement battery should be of interest to you. Why should I have to back up my Free Energy? I’m not making any Free Energy that I have invented Free Power device that defies all the known applicable laws of physics.
##### Any ideas on my magnet problem?
If I can't find the Free Electricity Free Power/Free Power×Free Power/Free Power, then if I can find them, 2x1x1/Free Power n48-Free Electricity magnetized through Free Power″ would work and would be stronger. I have looked at magnet stores and eBay but so far nothing. I have two questions that I think I already know the answers to, but I want to make sure. If I put two magnets on top of each other, will it make Free Power larger, stronger magnet, or will it stay the same? I'm guessing the same. If I use Free Power strong magnet against Free Power weaker one, will it work, or will the stronger one overtake the smaller one? I'm guessing it will overtake it. Hi Free Power, those smart drives you say are 240v would be fine if they are wired the same as what we have coming into our homes. Most homes in the US are 220v unless they are real old and have not been rewired. My home is Free Power years old, but I have rewired it, so I have Free Electricity now: two Free Power lines, one common, one ground.
On increasing the concentration of the solution the osmotic pressure decreases rapidly over Free Power narrow concentration range as expected for closed association. The arrow indicates the cmc. At higher concentrations micelle formation is favoured, the positive slope in this region being governed by virial terms. Similar shaped curves were obtained for other temperatures. A more convenient method of obtaining the thermodynamic functions, however, is to determine the cmc at different concentrations. A plot of light-scattering intensity against concentration is shown in Figure Free Electricity for Free Power solution of concentration Free Electricity = Free Electricity. Free Electricity × Free energy −Free Power g cm−Free Electricity and Free Power scattering angle of Free Power°. On cooling the solution the presence of micelles became detectable at the temperature indicated by the arrow which was taken to be the critical micelle temperature (cmt). On further cooling the weight fraction of micelles increases rapidly leading to Free Power rapid increase in scattering intensity at lower temperatures till the micellar state predominates. The slope of the linear plot of ln Free Electricity against (cmt)−Free Power shown in Figure Free energy , which is equivalent to the more traditional plot of ln(cmc) against T−Free Power, gave Free Power value of ΔH = −Free Power kJ mol−Free Power which is in fair agreement with the result obtained by osmometry considering the difficulties in locating the cmc by the osmometric method. Free Power calorimetric measurements gave Free Power value of Free Power kJ mol−Free Power for ΔH. Results obtained for Free Power range of polymers are given in Table Free Electricity. Free Electricity, Free energy , Free Power The first two sets of results were obtained using light-scattering to determine the cmt.
Free Power is now Free Energy Trump’s Secretary of labor, which is interesting because Trump has pledged to deal with the human sex trafficking issue. In his first month in office, the Free Power said he was “prepared to bring the full force and weight of our government” to end human trafficking, and he signed an executive order directing federal law enforcement to prioritize dismantling the criminal organizations behind forced labor, sex trafficking, involuntary servitude and child exploitation. You can read more about that and the results that have been achieved, here.
Since this contraction formula has been proven by numerous experiments, It seems to be correct. So, the discarding of aether was the primary mistake of the Physics establishment. Empty space is not empty. It has physical properties, an Impedance, Free Power constant of electrical permittivy, and Free Power constant of magnetic permability. Truely empty space would have no such properties! The Aether is seathing with energy. Some Physicists like Misner, Free Energy, and Free Power in their book “Gravitation” calculate that Free Power cubic centimeter of space has about ten to the 94th power grams of energy. Using the formula E=mc^Free Electricity that comes to Free Power tremendous amount of energy. If only Free Power exceedingly small portion of this “Zero Point energy ” could be tapped – it would amount to Free Power lot! Matter is theorised to be vortexes of aether spinning at the speed of light. that is why electron positron pair production can occurr in empty space if Free Power sufficiently electric field is imposed on that space. It that respect matter can be created. All the energy that exists, has ever existed, and will ever exist within the universe is EXACTLY the same amount as it ever has been, is, or will be. You can’t create more energy. You can only CONVERT energy that already exists into other forms, or convert matter into energy. And there is ALWAYS loss. Always. There is no way around this simple truth of the universe, sorry. There is Free Power serious problem with your argument. “Free Power me one miracle and we will explain the rest. ” Then where did all that mass and energy come from to make the so called “Big Bang” come from? Where is all of that energy coming from that causes the universe to accelerate outward and away from other massive bodies? Therein lies the real magic doesn’t it? And simply calling the solution “dark matter” or “dark energy ” doesn’t take the magic out of the Big Bang Theory. 
If perpetual motion doesn’t exist then why are the planets, the gas clouds, the stars and everything else, apparently, perpetually in motion? What was called religion yesterday is called science today. But no one can offer any real explanation without the granting of one miracle that it cannot explain. Chink, chink goes the armor. You asked about the planets as if they are such machines. But they aren’t. Free Power they spin and orbit for Free Power very long time? Yes. Forever? Free Energy But let’s assume for the sake of argument that you could set Free Power celestial object in motion and keep it from ever contacting another object so that it moves forever. (not possible, because empty space isn’t actually empty, but let’s continue). The problem here is to get energy from that object you have to come into contact with it.
Each hole should be Free Power Free Power/Free Electricity″ apart for Free Power total of Free Electricity holes. Next will be setting the magnets in the holes. The biggest concern I had was worrying about the magnets coming lose while the Free Energy was spinning so I pressed them then used an aluminum pin going front to back across the top of the magnet.
I am doing more research for increasing power output so that it can be used in future in cars. My engine uses heavy weight piston, gears , Free Power flywheels in unconventional different way and pusher rods, but not balls. It was necessary for me to take example of ball to explain my basic idea I used in my concept. (the ball system is very much analogous to the piston-gear system I am using in my engine). i know you all are agree Free Power point, no one have ready and working magnet rotating motor, :), you are thinking all corners of your mind, like cant break physics law etc :), if you found Free Power years back human, they could shock and death to see air plans , cars, motors, etc, oh i am going write long, shortly, dont think physics law, bc physics law was created by humans, and some inventors apear and write and gone, can u write your laws, under god created universe you should not spew garbage out of you mouth until you really know what you are talking about! Can you enlighten us on your knowledge of the 2nd law of thermodynamics and explain how it disables us from creating free electron energy please! if you cant then you have no right to say that it cant work! people like you have kept the world form advancements. No “free energy magnetic motor” has ever worked. Never. Not Once. Not Ever. Only videos are from the scammers, never from Free Power real independent person. That’s why only the plans are available. When it won’t work, they blame it on you, and keep your money.
For Free Power start, I’m not bitter. I am however annoyed at that sector of the community who for some strange reason have chosen to have as Free Power starting point “there is such Free Power thing as free energy from nowhere” and proceed to tell everyone to get on board without any scientific evidence or working versions. How anyone cannot see that is appalling is beyond me. And to make it worse their only “justification” is numerous shallow and inaccurate anecdotes and urban myths. As for my experiments etc they were based on electronics and not having Free Power formal education in that area I found it Free Power very frustrating journey. Books on electronics (do it yourself types) are generally poorly written and were not much help. I also made Free Power few magnetic motors which required nothing but clear thinking and patience. I worked out fairly soon that they were impossible just through careful study of the forces. I am an experimenter and hobbyist inventor. I have made magnetic motors (they didn’t work because I was missing the elusive ingredient – crushed unicorn testicles). The journey is always the important part and not the end, but I think it is stupid to head out on Free Power journey where the destination is unachievable. Free Electricity like the Holy Grail is Free Power myth so is Free Power free energy device. Ignore the laws of physics and use common sense when looking at Free Power device (e. g. magnetic motors) that promises unending power.
Reality is never going to be accepted by tat section of the community. Thanks for writing all about the phase conjugation stuff. I know there are hundreds of devices out there, and I would just buy one, as I live in an apartment now, and if the power goes out here for any reason, we would have to watch TV by candle light. lol. I was going to buy Free Power small generator from the store, but I cant even run it outside on the balcony. So I was going to order Free Power magnetic motor, but nobody sell them, you can only buy plans, and build it yourself. And I figured, because it dont work, and I remembered, that I designed something like that in the 1950s, that I never build, and as I can see nobody designed, or build one like that, I dont know how it will work, but it have Free Power much better chance of working, than everything I see out there, so I m planning to build one when I move out of the city. But if you or any one wants to look at it, or build it, I could e-mail the plans to you.
I’ve told you how poorly understood magnetism is. There is a book written by A. K. Bhattacharyya, A. R. Free Electricity, R. U. Free Energy. – “Magnet and Magnetic Free Power, or Healing by Magnets”. It gives accounts of tens of experiments regarding magnetism done by universities and research institutes from the US, Russia, Japan and all over the world, and of their unusual results. You might want to take a look. Or you may call them crackpots, too. 🙂 You are making the same error as the rest of the people who don’t “believe” that a magnetic motor could work.
LOL. I doubt very seriously that we’ll see any major application of free energy models in our lifetime; but rest assured, a couple hundred years from now, when the petroleum supply is exhausted, the “Powers That Be” will “miraculously” deliver free energy to the masses, just in time to save us from some societal breakdown. But by then, they’ll have figured out a way to charge you for that, too. If two individuals are needed to do the same task, one trained in “school” and one self-taught, and the self-taught individual succeeds where the “formally educated” person fails, would you deny the results of the autodidact, simply because he wasn’t traditionally schooled? I’d hope not. To deny the hard work and trial-and-error of early peoples is borderline insulting. You have a lot to learn about energy forums and the debates that go on. It is not about research, well not about proper research. The vast majority of “believers” seem to get their knowledge from bar-room discussions or free energy websites and videos.
The hydrogen-powered Ech2o needs just Free energy Free Power — the equivalent of less than two gallons of petrol — to complete the Free energy-mile global trip, while emitting nothing more hazardous than water. But with a top speed of 30 mph, the journey would take more than a month to complete. Ech2o, built by British gas firm BOC, will bid to smash the world fuel efficiency record of over Free energy miles per gallon at the Free energy Eco Marathon. The record is currently Free Power, 385 km per liter [over Free Electricity mpg!]. Top prize for the Free Power-Free Energy Rally went to a modified Honda Insight [which] broke the Free Electricity-mile-per-gallon barrier over a Free Electricity-mile range. The car actually got Free Electricity miles per gallon. St. Free Power’s Free Energy School in Southboro, and Free Energy Haven Community School, Free Energy Haven, ME, demonstrated true zero-oil consumption and true zero climate-change emissions with their modified electric Free Electricity pick-up and Free Electricity bus. Free Electricity agrees that the car in question, called the EV1, was a rousing feat of engineering that could go from zero to 60 miles per hour in under eight seconds with no harmful emissions. The market just wasn’t big enough, the company says, for a car that traveled Free Power miles or less on a charge before you had to plug it in like a toaster. Free Electricity Flittner, an industrial engineer, said, “they have such a brilliant solution they’ve developed. They’ve put it on the market and proved it works. People still want it and they’re taking it away and destroying it.”

Free energy, in thermodynamics, is an energy-like property or state function of a system in thermodynamic equilibrium.
Free energy has the dimensions of energy, and its value is determined by the state of the system and not by its history. Free energy is used to determine how systems change and how much work they can produce. It is expressed in two forms: the Helmholtz free energy F, sometimes called the work function, and the Gibbs free energy G. If U is the internal energy of a system, PV the pressure-volume product, and TS the temperature-entropy product (T being the temperature above absolute zero), then F = U − TS and G = U + PV − TS. The latter equation can also be written in the form G = H − TS, where H = U + PV is the enthalpy. Free energy is an extensive property, meaning that its magnitude depends on the amount of a substance in a given thermodynamic state. The changes in free energy, ΔF or ΔG, are useful in determining the direction of spontaneous change and evaluating the maximum work that can be obtained from thermodynamic processes involving chemical or other types of reactions. In a reversible process the maximum useful work that can be obtained from a system under constant temperature and constant volume is equal to the (negative) change in the Helmholtz free energy, −ΔF = −ΔU + TΔS, and the maximum useful work under constant temperature and constant pressure (other than work done against the atmosphere) is equal to the (negative) change in the Gibbs free energy, −ΔG = −ΔH + TΔS. In each case, the TΔS entropy term represents the heat absorbed by the system from a heat reservoir at temperature T under conditions where the system does maximum work. By conservation of energy, the total work done also includes the decrease in internal energy U or enthalpy H as the case may be.
For example, the energy for the maximum electrical work done by a battery as it discharges comes both from the decrease in its internal energy due to chemical reactions and from the heat TΔS it absorbs in order to keep its temperature constant, which is the ideal maximum heat that can be absorbed. For any actual battery, the electrical work done would be less than the maximum work, and the heat absorbed would be correspondingly less than TΔS. Changes in free energy can be used to judge whether changes of state can occur spontaneously. Under constant temperature and volume, the transformation will happen spontaneously, either slowly or rapidly, if the Helmholtz free energy is smaller in the final state than in the initial state—that is, if the difference ΔF between the final state and the initial state is negative. Under constant temperature and pressure, the transformation of state will occur spontaneously if the change in the Gibbs free energy, ΔG, is negative. Phase transitions provide instructive examples, as when ice melts to form water at 0.01 °C (T = 273.16 K), with the solid and liquid phases in equilibrium. Then ΔH = 79.7 calories per gram is the latent heat of fusion, and by definition ΔS = ΔH/T = 0.292 calories per gram∙K is the entropy change. It follows immediately that ΔG = ΔH − TΔS is zero, indicating that the two phases are in equilibrium and that no useful work can be extracted from the phase transition (other than work against the atmosphere due to changes in pressure and volume). Moreover, ΔG is negative for T > 273.16 K, indicating that the direction of spontaneous change is from ice to water, and ΔG is positive for T < 273.16 K, where the reverse reaction of freezing takes place.
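The sign argument for melting can be checked numerically. A minimal sketch, assuming the standard values ΔH ≈ 79.7 cal/g for the latent heat of fusion of water and T ≈ 273.16 K for the ice point (these constants are assumptions supplied here, not taken cleanly from the text above):

```python
# Sign of ΔG = ΔH − TΔS around the ice point, per-gram values.
dH = 79.7          # latent heat of fusion of water, cal/g (standard value)
T_eq = 273.16      # melting point at 0.01 °C, in kelvin
dS = dH / T_eq     # entropy of fusion, cal/(g·K), chosen so that dG(T_eq) = 0

def dG(T):
    """Gibbs free energy change for ice -> water at temperature T (kelvin)."""
    return dH - T * dS

print(abs(dG(T_eq)) < 1e-9)  # True: phases in equilibrium at 273.16 K
print(dG(275.0) < 0)         # True: melting is spontaneous above the ice point
print(dG(271.0) > 0)         # True: freezing is favoured below it
```

A negative ΔG marks the spontaneous direction, which is exactly the melting/freezing asymmetry described in the paragraph.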
For those who remain skeptical about the notion that the Trump Administration is working to take down a ‘Deep State’ that has long held power over the Free energy government, the military, and its law enforcement and intelligence agencies, today’s (Free Electricity Free Electricity, Free energy) public hearing on investigations into the Free Electricity Foundation before the House Oversight and Government Reform Committee may very well be a watershed moment.
For example, it strongly influences the metabolism of plants and animals, something that cannot be explained by the attraction-repulsion paradigm. Forget the laws of physics for a minute – ask yourself this – how can a device spin a rotor that has a balanced number of attracting and repelling forces on it? Have you ever made one? I have tried several. Gravity motors – show me a working one. I’ll bet if anyone gets a “vacuum energy device” to work it will draw in energy to replace energy leaving via the wires or output shaft and is therefore no different to solar power in principle, and is not a perpetual motion machine. Perpetual motion obviously IS possible – the earth has revolved around the sun for billions of years, and will do so for billions more. Stars revolve around galaxies, galaxies move at incredible speed through deep space, etc. Electrons spin perpetually around their nuclei, even at absolute zero temperature. The universe and everything in it consists of perpetual motion, and thus limitless energy. The trick is to harness this energy usefully, for human purposes. A lot of valuable progress is lost because some sad people choose to define a free-energy device as “a perpetual motion machine existing in a completely closed system”, and they then shelter behind “the laws of physics”, incomplete as these are known to be. However if you open your mind to accept a free-energy definition of “a device which delivers useful energy without consuming fuel which is not itself free”, then solar energy, tidal energy, etc. classify as “free energy”. Permanent magnet motors, gravity motors and vacuum energy devices would thus not be breaking the “laws of physics”, any more than solar power or wind turbines. There is no need for unicorns of any gender – just common sense, and a bit of open-mindedness.
I might be scrapping my motor and going back to the drawing board. Well, I see that I am not going to gain any more knowledge off this site. I thought I might, but all I have had is Free Electricity calling me names like a little child and none of my questions being answered. Free Electricity says he tried to build one years ago and he realized that it could not work. OK, tell me why. I have the one that I have talked about, and I am not going to show it until I perfect it, but I am thinking of abandoning it for now and trying a whole different design. Can the expert Free Electricity answer this? When magnets have only one pole being used all the time, the magnet will lose its power quickly. What will happen if you use both poles in the repel state? Will that balance the magnet out, or drain it twice as fast? How long will a magnet last running in the repel state all the time? For everybody else that thinks a magnetic motor is perpetual free energy: it’s not. The magnets have to be made and energized, thus in a sense each one is a power cell, and that power cell will run down, forcing you to make and buy more. Not free energy. This is still fun to play with though.
My views are based on the backing of the entire scientific community. These inventors, such as Yildez, are very skilled at presenting their devices for a few minutes and then talking them up as if they will run forever. Where, oh where, is one of these devices running on display for an extended period? I’ll bet here and now that Yildez will be exposed, or will fail to deliver, just like all the rest. A video is never proof of anything. Trouble is, the depth of knowledge (with regards to energy matters) of folks these days is so shallow they will believe anything. There was a video on YT that showed a disc spinning due to a magnet held close to it. After several months of folks like myself debating that it was a fraud, the secret of the hidden battery and motor was revealed – strangely, none of the pro-free-energy folks responded with apologies.
Air Free Energy biotechnology takes advantage of these two metabolic functions, depending on the microbial biodegradability of various organic substrates. The microbes in a biofilter, for example, use the organic compounds as their exclusive source of energy (catabolism) and their sole source of carbon (anabolism). These life processes degrade the pollutants (Figure Free Power. Free energy). Microbes, e.g. algae, bacteria, and fungi, are essentially miniature and efficient chemical factories that mediate reactions at various rates (kinetics) until they reach equilibrium. These “simple” organisms (and the cells within complex organisms alike) need to transfer energy from one site to another to power the machinery needed to stay alive and reproduce. Microbes play a large role in degrading pollutants, whether in natural attenuation, where the available microbial populations adapt to the hazardous wastes as an energy source, or in engineered systems that do the same in a more highly concentrated substrate (Table Free Power. Free Electricity). Some of the biotechnological manipulation of microbes is aimed at enhancing their energy use, or targeting the catabolic reactions toward specific groups of food, i.e. organic compounds. Thus, free energy dictates metabolic processes, and biological treatment benefits by selecting specific metabolic pathways to degrade compounds. This occurs in a step-wise progression after the cell comes into contact with the compound. The initial compound, i.e. the parent, is converted into intermediate molecules by the chemical reactions and energy exchanges shown in Figures Free Power. Free Power and Free Power. Free Power. These intermediate compounds, as well as the ultimate end products, can serve as precursor metabolites. The reactions along the pathway depend on these precursors, electron carriers, the chemical energy carrier adenosine triphosphate (ATP), and organic catalysts (enzymes).
The reactant and product concentrations and environmental conditions, especially the pH of the substrate, affect the observed ΔG∗ values. If a reaction’s ΔG∗ is negative, free energy is released and the reaction will occur spontaneously; the reaction is exergonic. If a reaction’s ΔG∗ is positive, the reaction will not occur spontaneously; however, the reverse reaction will take place, and the reaction is endergonic. Time and energy are limiting factors that determine whether a microbe can efficiently mediate a chemical reaction, so catalytic processes are usually needed. Since an enzyme is a biological catalyst, these compounds (proteins) speed up the chemical reactions of degradation without themselves being used up.
Just like the general concept of energy, free energy has a few definitions suitable for different conditions. In physics, chemistry, and biology, these conditions are thermodynamic parameters (temperature T, volume V, pressure p, etc.). Scientists have come up with several ways to define free energy. The mathematical expression of the Helmholtz free energy is F = U − TS, as given above.
2020-08-03 18:31:15
http://math.stackexchange.com/questions/726804/determining-the-size-of-a-test-bank-given-acceptable-number-of-repeats
# determining the size of a test bank given acceptable number of repeats
I have a question for a challenge that I'm trying to create - having some trouble quantifying the size of the challenge's test bank.
• 20 people are taking a challenge of 9 questions
• the test bank (n) doesn't change, so each new challenge draws from the same test bank (the questions are not replaced by new ones); within a challenge, however, there are 9 unique questions.
• I need to determine the size of the question bank (n) that yields the acceptable rate of repeats, which is 3 repeats with a probability of 95%.
I previously used combinatorics to solve a problem like this, but I couldn't find out how to correctly integrate the number of people, in this case 20, taking the challenge.
Is the allowable number of repeats between any two tests, or across all the questions asked? That is, do we fail if: a) two of the twenty tests share four questions, or b) there are $176$ different questions used, four of them twice? – Ross Millikan Mar 25 '14 at 23:00
I am going to assume we want less than $5\%$ chance that any pair of the $20$ people share exactly four questions. If that is small, the chance that any pair share five or more is very small, so we can ignore it. It should be clear how to update this analysis to cover that. The chance that a given pair of people share four questions is $\frac {{n \choose 4}{n-4 \choose 4}{n-8 \choose 4}}{{n \choose 8}^2}$ where the numerator comes from choosing the shared questions, the non-shared for the first person, then the non-shared for the second. As there are $\frac 1220\cdot 19=190$ pairs of people, it is a slight overestimate to say there is a $$190\frac {{n \choose 4}{n-4 \choose 4}{n-8 \choose 4}}{{n \choose 8}^2}$$ chance that some pair shares four questions. Alpha says you need 143 questions.
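The union bound can be checked with a short script. This sketch implements the answer's expression exactly as written (note it uses 8 questions per test in the combinatorial terms); the function names are mine:

```python
from math import comb

def p_share_four(n):
    """The answer's expression: probability that two fixed 8-question
    draws from a bank of n questions share exactly four questions."""
    return comb(n, 4) * comb(n - 4, 4) * comb(n - 8, 4) / comb(n, 8) ** 2

def any_pair_bound(n, people=20):
    pairs = people * (people - 1) // 2   # C(20, 2) = 190 pairs of test-takers
    return pairs * p_share_four(n)

print(any_pair_bound(142) >= 0.05)  # True: 142 questions is not quite enough
print(any_pair_bound(143) < 0.05)   # True: 143 questions keeps the bound under 5%
```

This confirms the quoted figure: 143 is the smallest bank size for which the union-bound estimate drops below 5%.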
2015-09-03 03:07:32
https://byjus.com/question-answer/in-the-adjoining-figure-a-triangle-is-drawn-to-circumscribe-a-circle-of-radius-2/
Question
# In the adjoining figure, a triangle is drawn to circumscribe a circle of radius $$2\,cm$$ such that the segments $$BD$$ and $$DC$$, into which $$BC$$ is divided by the point of contact $$D$$, are of lengths $$4\,cm$$ and $$3\,cm$$ respectively. If the area of $$\triangle ABC$$ is $$21\,{cm}^{2}$$, find the lengths of sides $$AB$$ and $$AC$$.
Solution
## Given: $$\Delta ABC$$ circumscribes a circle with centre $$O$$ and radius $$2\,cm$$. Point $$D$$ divides $$BC$$ so that $$BD = 4\,cm$$, $$DC = 3\,cm$$, and $$OD = 2\,cm$$. Area of $$\Delta ABC = 21\,cm^2$$.

Join $$OA, OB, OC, OE$$ and $$OF$$.

From the figure:
$$BF$$ and $$BD$$ are tangents to the circle from $$B$$, so $$BF = BD = 4\,cm$$.
$$CD$$ and $$CE$$ are tangents from $$C$$, so $$CE = CD = 3\,cm$$.
$$AF$$ and $$AE$$ are tangents from $$A$$; let $$AE = AF = x\,cm$$.

Now, Area of $$\Delta ABC = \frac{1}{2} \times$$ Perimeter of $$\Delta ABC \times$$ Radius
$$21 = \frac{1}{2}[AB + BC + CA] \times OD$$
$$21 = \frac{1}{2}[(x + 4) + (4 + 3) + (3 + x)] \times 2$$
$$21 = 14 + 2x$$
$$x = 3.5$$

Therefore,
$$AB = AF + FB = 3.5 + 4 = 7.5\,cm$$
$$AC = AE + CE = 3.5 + 3 = 6.5\,cm$$

Mathematics | RS Agarwal | Standard X
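The tangent-length argument can be sanity-checked numerically; a minimal sketch (variable names are mine):

```python
# Incircle relation: area = (1/2) * perimeter * inradius, i.e. area = r * s.
r, bd, dc, area = 2.0, 4.0, 3.0, 21.0

# With AE = AF = x, the semiperimeter is s = x + BD + DC, so area = r * (x + bd + dc).
x = area / r - (bd + dc)   # 21/2 - 7 = 3.5
ab = x + bd                # AB = AF + FB
ac = x + dc                # AC = AE + EC

print(ab, ac)  # 7.5 6.5
```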
2022-01-22 11:41:29
http://cpr-astrophhe.blogspot.com/2013/07/13074555-p-eger-et-al.html
## Search for Very-high-energy gamma-ray emission from Galactic globular clusters with H.E.S.S.
P. Eger, C. van Eldik, for the H. E. S. S. Collaboration
Globular clusters (GCs) are established emitters in the high-energy (HE, 100 MeV–100 GeV) \gamma-ray regime, and very-high-energy (VHE) emission may also be expected, judging from the recent detection of emission from the direction of Terzan 5 with the H.E.S.S. telescope array. To search for VHE \gamma-ray sources associated with other GCs, and to put constraints on leptonic emission models, we systematically analyzed the observations towards 15 GCs taken with H.E.S.S. We searched for individual sources of VHE \gamma-rays from each GC in our sample and also performed a stacking analysis combining the data from all GCs to investigate the hypothesis of a population of faint emitters. Assuming IC emission as the source of emission from Terzan 5, we calculated the expected \gamma-ray flux for each of the 15 GCs, based on their number of millisecond pulsars, their optical brightness, and the energy density of background photon fields. We did not detect significant emission from any of the 15 GCs. The obtained flux upper limits allow us to rule out the simple IC/msPSR scaling model for NGC 6388 and NGC 7078. The upper limits derived from the stacking analyses are factors between 2 and 50 below the flux predicted by the simple leptonic model, depending on the assumed source extent and the dominant target photon fields. Therefore, Terzan 5 still remains exceptional among all GCs, as the VHE \gamma-ray emission either arises from extraordinarily efficient leptonic processes, or from a recent catastrophic event, or is even unrelated to the GC itself.
View original: http://arxiv.org/abs/1307.4555
2017-08-23 00:21:09
https://www.transtutors.com/questions/1list-three-important-ways-in-which-dcf-valuation-models-study-questions-1-list-thre-3268847.htm
# Study Questions
1. List three important ways in which DCF valuation models differ from direct capitalization models.
2. Why might a commercial real estate investor borrow to help finance an investment even if she could afford to pay 100 percent cash?
3. Using the “CFj” key of your financial calculator, determine the IRR of the following series of annual cash flows: CF0 = -$31,400; CF1 = $3,292; CF2 = $3,567; CF3 = $3,850; CF4 = $4,141; and CF5 = $50,659.
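For question 3, the same IRR can be recovered without a financial calculator by bisecting on the discount rate. A sketch; the bracket [0, 1] is an assumption that the IRR lies between 0% and 100%:

```python
def npv(rate, flows):
    # flows[t] is the cash flow at the end of year t (flows[0] is time zero).
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=0.0, hi=1.0, tol=1e-9):
    # Bisection works here because NPV falls as the rate rises
    # for this sign pattern (one outflow, then only inflows).
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, flows) > 0 else (lo, mid)
    return (lo + hi) / 2

flows = [-31_400, 3_292, 3_567, 3_850, 4_141, 50_659]
print(round(irr(flows), 4))  # roughly 0.185, i.e. an IRR near 18.5%
```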
4. A retail shopping center is purchased for $2.1 million. During the next four years, the property appreciates at 4 percent per year. At the time of purchase, the property is financed with a 75 percent loan-to-value ratio for 30 years at 8 percent (annual) with monthly amortization. At the end of year 4, the property is sold with 8 percent selling expenses. What is the before-tax equity reversion?
5. State, in no more than one sentence, the condition for favorable financial leverage in the calculation of NPV.
6. State, in no more than one sentence, the condition for favorable financial leverage in the calculation of the IRR.
7. An office building is purchased with the following projected cash flows:
• NOI is expected to be $130,000 in year 1 with 5 percent annual increases.
• The purchase price of the property is $720,000.
• 100% equity financing is used to purchase the property.
• The property is sold at the end of year 4 for $860,000 with selling costs of 4 percent.
• The required unlevered rate of return is 14 percent.
a. Calculate the unlevered internal rate of return (IRR).
b. Calculate the unlevered net present value (NPV).
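Parts (a) and (b) of question 7 can be sketched numerically (assuming NOI is received at year-end and the year-4 sale nets 96 percent of $860,000; helper names are mine):

```python
def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=0.0, hi=1.0, tol=1e-9):
    # Bisection; NPV is decreasing in the rate for this sign pattern.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, flows) > 0 else (lo, mid)
    return (lo + hi) / 2

noi = [130_000 * 1.05 ** t for t in range(4)]    # NOI for years 1..4
net_sale = 860_000 * (1 - 0.04)                  # sale net of 4% selling costs
flows = [-720_000] + noi[:-1] + [noi[-1] + net_sale]

print(round(irr(flows), 3))     # unlevered IRR, roughly 0.219 (about 21.9%)
print(round(npv(0.14, flows)))  # unlevered NPV at 14%, roughly 173,700
```

Since the NPV at the 14 percent required return is positive, the purchase clears the hurdle on an unlevered basis.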
8. With a purchase price of $350,000, a small warehouse provides for an initial before-tax cash flow of $30,000, which grows by 6 percent per year. If the before-tax equity reversion after four years equals $90,000, and an initial equity investment of $175,000 is required, what is the IRR on the project? If the required going-in levered rate of return on the project is 10 percent, should the warehouse be purchased?
9. You are considering the acquisition of a small office building. The purchase price is $775,000. Seventy-five percent of the purchase price can be borrowed with a 30-year, 7.5 percent mortgage. Payments will be made annually. Up-front financing costs will total three percent of the loan amount. The expected before-tax cash flows from operations, assuming a 5-year holding period, are as follows:

Year 1: $48,492
Year 2: $53,768
Year 3: $59,282
Year 4: $65,043
Year 5: $71,058

The before-tax cash flow from the sale of the property is expected to be $295,050. What is the net present value of this investment, assuming a 12 percent required rate of return on levered cash flows? What is the levered internal rate of return?
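For question 9, a minimal sketch of the levered NPV (assuming the initial equity equals the down payment plus the up-front financing costs, and that the sale proceeds arrive together with the year-5 operating cash flow):

```python
def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

price, ltv, fin_cost = 775_000, 0.75, 0.03
loan = ltv * price                       # 581,250 borrowed
equity = price - loan + fin_cost * loan  # down payment plus financing costs

btcf = [48_492, 53_768, 59_282, 65_043, 71_058]
sale = 295_050                           # before-tax reversion in year 5
flows = [-equity] + btcf[:-1] + [btcf[-1] + sale]

print(round(npv(0.12, flows)))  # positive, so the levered IRR exceeds 12%
```

Because the NPV at the 12 percent required rate is positive, the levered IRR must lie above 12 percent.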
10. You are considering the purchase of an apartment complex. The following assumptions are made:
• The purchase price is $1,000,000. • Potential gross income (PGI) for the first year of operations is projected to be$171,000.
• PGI is expected to increase at 4 percent per year.
• No vacancies are expected.
• Operating expenses are estimated at 35 percent of effective gross income. Ignore capital expenditures.
• The market value of the investment is expected to increase 4 percent per year.
• Selling expenses will be 4 percent.
• The holding period is 4 years.
• The appropriate unlevered rate of return to discount projected NOIs and the projected NSP is 12 percent.
• The required levered rate of return is 14 percent.
• 70 percent of the acquisition price can be borrowed with a 30-year, monthly payment mortgage.
• The annual interest rate on the mortgage will be 8.0 percent.
• Financing costs will equal 2 percent of the loan amount.
• There are no prepayment penalties.
a. Calculate net operating income (NOI) for each of the four years.
b. Calculate the net sale proceeds from the sale of the property.
c. Calculate the net present value of this investment, assuming no mortgage debt. Should you purchase? Why?
d. Calculate the internal rate of return of this investment, assuming no debt. Should you purchase? Why?
e. Calculate the monthly mortgage payment. What is the total per year?
f. Calculate the loan balance at the end of years 1, 2, 3, and 4. (Note: the unpaid mortgage balance at any time is equal to the present value of the remaining payments, discounted at the contract rate of interest.)
g. Calculate the amount of principal reduction achieved during each of the four years.
h. Calculate the total interest paid during each of the four years. (Note: Remember that debt service equals principal plus interest.)
i. Calculate the levered required initial equity investment.
j. Calculate the before-tax cash flow (BTCF) for each of the four years.
k. Calculate the before-tax equity reversion (BTER) from the sale of the property.
l. Calculate the levered net present value of this investment. Should you purchase? Why?
m. Calculate the levered internal rate of return of this investment (with the mortgage debt, but no taxes). Should you purchase? Why?
n. Calculate, for the first year of operations, the: (1) overall (cap) rate of return, (2) equity dividend rate, (3) gross income multiplier, (4) debt coverage ratio.
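Parts (e) and (f) of question 10 reduce to the standard annuity formulas; a sketch (variable names are mine):

```python
# Question 10 loan terms: 70% of $1,000,000 at 8% annual, 30-year monthly loan.
principal = 0.70 * 1_000_000
r = 0.08 / 12            # monthly contract rate
n = 30 * 12              # 360 monthly payments

# Level payment that fully amortizes the loan: P * r / (1 - (1 + r)^-n)
payment = principal * r / (1 - (1 + r) ** -n)

def balance(month):
    """Unpaid balance = present value of the remaining payments,
    discounted at the contract rate (as the question's note says)."""
    return payment * (1 - (1 + r) ** -(n - month)) / r

print(round(payment, 2))   # monthly payment, about 5,136
print(round(balance(12)))  # balance at the end of year 1, about 694,000
```

Annual debt service is simply twelve times the monthly payment, and year-by-year principal reduction is the difference between successive `balance` values.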
11. The expected before-tax IRR on a potential real estate investment is 14 percent. The expected after-tax IRR is 10.5 percent. What is the effective tax rate on this investment?
12. An office building is purchased with the following projected cash flows:
• NOI is expected to be $130,000 in year 1 with 5 percent annual increases. • The purchase price of the property is$720,000.
• 100% equity financing is used to purchase the property
• The property is sold at the end of year 4 for $860,000 with selling costs of 4 percent.
• The required unlevered rate of return is 14 percent.
a. Calculate the unlevered internal rate of return (IRR).
b. Calculate the unlevered net present value (NPV).
2019-10-23 23:04:15
https://www.repository.cam.ac.uk/handle/1810/299224?show=full
Authors: Díaz-García, L. A.; Cenarro, A. J.; López-Sanjuan, C.; Peralta de Arriba, Luis [ORCID 0000-0002-3084-084X]; Ferreras, I.; Cerviño, M.; Márquez, I.; Masegosa, J.; Del Olmo, A.; Perea, J.
Title: Stellar populations of galaxies in the ALHAMBRA survey up to z ∼ 1. IV. Properties of quiescent galaxies on the stellar mass-size plane
Journal: Astronomy and Astrophysics, volume 631 (accepted 2019-09-06; published 2019-11-01)
Publisher: EDP Sciences
ISSN: 0004-6361; eISSN: 1432-0746
URI: https://www.repository.cam.ac.uk/handle/1810/299224
DOI: 10.17863/CAM.46289; version of record: 10.1051/0004-6361/201935257

Abstract. Aims. We perform a comprehensive study of the stellar population properties (formation epoch, age, metallicity, and extinction) of quiescent galaxies as a function of size and stellar mass, to constrain the physical mechanism governing the stellar mass assembly and the likely evolutive scenarios that explain their growth in size. Methods. After selecting all the quiescent galaxies from the ALHAMBRA survey by the dust-corrected stellar mass-colour diagram, we built a shared sample of ~850 quiescent galaxies with reliable measurements of sizes from the HST. This sample is complete in stellar mass and luminosity, I ≤ 23. The stellar population properties were retrieved using the spectral-energy-distribution fitting code MUlti-Filter FITting for stellar population diagnostics (MUFFIT) with various sets of composite stellar population models. Age, formation epoch, metallicity, and extinction were studied on the stellar mass-size plane as a function of size through a Monte Carlo approach, which accounted for uncertainties and degeneracy effects amongst stellar population properties. Results. The stellar population properties of quiescent galaxies and their stellar mass and size since z ~ 1 are correlated. At fixed stellar mass, the more compact the quiescent galaxy, the older and richer in metals it is (1 Gyr and 0.1 dex, respectively). In addition, more compact galaxies may present slightly lower extinctions than their more extended counterparts at the same stellar mass (<0.1 mag). By means of studying constant regions of stellar population properties across the stellar mass-size plane, we obtained empirical relations to constrain the physical mechanism that governs the stellar mass assembly, of the form M∗ ∝ r_c^α, where α amounts to 0.50-0.55 ± 0.09. There are indications that support the idea that the velocity dispersion is tightly correlated with the stellar content of galaxies. The mechanisms driving the evolution of stellar populations can therefore be partly linked to the dynamical properties of galaxies, along with their gravitational potential.
https://www.mysciencework.com/publication/show/classical-yang-baxter-equation-left-invariant-affine-geometry-lie-groups-4d348532
# Classical Yang-Baxter Equation and Left Invariant Affine Geometry on Lie Groups
Type: Published Article
Publication Date: Mar 19, 2002
Submission Date: Mar 19, 2002
DOI: 10.1007/s00229-004-0475-8
arXiv ID: math/0203198
Source: arXiv
Let G be a Lie group with Lie algebra $\Cal G: = T_\epsilon G$ and $T^*G = \Cal G^* \rtimes G$ its cotangent bundle considered as a Lie group, where G acts on $\Cal G^*$ via the coadjoint action. We show that there is a 1-1 correspondence between the skew-symmetric solutions $r\in \wedge^2 \Cal G$ of the Classical Yang-Baxter Equation in G, and the set of connected Lie subgroups of $T^*G$ which carry a left invariant affine structure and whose Lie algebras are lagrangian graphs in $\Cal G \oplus \Cal G^*$. An invertible solution r endows G with a left invariant symplectic structure and hence a left invariant affine structure. In this case we prove that the Poisson Lie tensor $\pi := r^+ - r^-$ is polynomial of degree at most 2 and the double Lie groups of $(G,\pi)$ also carry a canonical left invariant affine structure. In the general case of (not necessarily invertible) solutions r, we supply a necessary and sufficient condition for the geodesic completeness of the associated affine structure.
https://docs.eyesopen.com/toolkits/cookbook/python/depiction/transparent.html
# Generating Transparent PNG
## Problem
You want to depict your molecule in a png image with transparent background for your presentation.
## Ingredients

- OEChem TK - cheminformatics toolkit
- OEDepict TK - molecule depiction toolkit
## Solution
If you are rendering a molecule directly into an image file, all you have to do is call the OERenderMolecule function with the clearbackground = False parameter. If this parameter is True (the default), the OEWhite color is used to clear the image before rendering the molecule.
```python
def RenderMolecule(mol, opts, filename):
    disp = oedepict.OE2DMolDisplay(mol, opts)
    clearbackground = False
    oedepict.OERenderMolecule(filename, disp, clearbackground)
```
## Discussion
In the case where you are rendering a molecule into an OEImage object, this object has to be constructed with the OETransparentColor color.
```python
def RenderMoleculeToImage(mol, opts, filename):
    image = oedepict.OEImage(opts.GetWidth(), opts.GetHeight(),
                             oechem.OETransparentColor)
    disp = oedepict.OE2DMolDisplay(mol, opts)
    clearbackground = False
    oedepict.OERenderMolecule(image, disp, clearbackground)
    oedepict.OEWriteImage(filename, image)
```
Note
The OEDepict TK supports transparent background for png, svg and pdf image files.
https://www.physicsforums.com/threads/i-think-this-is-simple-but-i-just-cant-work-out-how.399247/
# I think this is simple but I just can't work out how!
1. Apr 28, 2010
### oliver.smith8
How does $$\frac{1/a -1/b}{1/a+1/b}= \frac{b-a}{b+a}$$?
I've been trying to work this out for ages.
thanks
2. Apr 28, 2010
### Tac-Tics
It's simple algebra. Here's a few hints.
The left-hand side, as given, is this:
$$\frac{\frac{1}{a} - \frac{1}{b}}{\frac{1}{a} + \frac{1}{b}}$$
But this can be written more clearly as:
$$(\frac{1}{a} - \frac{1}{b})(\frac{1}{a} + \frac{1}{b})^{-1}$$
From there, apply algebraic operations to this expression until it equals the right hand side.
3. Apr 28, 2010
### Gigasoft
Think about what value you need to multiply the left hand side with to make it equal the right hand side.
4. Apr 28, 2010
### elect_eng
Multiply by ab in both the numerator and the denominator. This is multiplying by one.
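Worked out explicitly, the hint in post 4 amounts to the following (pure algebra, valid for $$a, b \neq 0$$ and $$a \neq -b$$):

$$\frac{\frac{1}{a} - \frac{1}{b}}{\frac{1}{a} + \frac{1}{b}} = \frac{ab\left(\frac{1}{a} - \frac{1}{b}\right)}{ab\left(\frac{1}{a} + \frac{1}{b}\right)} = \frac{b - a}{b + a}$$

since $$ab \cdot \frac{1}{a} = b$$ and $$ab \cdot \frac{1}{b} = a$$.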
https://www.springerprofessional.de/advances-in-knowledge-discovery-and-data-mining/16591540
## About this Book
The three-volume set LNAI 11439, 11440, and 11441 constitutes the thoroughly refereed proceedings of the 23rd Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2019, held in Macau, China, in April 2019.
The 137 full papers presented were carefully reviewed and selected from 542 submissions. The papers present new ideas, original research results, and practical development experiences from all KDD related areas, including data mining, data warehousing, machine learning, artificial intelligence, databases, statistics, knowledge engineering, visualization, decision-making systems, and the emerging applications. They are organized in the following topical sections: classification and supervised learning; text and opinion mining; spatio-temporal and stream data mining; factor and tensor analysis; healthcare, bioinformatics and related topics; clustering and anomaly detection; deep learning models and applications; sequential pattern mining; weakly supervised learning; recommender system; social network and graph mining; data pre-processing and feature selection; representation learning and embedding; mining unstructured and semi-structured data; behavioral data mining; visual data mining; and knowledge graph and interpretable data mining.
## Table of Contents
### AAANE: Attention-Based Adversarial Autoencoder for Multi-scale Network Embedding
Network embedding represents nodes in a continuous vector space and preserves structure information from a network. Existing methods usually adopt a “one-size-fits-all” approach when concerning multi-scale structure information, such as first- and second-order proximity of nodes, ignoring the fact that different scales play different roles in embedding learning. In this paper, we propose an Attention-based Adversarial Autoencoder Network Embedding (AAANE) framework, which promotes the collaboration of different scales and lets them vote for robust representations. The proposed AAANE consists of two components: (1) an attention-based autoencoder that effectively captures the highly non-linear network structure and can de-emphasize irrelevant scales during training, and (2) an adversarial regularization that guides the autoencoder in learning robust representations by matching the posterior distribution of the latent embeddings to a given prior distribution. Experimental results on real-world networks show that the proposed approach outperforms strong baselines.
Lei Sang, Min Xu, Shengsheng Qian, Xindong Wu
### NEAR: Normalized Network Embedding with Autoencoder for Top-K Item Recommendation
The recommendation system is an important tool both for business and individual users, aiming to generate a personalized recommended list for each user. Many studies have been devoted to improving the accuracy of recommendation, while have ignored the diversity of the results. We find that the key to addressing this problem is to fully exploit the hidden features of the heterogeneous user-item network, and consider the impact of hot items. Accordingly, we propose a personalized top-k item recommendation method that jointly considers accuracy and diversity, which is called Normalized Network Embedding with Autoencoder for Personalized Top-K Item Recommendation, namely NEAR. Our model fully exploits the hidden features of the heterogeneous user-item network data and generates more general low dimension embedding, resulting in more accurate and diverse recommendation sequences. We compare NEAR with some state-of-the-art algorithms on the DBLP and MovieLens1M datasets, and the experimental results show that our method is able to balance the accuracy and diversity scores.
Dedong Li, Aimin Zhou, Chuan Shi
### Ranking Network Embedding via Adversarial Learning
Network Embedding is an effective and widely used method for extracting graph features automatically in recent years. To handle widely existing large-scale networks, most of the existing scalable methods, e.g., DeepWalk, LINE and node2vec, resort to the negative sampling objective so as to alleviate the expensive computation. Though effective by and large, this strategy can easily generate false, thus low-quality, negative samples due to the trivial noise generation process which is usually a simple variant of the unigram distribution. In this paper, we propose a Ranking Network Embedding (RNE) framework to leverage the ranking strategy to achieve scalability and quality simultaneously. RNE can explicitly encode node similarity ranking information into the embedding vectors, of which we provide two ranking strategies, vanilla and adversarial, respectively. The vanilla strategy modifies the uniform negative sampling method with a consideration of edge existence. The adversarial strategy unifies the triplet sampling phase and the learning phase of the model with the framework of Generative Adversarial Networks. Through adversarial training, the triplet sampling quality can be improved thanks to a softmax generator which constructs hard negatives for a given target. The effectiveness of our RNE framework is empirically evaluated on a variety of real-world networks with multiple network analysis tasks.
Quanyu Dai, Qiang Li, Liang Zhang, Dan Wang
### Selective Training: A Strategy for Fast Backpropagation on Sentence Embeddings
Representation or embedding based machine learning models, such as language models or convolutional neural networks, have shown great potential for improved performance. However, for complex models on large datasets training time can be extensive, approaching weeks, which is often infeasible in practice. In this work, we present a method to reduce training time substantially by selecting training instances that provide relevant information for training. Selection is based on the similarity of the learned representations over input instances, thus allowing for learning a non-trivial weighting scheme from multi-dimensional representations. We demonstrate the efficiency and effectiveness of our approach in several text classification tasks using recursive neural networks. Our experiments show that by removing approximately one fifth of the training data the objective function converges up to six times faster without sacrificing accuracy.
Jan Neerbek, Peter Dolog, Ira Assent
### Extracting Keyphrases from Research Papers Using Word Embeddings
Unsupervised random-walk keyphrase extraction models mainly rely on global structural information of the word graph, with nodes representing candidate words and edges capturing the co-occurrence information between candidate words. However, integrating different types of useful information into the representation learning process to help better extract keyphrases is relatively unexplored. In this paper, we propose a random-walk method to extract keyphrases using word embeddings. Specifically, we first design a new word embedding learning model to integrate local context information of the word graph (i.e., the local word collocation patterns) with some crucial features of candidate words and edges. Then, a novel random-walk ranking model is designed to extract keyphrases by leveraging such word embeddings. Experimental results show that our approach outperforms 8 state-of-the-art unsupervised methods on two real datasets consistently for keyphrase extraction.
Wei Fan, Huan Liu, Suge Wang, Yuxiang Zhang, Yaocheng Chang
### Sequential Embedding Induced Text Clustering, a Non-parametric Bayesian Approach
Current state-of-the-art nonparametric Bayesian text clustering methods model documents through multinomial distribution on bags of words. Although these methods can effectively utilize the word burstiness representation of documents and achieve decent performance, they do not explore the sequential information of text and relationships among synonyms. In this paper, the documents are jointly modeled by bags of words, sequential features, and word embeddings. We propose Sequential Embedding induced Dirichlet Process Mixture Model (SiDPMM) to effectively exploit this joint document representation in text clustering. The sequential features are extracted by the encoder-decoder component. Word embeddings produced by the continuous-bag-of-words (CBOW) model are introduced to handle synonyms. Experimental results demonstrate the benefits of our model in two major aspects: (1) improved performance across multiple diverse text datasets in terms of the normalized mutual information (NMI); (2) more accurate inference of ground truth cluster numbers with regularization effect on tiny outlier clusters.
Tiehang Duan, Qi Lou, Sargur N. Srihari, Xiaohui Xie
### SSNE: Status Signed Network Embedding
This work studies the problem of signed network embedding, which aims to obtain low-dimensional vectors for nodes in signed networks. Existing works mostly focus on learning representations via characterizing the social structural balance theory in signed networks. However, structural balance theory could not well satisfy some of the fundamental phenomena in real-world signed networks such as the direction of links. As a result, in this paper we integrate another theory Status Theory into signed network embedding since status theory can better explain the social mechanisms of signed networks. To be specific, we characterize the status of nodes in the semantic vector space and well design different ranking objectives for positive and negative links respectively. Besides, we utilize graph attention to assemble the information of neighborhoods. We conduct extensive experiments on three real-world datasets and the results show that our model can achieve a significant improvement compared with baselines.
Chunyu Lu, Pengfei Jiao, Hongtao Liu, Yaping Wang, Hongyan Xu, Wenjun Wang
### On the Network Embedding in Sparse Signed Networks
Network embedding, which learns low-dimensional node representations in a graph such that the network structure is preserved, has gained significant attention in recent years. Most state-of-the-art embedding methods have mainly designed algorithms for representing nodes in unsigned social networks. Moreover, recent embedding approaches designed for the sparse real-world signed networks have several limitations, especially in the presence of a vast majority of disconnected node pairs with opposite polarities towards their common neighbors. In this paper, we propose sign2vec, a deep learning based embedding model designed to represent nodes in a sparse signed network. sign2vec leverages signed random walks to capture the higher-order neighborhood relationships between node pairs, irrespective of their connectivity. We design a suitable objective function to optimize the learned node embeddings such that the link forming behavior of individual nodes is captured. Experiments on empirical signed network datasets demonstrate the effectiveness of embeddings learned by sign2vec for several downstream applications while outperforming state-of-the-art baseline algorithms.
Ayan Kumar Bhowmick, Koushik Meneni, Bivas Mitra
### MSNE: A Novel Markov Chain Sampling Strategy for Network Embedding
Network embedding methods have obtained great progresses on many tasks, such as node classification and link prediction. Sampling strategy is very important in network embedding. It is still a challenge for sampling in a network with complicated topology structure. In this paper, we propose a high-order Markov chain Sampling strategy for Network Embedding (MSNE). MSNE selects the next sampled node based on a distance metric between nodes. Due to high-order sampling, it can exploit the whole sampled path to capture network properties and generate expressive node sequences which are beneficial for downstream tasks. We conduct the experiments on several benchmark datasets. The results show that our model can achieve substantial improvements in two tasks of node classification and link prediction. (Datasets and code are available at https://github.com/SongY123/MSNE .)
Ran Wang, Yang Song, Xin-yu Dai
### Auto-encoder Based Co-training Multi-view Representation Learning
Multi-view learning is a learning problem that utilizes the various representations of an object to mine valuable knowledge and improve the performance of the learning algorithm, and one of the significant directions of multi-view learning is sub-space learning. As is well known, the auto-encoder is a method of deep learning which can learn the latent features of raw data by reconstructing the input. Based on this, we propose a novel algorithm called Auto-encoder based Co-training Multi-View Learning (ACMVL), which utilizes both complementarity and consistency and finds a joint latent feature representation of multiple views. The algorithm has two stages: the first is to train an auto-encoder for each view, and the second is to train a supervised network. Interestingly, the two stages share the weights partly and assist each other through a co-training process. According to the experimental results, we can learn a well-performing latent feature representation, and the auto-encoder of each view has more powerful reconstruction ability than a traditional auto-encoder.
Run-kun Lu, Jian-wei Liu, Yuan-fang Wang, Hao-jie Xie, Xin Zuo
### Robust Semi-supervised Representation Learning for Graph-Structured Data
The success of machine learning algorithms generally depends on data representation, and recently many representation learning methods have been proposed. However, learning a good representation may not always benefit the classification tasks. It sometimes even hurts the performance, as the learned representation may not be related to the ultimate tasks, especially when the labeled examples are too few to afford a reliable model selection. In this paper, we propose a novel robust semi-supervised graph representation learning method based on graph convolutional network. To make the learned representation more related to the ultimate classification task, we propose to extend label information based on the smooth assumption and obtain pseudo-labels for unlabeled nodes. Moreover, to make the model robust with noise in the pseudo-label, we propose to apply a large margin classifier to the learned representation. Influenced by the pseudo-label and the large-margin principle, the learned representation can not only exploit the label information encoded in the graph-structure sufficiently but also can produce a more rigorous decision boundary. Experiments demonstrate the superior performance of the proposal over many related methods.
Lan-Zhe Guo, Tao Han, Yu-Feng Li
### Characterizing the SOM Feature Detectors Under Various Input Conditions
A classifier with self-organizing maps (SOM) as feature detectors resembles the biological visual system learning mechanism. Each SOM feature detector is defined over a limited domain of viewing condition, such that its nodes instantiate the presence of an object’s part in the corresponding domain. The weights of the SOM nodes are trained via competition, similar to the development of the visual system. We argue that to approach human pattern recognition performance, we must look for a more accurate model of the visual system, not only in terms of the architecture, but also in how the node connections are developed, such as that of the SOM’s feature detectors. This work characterizes SOM as feature detectors to test the similarity of its response vis-à-vis the response of the biological visual system, and to benchmark its performance vis-à-vis the performance of the traditional feature detector convolution filter. We use various input environments, i.e. inputs with limited patterns, inputs with various input perturbation and inputs with complex objects, as test cases for evaluation.
Macario O. Cordel, Arnulfo P. Azcarraga
### PCANE: Preserving Context Attributes for Network Embedding
Through mapping network nodes into low-dimensional vectors, network embedding methods have shown promising results for many downstream tasks, such as link prediction and node classification. Recently, attributed network embedding has made progress on networks associated with node attributes. However, it is insufficient to ignore the attributes of the context nodes, which are also helpful for node proximity. In this paper, we propose a new attributed network embedding method named PCANE (Preserving Context Attributes for Network Embedding). PCANE preserves both the network structure and the context attributes by optimizing new objective functions, and further produces more informative node representations. PCANE++ is also proposed to represent isolated nodes and to better represent high-degree nodes. Experiments on 3 real-world attributed networks show that our methods outperform the other network embedding methods on link prediction and node classification tasks.
Danhao Zhu, Xin-yu Dai, Kaijia Yang, Jiajun Chen, Yong He
### A Novel Framework for Node/Edge Attributed Graph Embedding
Graph embedding has attracted increasing attention due to its critical application in social network analysis. Most existing algorithms for graph embedding utilize only the topology information, while recently several methods are proposed to consider node content information. However, the copious information on edges has not been explored. In this paper, we study the problem of representation learning in node/edge attributed graph, which differs from normal attributed graph in that edges can also be contented with attributes. We propose GERI, which learns graph embedding with rich information in node/edge attributed graph through constructing a heterogeneous graph. GERI includes three steps: construct a heterogeneous graph, take a novel and biased random walk to explore the constructed heterogeneous graph and finally use modified heterogeneous skip-gram to learn embedding. Furthermore, we upgrade GERI to semi-supervised GERI (named SGERI) by incorporating label information on nodes. The effectiveness of our methods is demonstrated by extensive comparison experiments with strong baselines on various datasets.
Guolei Sun, Xiangliang Zhang
### Context-Aware Dual-Attention Network for Natural Language Inference
Natural Language Inference (NLI) is a fundamental task in natural language understanding. In spite of the importance of existing research on NLI, the problem of how to exploit the contexts of sentences for more precisely capturing the inference relations (i.e. by addressing the issues such as polysemy and ambiguity) is still much open. In this paper, we introduce the corresponding image into inference process. Along this line, we design a novel Context-Aware Dual-Attention Network (CADAN) for tackling NLI task. To be specific, we first utilize the corresponding images as the Image Attention to construct an enriched representation for sentences. Then, we use the enriched representation as the Sentence Attention to analyze the inference relations from detailed perspectives. Finally, a sentence matching method is designed to determine the inference relation in sentence pairs. Experimental results on large-scale NLI corpora and real-world NLI alike corpus demonstrate the superior effectiveness of our CADAN model.
Kun Zhang, Guangyi Lv, Enhong Chen, Le Wu, Qi Liu, C. L. Philip Chen
### Best from Top k Versus Top 1: Improving Distant Supervision Relation Extraction with Deep Reinforcement Learning
Distant supervision relation extraction is a promising approach to find new relation instances from large text corpora. Most previous works employ the top 1 strategy, i.e., predicting the relation of a sentence with the highest confidence score, which is not always the optimal solution. To improve distant supervision relation extraction, this work applies the best from top k strategy to explore the possibility of relations with lower confidence scores. We approach the best from top k strategy using a deep reinforcement learning framework, where the model learns to select the optimal relation among the top k candidates for better predictions. Specifically, we employ a deep Q-network, trained to optimize a reward function that reflects the extraction performance under distant supervision. The experiments on three public datasets - of news articles, Wikipedia and biomedical papers - demonstrate that the proposed strategy improves the performance of traditional state-of-the-art relation extractors significantly. We achieve an improvement of 5.13% in average $$F_1$$-score over four competitive baselines.
Yaocheng Gui, Qian Liu, Tingming Lu, Zhiqiang Gao
### Towards One Reusable Model for Various Software Defect Mining Tasks
Software defect mining is playing an important role in software quality assurance. Many deep neural network based models have been proposed for software defect mining tasks, and have pushed forward the state-of-the-art mining performance. These deep models usually require a huge amount of task-specific source code for training to capture the code functionality to mine the defects. But such requirement is often hard to be satisfied in practice. On the other hand, lots of free source code and corresponding textual explanations are publicly available in the open source software repositories, which is potentially useful in modeling code functionality. However, no previous studies ever leverage these resources to help defect mining tasks. In this paper, we propose a novel framework to learn one reusable deep model for code functional representation using the huge amount of publicly available task-free source code as well as their textual explanations. And then reuse it for various software defect mining tasks. Experimental results on three major defect mining tasks with real world datasets indicate that by reusing this model in specific tasks, the mining performance outperforms its counterpart that learns deep models from scratch, especially when the training data is insufficient.
Heng-Yi Li, Ming Li, Zhi-Hua Zhou
### User Preference-Aware Review Generation
There are more and more online sites that allow users to express their sentiments by writing reviews. Recently, researchers have paid attention to review generation. They generate review text under specific contexts, such as rating, user ID or product ID. The encoder-attention-decoder based methods achieve impressive performance in this task. However, these methods do not consider user preference when generating reviews. Only considering numeric contexts such as user ID or product ID, these methods tend to generate generic and boring reviews, which results in a lack of diversity when generating reviews for different users or products. We propose a user preference-aware review generation model to take account of user preference. User preference reflects the characteristics of the user and has a great impact when the user writes reviews. Specifically, we extract keywords from users’ reviews using a score function as user preference. The decoder generates words depending on not only the context vector but also user preference when decoding. Through considering users’ preferred words explicitly, we generate diverse reviews. Experiments on a real review dataset from Amazon show that our model outperforms state-of-the-art baselines according to two evaluation metrics.
Wei Wang, Hai-Tao Zheng, Hao Liu
### Mining Cluster Patterns in XML Corpora via Latent Topic Models of Content and Structure
We present two innovative machine-learning approaches to topic model clustering for the XML domain. The first approach exploits consolidated clustering techniques in order to partition the input XML documents by their meaning, which is captured through a new Bayesian probabilistic topic model whose novelty is the incorporation of Dirichlet-multinomial distributions for both content and structure. In the second approach, a novel Bayesian probabilistic generative model of XML corpora seamlessly integrates the aforesaid topic model with clustering. Both are conceived as interacting latent factors that govern the wording of the input XML documents. Experiments over real-world benchmark XML corpora reveal the superior effectiveness of the devised approaches in comparison to several state-of-the-art competitors.
Gianni Costa, Riccardo Ortale
### A Large-Scale Repository of Deterministic Regular Expression Patterns and Its Applications
Deterministic regular expressions (DREs) have been used in a myriad of areas in data management. However, to the best of our knowledge, there has been no large-scale repository of DREs in the literature. In this paper, based on a large corpus of data that we harvested from the Web, we build a large-scale repository of DREs: we first collect a repository by analyzing the determinism of the real data, and then further process the data using normalized DREs to construct a compact repository of DREs, called the DRE pattern set. Finally, we use our DRE patterns as benchmark datasets for several algorithms that previously lacked experiments on real DRE data. Experimental results demonstrate the usefulness of the repository.
Haiming Chen, Yeting Li, Chunmei Dong, Xinyu Chu, Xiaoying Mou, Weidong Min
### Determining the Impact of Missing Values on Blocking in Record Linkage
Record linkage is the process of integrating information from the same underlying entity across disparate data sets. This process, which is increasingly utilized to build accurate representations of individuals and organizations for a variety of applications, ranging from credit worthiness assessments to continuity of medical care, can be computationally intensive because it requires comparing large quantities of records over a range of attributes. To reduce the amount of computation in record linkage in big data settings, blocking methods, which are designed to limit the number of record pair comparisons that need to be performed, are critical for scaling up the record linkage process. These methods group potential matches into blocks, often using a subset of attributes, before a final comparator function predicts which record pairs within the blocks correspond to matches. Yet data corruption and missing values adversely influence the performance of blocking methods (e.g., they may cause some matching records not to be placed in the same block). While there has been some investigation into the impact of missing values on general record linkage techniques (e.g., the comparator function), no study has addressed the impact of missing values on blocking methods. To address this issue, in this work, we systematically perform a detailed empirical analysis of the individual and joint impact of missing values and data corruption on different blocking methods using realistic data sets. Our results show that blocking approaches that do not depend on a single type of blocking attribute are more robust against missing values. In addition, our results indicate that blocking parameters must be chosen carefully for different blocking techniques.
Imrul Chowdhury Anindya, Murat Kantarcioglu, Bradley Malin
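To make the blocking step discussed above concrete, the following is a minimal Python sketch (not the authors' code; all record fields and values are illustrative) of exact-match blocking on a single attribute. It shows why a missing blocking-key value silently removes a true match from consideration: the record never lands in any block, so the comparator never sees its pair.

```python
from collections import defaultdict

def block_records(records, key):
    """Group records into blocks by the exact value of one blocking attribute.

    Records whose blocking key is missing (None) end up in no usable block,
    so any true match involving them is silently lost.
    """
    blocks = defaultdict(list)
    for rec in records:
        value = rec.get(key)
        if value is not None:  # a missing key drops the record entirely
            blocks[value].append(rec["id"])
    return dict(blocks)

records = [
    {"id": 1, "zip": "75080", "name": "J. Doe"},
    {"id": 2, "zip": "75080", "name": "John Doe"},  # true match of record 1
    {"id": 3, "zip": None,    "name": "John Doe"},  # same person, zip missing
]

blocks = block_records(records, "zip")
# only records 1 and 2 are ever compared; record 3 is never paired
```

A blocking scheme that combines several attributes (so that a record missing one key can still be blocked on another) avoids this failure mode, which matches the paper's finding that methods not tied to a single blocking attribute are more robust.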
### Bridging the Gap Between Research and Production with CODE
Despite the ever-increasing enthusiasm from industry, artificial intelligence and machine learning form a much-hyped area where results tend to be exaggerated or misunderstood. Many novel models proposed in research papers never end up being deployed to production. The goal of this paper is to highlight four important aspects which are often neglected in real-world machine learning projects, namely Communication, Objectives, Deliverables, Evaluations (CODE). By carefully considering these aspects, we can avoid common pitfalls and carry out a smoother technology transfer to real-world applications. We draw on our own experiences and mistakes made while building a real-world online advertising platform powered by machine learning technology, aiming to provide general guidelines for translating ML research results into successful industry projects.
Yiping Jin, Dittaya Wanvarie, Phu T. V. Le
### Distance2Pre: Personalized Spatial Preference for Next Point-of-Interest Prediction
Point-of-interest (POI) prediction is a key task in location-based social networks. It captures user preference to predict POIs. Recent studies demonstrate that spatial influence is significant for prediction: the distance can be converted to a weight reflecting the relevance of two POIs, or can be utilized to find nearby locations. However, previous studies have largely ignored the correlation between user and distance. When people choose the next POI, they consider the distance at the same time. Besides, spatial influence varies greatly across users. In this work, we propose a Distance-to-Preference (Distance2Pre) network for next POI prediction. We first acquire the user's sequential preference by modeling check-in sequences. Then, we propose to acquire the spatial preference by modeling distances between successive POIs. This is a personalized process that can capture the relationship in user-distance interactions. Moreover, we propose two preference encoders, a linear fusion and a non-linear fusion, which explore different ways to fuse the above two preferences. Experiments on two real-world datasets show the superiority of our proposed network.
Qiang Cui, Yuyuan Tang, Shu Wu, Liang Wang
### Using Multi-objective Optimization to Solve the Long Tail Problem in Recommender System
An improved algorithm for recommender systems is proposed in this paper, in which not only the accuracy but also the comprehensiveness of recommended items is considered. We use a weighted similarity measure based on the non-dominated sorting genetic algorithm II (NSGA-II): the search for the optimal weight vector is formulated as a multi-objective optimization problem, with both accuracy and coverage taken as objective functions simultaneously. Experimental results show that the proposed algorithm improves coverage while maintaining accuracy.
Jiaona Pang, Jun Guo, Wei Zhang
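The core of NSGA-II is non-dominated sorting over objective vectors, here (accuracy, coverage). The sketch below is a generic illustration of that building block with made-up objective values, not the paper's implementation: it defines the Pareto-dominance test and extracts the first non-dominated front from a set of candidate weight-vector evaluations.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (both objectives maximized):
    a is at least as good in every objective and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated_front(solutions):
    """Return the first Pareto front: solutions dominated by no other."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# each tuple is (accuracy, coverage) of one candidate weight vector (made up)
candidates = [(0.90, 0.20), (0.85, 0.40), (0.70, 0.40), (0.60, 0.90)]
front = non_dominated_front(candidates)
# (0.70, 0.40) is dominated by (0.85, 0.40) and drops out of the front
```

NSGA-II repeats this sorting over successive fronts and uses crowding distance to keep the population spread along the accuracy-coverage trade-off curve.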
### Event2Vec: Learning Event Representations Using Spatial-Temporal Information for Recommendation
Event-based social networks (EBSN), such as meetup.com and plancast.com, have witnessed increased popularity and rapid growth in recent years. In EBSN, a user can choose to join any event, such as a conference, house party, or drinking event. In this paper, we present a novel model, Event2Vec, which explores how representation learning for events incorporating spatial-temporal information can help event recommendation in EBSN. The spatial-temporal information represents the physical location and the time where and when an event will take place. It has typically been modeled as a bias in conventional recommendation models; however, such an approach ignores the rich semantics associated with the spatial-temporal information. In Event2Vec, the spatial-temporal influences are naturally incorporated into the learning of latent representations for events, so that Event2Vec predicts a user's preference on events more accurately. We evaluate the effectiveness of the proposed model on three real datasets; our experiments show that with proper modeling of the spatial-temporal information, we can significantly improve event recommendation performance.
Yan Wang, Jie Tang
### Maximizing Gain over Flexible Attributes in Peer to Peer Marketplaces
Peer to peer marketplaces enable the transactional exchange of services directly between people. In such platforms, those providing a service are faced with various choices. For example, in travel peer to peer marketplaces, although some amenities (attributes) in a property are fixed, others are relatively flexible and can be provided without significant effort. Providing an attribute is usually associated with a cost. Naturally, different sets of attributes may have different "gains" for a service provider. Consequently, given a limited budget, deciding which attributes to offer is challenging. In this paper, we formally introduce and define the problem of Gain Maximization over Flexible Attributes (GMFA) and study its complexity. We provide a practically efficient exact algorithm for the GMFA problem that can handle any monotonic gain function. Since the users of peer to peer marketplaces may not have access to any information beyond the existing tuples in the database, as the next part of our contribution, we introduce the notion of frequent-item based count (FBC), which utilizes nothing but the database itself. We conduct a comprehensive experimental evaluation on real data from AirBnB and a case study that confirm the efficiency and practicality of our proposal.
Abolfazl Asudeh, Azade Nazi, Nick Koudas, Gautam Das
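For tiny instances, the GMFA problem statement can be made concrete as budget-constrained subset selection over a monotonic gain function. The brute-force sketch below uses hypothetical amenity names, costs, and an assumed additive gain; the paper's exact algorithm is far more efficient and handles arbitrary monotonic gains, so this is only an illustration of the problem, not of their method.

```python
from itertools import combinations

def best_attribute_set(costs, gain, budget):
    """Exhaustive search: pick the subset of flexible attributes with the
    highest gain whose total cost fits within the budget (tiny instances)."""
    attrs = list(costs)
    best, best_gain = frozenset(), gain(frozenset())
    for r in range(1, len(attrs) + 1):
        for subset in combinations(attrs, r):
            s = frozenset(subset)
            if sum(costs[a] for a in s) <= budget and gain(s) > best_gain:
                best, best_gain = s, gain(s)
    return best, best_gain

# hypothetical amenities with provision costs and an additive gain function
costs = {"wifi": 1, "parking": 3, "breakfast": 2}
values = {"wifi": 5, "parking": 9, "breakfast": 4}
gain = lambda s: sum(values[a] for a in s)

best, best_gain = best_attribute_set(costs, gain, budget=4)
# with budget 4, {wifi, parking} (cost 4, gain 14) beats every alternative
```

The exponential blow-up of this enumeration over subsets is exactly why an efficient exact algorithm, as the paper proposes, is needed for realistic attribute counts.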
### An Attentive Spatio-Temporal Neural Model for Successive Point of Interest Recommendation
In a successive Point of Interest (POI) recommendation problem, analyzing user behaviors and contextual check-in information from past POI visits is essential in predicting, and thus recommending, where users would likely want to visit next. Although several works, especially Matrix Factorization and/or Markov chain based methods, have been proposed to solve this problem, they make strong independence and conditioning assumptions. In this paper, we propose a deep Long Short Term Memory recurrent neural network model with a memory/attention mechanism for the successive Point-of-Interest recommendation problem, which captures both the sequential and temporal/spatial characteristics in its learned representations. Experimental results on two popular Location-Based Social Networks illustrate significant improvements of our method over the state-of-the-art methods. Our method is also robust to overfitting compared with popular methods for recommendation tasks.
Khoa D. Doan, Guolei Yang, Chandan K. Reddy
### Mentor Pattern Identification from Product Usage Logs
A typical software tool for solving complex problems tends to expose a rich set of features to its users. This creates challenges such as new users facing a steep onboarding experience and current users tending to use only a small fraction of the software's features. This paper describes and solves an unsupervised mentor pattern identification problem from product usage logs for softening both challenges. The problem is formulated as identifying a set of users (mentors) that satisfies three mentor qualification metrics: (a) the mentor set is small, (b) every user is close to some mentor as per usage pattern, and (c) every feature has been used by some mentor. The proposed solution models the task as a non-convex variant of a regularized logistic regression problem and develops an alternating minimization style algorithm to solve it. Numerical experiments validate the necessity and effectiveness of mentor identification towards improving the performance of a k-NN based product feature recommendation system for a real-world dataset. Further, t-SNE visuals demonstrate that the proposed algorithm achieves a trade-off that is both quantitatively and qualitatively distinct from alternative approaches to mentor identification such as Maximum Marginal Relevance and K-means.
Ankur Garg, Aman Kharb, Yash H. Malviya, J. P. Sagar, Atanu R. Sinha, Iftikhar Ahamath Burhanuddin, Sunav Choudhary
### AggregationNet: Identifying Multiple Changes Based on Convolutional Neural Network in Bitemporal Optical Remote Sensing Images
The detection of multiple changes (i.e., different change types) in bitemporal remote sensing images is a challenging task. Numerous methods focus on detecting the changing locations while the detailed "from-to" change types are neglected. This paper presents a supervised framework named AggregationNet to identify the specific "from-to" change types. AggregationNet takes two image patches as input and directly outputs the change types. It comprises a feature extraction part and a feature aggregation part. Deep "from-to" features are extracted by the feature extraction part, which is a two-branch convolutional neural network. The feature aggregation part is adopted to explore the temporal correlation of the bitemporal image patches. A one-hot label map is proposed to facilitate AggregationNet: one element in the label map is set to 1 and the others are set to 0, and different change types are represented by different locations of the 1 in the one-hot label map. To verify the effectiveness of the proposed framework, we perform experiments on general optical remote sensing image classification datasets as well as a change detection dataset. Extensive experimental results demonstrate the effectiveness of the proposed method.
Qiankun Ye, Xiankai Lu, Hong Huo, Lihong Wan, Yiyou Guo, Tao Fang
### Detecting Micro-expression Intensity Changes from Videos Based on Hybrid Deep CNN
Facial micro-expressions, which usually last only for a fraction of a second, are challenging to detect by the human eye or machine. They are useful for understanding the genuine emotional state of a human face, and have various applications in the education, medical, surveillance and legal sectors. Existing works on micro-expressions are focused on binary classification of the micro-expressions. However, detecting micro-expression intensity changes over the spanning time, i.e., micro-expression profiling, has not been addressed in the literature. In this paper, we present a novel deep Convolutional Neural Network (CNN) based hybrid framework for micro-expression intensity change detection together with an image pre-processing technique. The two components of our hybrid framework, namely a micro-expression stage classifier and an intensity estimator, are designed using 3D and 2D shallow deep CNNs respectively. Moreover, we propose a fusion mechanism to improve the micro-expression intensity classification accuracy. Evaluation using the recent benchmark micro-expression datasets CASME, CASME II and SAMM demonstrates that our hybrid framework can accurately classify the various intensity levels of each micro-expression. Further, comparison with the state-of-the-art methods reveals the superiority of our hybrid approach in classifying micro-expressions accurately.
Selvarajah Thuseethan, Sutharshan Rajasegarar, John Yearwood
### A Multi-scale Recalibrated Approach for 3D Human Pose Estimation
The major challenge for 3D human pose estimation is the ambiguity in the process of regressing 3D poses from 2D. The ambiguity is introduced by poor exploitation of image cues, especially spatial relations. Previous works try to use a weakly-supervised method to constrain illegal spatial relations instead of leveraging image cues directly. We follow the weakly-supervised method to train an end-to-end network by first detecting 2D body joint heatmaps, and then constraining 3D regression through the 2D heatmaps. To further utilize the inherent spatial relations, we propose a multi-scale recalibrated approach to regress the 3D pose. The recalibration is integrated into the network as an independent module, and the scale factor is altered to capture information at different resolutions. With the additional multi-scale recalibration modules, the spatial information in the pose is better exploited in the regression process. The whole network is fine-tuned for the extra parameters. The quantitative result on the Human3.6m dataset demonstrates that the performance surpasses the state-of-the-art. Qualitative evaluation results on the Human3.6m and in-the-wild MPII datasets show the effectiveness and robustness of our approach, which can handle complex situations such as self-occlusions.
Ziwei Xie, Hailun Xia, Chunyan Feng
### Gossiping the Videos: An Embedding-Based Generative Adversarial Framework for Time-Sync Comments Generation
Recent years have witnessed the successful rise of the time-sync “gossiping comment”, or so-called “Danmu” combined with online videos. Along this line, automatic generation of Danmus may attract users with better interactions. However, this task could be extremely challenging due to the difficulties of informal expressions and “semantic gap” between text and videos, as Danmus are usually not straightforward descriptions for the videos, but subjective and diverse expressions. To that end, in this paper, we propose a novel Embedding-based Generative Adversarial (E-GA) framework to generate time-sync video comments with “gossiping” behavior. Specifically, we first model the informal styles of comments via semantic embedding inspired by variational autoencoders (VAE), and then generate Danmus in a generatively adversarial way to deal with the gap between visual and textual content. Extensive experiments on a large-scale real-world dataset demonstrate the effectiveness of our E-GA framework.
Guangyi Lv, Tong Xu, Qi Liu, Enhong Chen, Weidong He, Mingxiao An, Zhongming Chen
### Self-paced Robust Deep Face Recognition with Label Noise
Deep face recognition has achieved rapid development but still suffers from occlusions, illumination and pose variations, especially for face identification. The success of deep learning models in face recognition lies in large-scale high-quality face data with accurate labels. However, in real-world applications, the collected data may be mixed with severe label noise, which significantly degrades the generalization ability of deep models. To alleviate the impact of label noise on face recognition, inspired by curriculum learning, we propose a self-paced deep learning model (SPDL) by introducing a negative $l_1$-norm regularizer for face recognition with label noise. During training, SPDL automatically evaluates the cleanness of samples in each batch and chooses cleaner samples for training while abandoning the noisy samples. To demonstrate the effectiveness of SPDL, we use deep convolutional neural network architectures for the task of robust face recognition. Experimental results show that our SPDL achieves superior performance on LFW, MegaFace and YTF under different levels of label noise.
Pengfei Zhu, Wenya Ma, Qinghua Hu
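The per-batch sample selection described above can be sketched as hard-weighted self-paced learning: samples with loss below a pace threshold are kept, and the threshold grows over iterations so harder examples are gradually admitted. This is a generic illustration with hypothetical loss values, not the paper's exact SPDL objective with its negative $l_1$-norm regularizer.

```python
import numpy as np

def select_clean(losses, lam):
    """Self-paced hard weighting: v_i = 1 if loss_i < lam else 0, so
    large-loss (likely mislabeled) samples are excluded from the update."""
    return (np.asarray(losses) < lam).astype(float)

def training_pass(losses, lam, growth=1.5):
    """One self-paced 'pace': compute selection weights for this batch,
    then raise the threshold so harder samples are admitted later."""
    weights = select_clean(losses, lam)
    return weights, lam * growth

batch_losses = [0.1, 0.3, 2.5, 0.2, 4.0]  # 2.5 and 4.0 look like noisy labels
weights, lam = training_pass(batch_losses, lam=1.0)
```

In a real training loop the weights would multiply each sample's loss before backpropagation, so the noisy samples contribute no gradient while the pace threshold grows.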
### Multi-Constraints-Based Enhanced Class-Specific Dictionary Learning for Image Classification
Sparse representation based on dictionary learning has been widely applied in recognition tasks. Existing methods only work well under the condition that the training samples are uncontaminated or contaminated by only a little noise; with increasing noise, they are not robust for image classification. To address this problem, we propose a novel multi-constraints-based enhanced class-specific dictionary learning (MECDL) approach for image classification, in which the dictionary learning framework is composed of a shared dictionary and class-specific dictionaries. For the class-specific dictionaries, we apply the Fisher discriminant criterion to obtain a structured dictionary, and the sparse coefficients corresponding to the class-specific dictionaries are also subjected to this Fisher-based idea, which yields discriminative coefficients. At the same time, we apply a low-rank constraint to these dictionaries to remove large noise. For the shared dictionary, we impose a low-rank constraint on it, and the corresponding intra-class coefficients are encouraged to be as similar as possible. The experimental results on three well-known databases suggest that the proposed method enhances the discriminative ability of the dictionary compared with state-of-the-art dictionary learning algorithms. Moreover, even under the largest noise level, our approach achieves a high recognition rate of over 80%.
Ze Tian, Ming Yang
### Discovering Senile Dementia from Brain MRI Using Ra-DenseNet
With the rapid development of the medical industry, there is a growing demand for disease diagnosis using machine learning technology, and the recent success of deep learning brings it to a new height. This paper focuses on the application of deep learning to discover senile dementia from brain magnetic resonance imaging (MRI) data. In this work, we propose a novel deep learning model based on the Dense convolutional Network (DenseNet), denoted ResNeXt Adam DenseNet (Ra-DenseNet), where each block of DenseNet is modified using ResNeXt and the adapter of DenseNet is optimized by the Adam algorithm. It compresses the number of layers in DenseNet from 121 to 40 by exploiting the key characteristics of ResNeXt, which reduces running complexity and inherits the advantages of Group Convolution technology. Experimental results on a real-world MRI data set show that our Ra-DenseNet achieves a classification accuracy of 97.1% and dramatically outperforms the existing state-of-the-art baselines (i.e., LeNet, AlexNet, VGGNet, ResNet and DenseNet).
Xiaobo Zhang, Yan Yang, Tianrui Li, Hao Wang, Ziqing He
### Granger Causality for Heterogeneous Processes
Discovery of temporal structures and finding causal interactions among time series have recently attracted the attention of the data mining community. Among various causal notions, graphical Granger causality is well-known due to its intuitive interpretation and computational simplicity. Most current graphical approaches are designed for homogeneous datasets, i.e. the interacting processes are assumed to have the same data distribution. Since many applications generate heterogeneous time series, the question arises how to leverage graphical Granger models to detect temporal causal dependencies among them. Profiting from generalized linear models, we propose an efficient Heterogeneous Graphical Granger Model (HGGM) for detecting causal relations among time series having distributions from the exponential family, which includes a wide range of common distributions, e.g. Poisson and gamma. To guarantee the consistency of our algorithm we employ adaptive Lasso as the variable selection method. Extensive experiments on synthetic and real data confirm the effectiveness and efficiency of HGGM.
Sahar Behzadi, Kateřina Hlaváčková-Schindler, Claudia Plant
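In the linear-Gaussian special case, graphical Granger causality reduces to comparing nested lagged regressions: does adding the candidate series' past reduce the prediction error of the target series? The NumPy sketch below uses synthetic data to illustrate this idea; HGGM generalizes it to exponential-family responses with adaptive Lasso variable selection, which this sketch does not implement.

```python
import numpy as np

def granger_gain(x, y, lag=2):
    """Residual-sum-of-squares reduction when y's own lagged past is
    augmented with x's lagged past; a large positive gain suggests that
    x Granger-causes y (linear-Gaussian case, ordinary least squares)."""
    n = len(y)
    Y = y[lag:]
    own = np.column_stack([y[lag - k : n - k] for k in range(1, lag + 1)])
    full = np.column_stack([own] + [x[lag - k : n - k] for k in range(1, lag + 1)])
    rss = lambda X: float(np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2))
    return rss(own) - rss(full)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):          # y is driven by x with a one-step delay
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

gain_xy = granger_gain(x, y)     # large: x's past strongly predicts y
gain_yx = granger_gain(y, x)     # near zero: y's past carries no news about x
```

The asymmetry of the two gains is what the graphical model exploits edge by edge; in higher dimensions, the Lasso penalty selects which lagged series enter each regression.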
### Knowledge Graph Embedding with Order Information of Triplets
Knowledge graphs (KGs) are large-scale multi-relational directed graphs, which comprise a large amount of triplets. Embedding knowledge graphs into continuous vector space is an essential problem in knowledge extraction. Many existing knowledge graph embedding methods focus on learning rich features from entities and relations with increasingly complex feature engineering. However, they pay little attention to the order information of triplets. As a result, current methods cannot fully capture the inherent directional property of KGs. In this paper, we explore knowledge graph embedding from an ingenious perspective, viewing a triplet as a fixed-length sequence. Based on this idea, we propose a novel recurrent knowledge graph embedding method, RKGE. It uses an order-keeping concatenation operation and a shared sigmoid layer to capture order information and discriminate fine-grained relation-related information. We evaluate our method on knowledge graph completion on benchmark data sets. Extensive experiments show that our approach outperforms state-of-the-art baselines significantly with much lower space complexity. In particular, on sparse KGs, RKGE achieves an 86.5% improvement at Hits@1 on FB15K-237. These outstanding results demonstrate that the order information of triplets is highly beneficial for knowledge graph embedding.
Jun Yuan, Neng Gao, Ji Xiang, Chenyang Tu, Jingquan Ge
### Knowledge Graph Rule Mining via Transfer Learning
Mining logical rules from knowledge graphs (KGs) is an important yet challenging task, especially when the relevant data is sparse. Transfer learning is an actively researched area for addressing the data sparsity issue, where a predictive model is learned for the target domain from that of a similar source domain. In this paper, we propose a novel method for rule learning that employs transfer learning to address the data sparsity issue, in which the most relevant source KGs and candidate rules can be automatically selected for transfer. This is achieved by introducing a similarity in terms of embedding representations of entities, relations and rules. Experiments are conducted on some standard KGs. The results show that the proposed method is able to learn quality rules even with extremely sparse data, and its predictive accuracy outperforms state-of-the-art rule learners (AMIE+ and RLvLR) and link prediction systems (TransE and HOLE).
Pouya Ghiasnezhad Omran, Zhe Wang, Kewen Wang
### Knowledge Base Completion by Inference from Both Relational and Literal Facts
Knowledge base (KB) completion predicts new facts in a KB by performing inference from the existing facts, which is very important for expanding KBs. Most previous KB completion approaches infer new facts only from the relational facts (facts containing object properties) in KBs. In fact, there are a large number of literal facts (facts containing datatype properties) besides the relational ones in most KBs; these literal facts are ignored by previous approaches. This paper studies how to take literal facts into account when making inference, aiming to further improve the performance of KB completion. We propose a new approach that consumes both relational and literal facts to predict new facts. Our approach extracts literal features from literal facts and incorporates them with path-based features extracted from relational facts; a predictive model is then trained on all the features to infer new facts. Experiments on the YAGO KB show that our approach outperforms the compared approaches that only take relational facts as input.
Zhichun Wang, Yong Huang
### EMT: A Tail-Oriented Method for Specific Domain Knowledge Graph Completion
The basic unit of a knowledge graph is the triplet, consisting of a head entity, a relation, and a tail entity. Knowledge graph completion has attracted more and more attention and made great progress. However, existing models are all verified on open-domain data sets; when applied to a specific domain, they are challenged by the actual data distributions. For example, due to the poor representation of tail entities caused by the relation-oriented nature of these models, they cannot handle the completion of an enzyme knowledge graph. Inspired by question answering and the rectilinear propagation of light, this paper puts forward a tail-oriented method: Embedding for Multi-Tails knowledge graph (EMT). Specifically, it first represents the head and relation in a question space; then projects them to the answer space via a tail-related matrix; finally, it obtains the tail entity via a translating operation in the answer space. To overcome the time-space complexity of EMT, this paper includes two improved models: EMT$^v$ and EMT$^s$. Taking some optimal translation and composition models as baselines, link prediction and triplet classification on an enzyme knowledge graph sample and Kinship demonstrate our performance improvements, especially in tail prediction.
Yi Zhang, Zhijuan Du, Xiaofeng Meng
### An Interpretable Neural Model with Interactive Stepwise Influence
Deep neural networks have achieved promising prediction performance, but are often criticized for the lack of interpretability, which is essential in many real-world applications such as health informatics and political science. Meanwhile, it has been observed that many shallow models, such as linear models or tree-based models, are fairly interpretable though not accurate enough. Motivated by these observations, in this paper, we investigate how to fully take advantage of the interpretability of shallow models in neural networks. To this end, we propose a novel interpretable neural model with Interactive Stepwise Influence (ISI) framework. Specifically, in each iteration of the learning process, ISI interactively trains a shallow model with soft labels computed from a neural network, and the learned shallow model is then used to influence the neural network to gain interpretability. Thus ISI could achieve interpretability in three aspects: importance of features, impact of feature value changes, and adaptability of feature weights in the neural network learning process. Experiments on both synthetic and two real-world datasets demonstrate that ISI could generate reliable interpretation with respect to the three aspects, as well as preserve prediction accuracy by comparing with other state-of-the-art methods.
Yin Zhang, Ninghao Liu, Shuiwang Ji, James Caverlee, Xia Hu
### Multivariate Time Series Early Classification with Interpretability Using Deep Learning and Attention Mechanism
Multivariate time-series early classification is an emerging topic in the data mining field with wide applications in areas like biomedicine, finance, and manufacturing. Despite some recent studies on this topic that delivered promising developments, few relevant works can provide good interpretability. In this work, we simultaneously consider the important issues of model performance, earliness, and interpretability, and propose a deep-learning framework based on the attention mechanism for multivariate time-series early classification. In the proposed model, we use a deep-learning method to extract features among multiple variables and capture the temporal relations that exist in multivariate time-series data. Additionally, the proposed method uses the attention mechanism to identify the critical segments related to model performance, providing a base to facilitate better understanding of the model for further decision making. We conducted experiments on three real datasets and compared with several alternatives. While the proposed method achieves comparable performance and earliness compared to other alternatives, more importantly, it provides interpretability by highlighting the important parts of the original data, rendering it easier for users to understand how the prediction is induced from the data.
En-Yu Hsu, Chien-Liang Liu, Vincent S. Tseng
### Backmatter
https://jp.maplesoft.com/support/help/maplesim/view.aspx?path=Student/LinearAlgebra/IdentityMatrix
IdentityMatrix - Maple Help
Student[LinearAlgebra]
IdentityMatrix, Id
construct an Identity Matrix
Calling Sequence
IdentityMatrix(d, options)
Id(d, options)
Parameters
d - (optional) non-negative integer; dimension of the resulting Matrix
options - (optional) parameters; for a complete list, see LinearAlgebra[IdentityMatrix]
Description
• The IdentityMatrix(d) command returns a $d \times d$ Identity Matrix. This command can be abbreviated as Id(d).
Examples
> $\mathrm{with}\left(\mathrm{Student}\left[\mathrm{LinearAlgebra}\right]\right):$
> $\mathrm{IdentityMatrix}\left(4\right)$
$\left[\begin{array}{cccc}{1}& {0}& {0}& {0}\\ {0}& {1}& {0}& {0}\\ {0}& {0}& {1}& {0}\\ {0}& {0}& {0}& {1}\end{array}\right]$ (1)
> $⟨⟨1,2⟩|⟨3,4⟩⟩-t\mathrm{Id}\left(2\right)$
$\left[\begin{array}{cc}{1}{-}{t}& {3}\\ {2}& {4}{-}{t}\end{array}\right]$ (2)
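For readers without Maple, the same two examples can be mirrored in plain Python (a minimal sketch; `identity` is an illustrative helper, not a Maple command):

```python
def identity(d):
    """d x d identity matrix as nested lists, like IdentityMatrix(d)."""
    return [[1 if i == j else 0 for j in range(d)] for i in range(d)]

print(identity(4))  # 4 x 4 identity

# Second example, <<1,2>|<3,4>> - t*Id(2), evaluated at t = 2:
A = [[1, 3], [2, 4]]  # columns <1,2> and <3,4>
t = 2
I2 = identity(2)
print([[A[i][j] - t * I2[i][j] for j in range(2)] for i in range(2)])
# [[-1, 3], [2, 2]]
```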
https://www.gamedev.net/forums/topic/501826-base-class-pointers-derived-class-operator/
# Base Class Pointers, Derived Class operator==
## Recommended Posts
Is there a way to cause a base class pointer to use the operator== of the derived class it's pointing to? I tried to do something like this:
class ABC
{
...
public:
    virtual bool operator==(const ABC& rv) const = 0;
...
};
and then redefine it for each derived class, but that requires that I use an ABC& as the rvalue, which means I can't compare data which isn't in class ABC, but is instead added in the derived classes. In a perfect world, the code should be able to automagically tell that two objects of two different classes are not equal, but I'm not sure how to make it do so. What I'm trying to do is be able to say, in a Tile*[][] which represents my map:
if(map[x][y] == map[z][w])
//do something
but I have some derived tile types (Animated, Static, Harmful, Healing). In case that helps you suggest a better way to fix the whole mess. Thanks!
##### Share on other sites
This is not an equality relation. It's an is-a test, which requires RTTI, and RTTI is not exactly popular in C++.
For this purpose, defining a bool equals(BaseObject *) would be best.
In the body, you can then use dynamic_cast to test for the proper types.
##### Share on other sites
if you want to guarantee that the two types are _absolutely_ the same you can do:
if (typeid(*map[x][y]) == typeid(*map[z][w]) && *map[x][y] == *map[z][w]) { /* do something */ }
(note the dereferences: typeid on the pointer expression itself would only give the static pointer type), but that means that the two types must be _exactly_ the same.
https://hpmuseum.org/forum/thread-15814.html
Calculators and numerical differentiation
10-30-2020, 09:57 PM
Post: #1
robve Senior Member Posts: 360 Joined: Sep 2020
Calculators and numerical differentiation
Came across this article that might be of interest to this forum: "calculators and numerical differentiation" http://blog.damnsoft.org/tag/fx-880p/
Some ways vintage calculators approximated differentiation "the right and wrong way".
- Rob
"I count on old friends" -- HP 71B,Prime|Ti VOY200,Nspire CXII CAS|Casio fx-CG50...|Sharp PC-G850,E500,2500,1500,14xx,13xx,12xx...
10-30-2020, 11:41 PM
Post: #2
Paul Dale Senior Member Posts: 1,770 Joined: Dec 2013
RE: Calculators and numerical differentiation
The WP 34S doesn't use either of the methods discussed. It uses an order-10 method for its derivative, with fallbacks to order 6 and order 4 if the function doesn't evaluate properly. For the second derivative it again uses an order-10 method, with a fallback to an order-4 method. The order-10 methods are a weighted summation of $$f(x \pm 1)$$, $$f(x \pm 2)$$, $$f(x \pm 3)$$, $$f(x \pm 4)$$ and $$f(x \pm 5)$$.
There are good reasons for not wanting to evaluate the function at the x specified.
10-31-2020, 01:20 AM
Post: #3
Albert Chan Senior Member Posts: 2,103 Joined: Jul 2018
RE: Calculators and numerical differentiation
(10-30-2020 09:57 PM)robve Wrote: Came across this article that might be of interest to this forum: "calculators and numerical differentiation" http://blog.damnsoft.org/tag/fx-880p/
The author suggested Casio is using the central difference formula, based on the Casio CFX-9×50 manual.
On closer reading, the manual only *illustrates* what a central difference is.
Using the example f(x)=1/x, a = 0.001, h = 0.0001
f'(a) ≈ (f(a+h) - f(a-h)) / (2h) = -1/(a²-h²) < -1/a²
Casio fx-570MS: d/dx(1/x, 0.001, 0.0001) = -999974.6848 > -1/a²
Casio fx-115ES+: d/dx(1/x, 0.001) = -999999.999994767 > -1/a²
This suggests Casio is not using the central difference formula as-is.
Something more is involved ...
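A quick numeric check (a minimal Python sketch) of the plain central-difference value for this example:

```python
a, h = 0.001, 0.0001
f = lambda x: 1 / x

cd = (f(a + h) - f(a - h)) / (2 * h)  # central difference
exact = -1 / (a**2 - h**2)            # algebraic value -1/(a² - h²)

# cd ≈ exact ≈ -1010101.01, more negative than -1/a² = -1000000,
# while both Casio results quoted above are *less* negative than -1/a².
print(cd, exact, -1 / a**2)
```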
11-01-2020, 05:39 AM
Post: #4
Wes Loewer Senior Member Posts: 416 Joined: Jan 2014
RE: Calculators and numerical differentiation
Instead of averaging the backward and forward differences, why not just check whether f(a) exists, and if it does, use the central difference method?
Notice in the TI manual that the default epsilon can be overridden. Same goes for their 8x numeric models.
The non-CAS Nspire apparently has a bit of CAS hidden under the hood because it does not use this approximation. It appears to evaluate the derivative symbolically and then evaluates that expression numerically, keeping the CAS carefully hidden from the user.
(Had anyone previously seen the del operator used for the backwards difference as shown in the Casio manual?)
11-01-2020, 05:39 PM
Post: #5
Albert Chan Senior Member Posts: 2,103 Joined: Jul 2018
RE: Calculators and numerical differentiation
(11-01-2020 05:39 AM)Wes Loewer Wrote: Instead of averaging the backwards and forwards differences, why not just check to see if f(a) exists and if it does then do the central difference method?
Try taking the derivative of f(x) = |x| at x = 0.
f(0) = |0| = 0, thus exist.
Central difference slope = (|0+h| - |0-h|)/(2h) = 0
Forward difference slope = (h-0)/h = 1
Backward difference slope = (0-h)/h = -1
The two slopes do not match. f'(0) does not exist.
Oh ... Casio does not check this, d/dx(√(X²), 0) → 0
Quote:The non-CAS Nspire apparently has a bit of CAS hidden under the hood because it does not use this approximation. It appears to evaluate the derivative symbolically and then evaluates that expression numerically, keeping the CAS carefully hidden from the user.
How did you deduce there is hidden CAS under non-CAS Nspire ?
May be an example ?
Quote:Had anyone previously seen the del operator used for the backwards difference as shown in the Casio manual?
I think many text use the same symbols (or equivalent):
Ref: Fundamentals of Numerical Analysis, by Stephen Kellison
Code:
Operators: Δf(x) = f(x+1) - f(x) // forward difference ∇f(x) = f(x) - f(x-1) // backward difference Ef(x) = f(x+1) // stepping δf(x) = f(x+½) - f(x-½) // central difference µf(x) = (f(x+½)+f(x-½))/2 // average Df(x) = f'(x) // derivative
Or, operator notation: $$Δ=E-1,\quad ∇=1-E^{-1}$$
Combining µδ, we have the 3-point central difference slope formula
$$hD ≈ µδ = \large\left({E^{½} + E^{-½} \over 2}\right) \normalsize(E^{½}-E^{-½}) = \large{E\;-\;E^{-1} \over 2} = \large{Δ\;+\;∇ \over 2}$$
For more accuracy, we can add more terms: (note, there is no even powers of δ )
$$hD ≡ µδ \large\left(1 - {δ^2\over 6} + {δ^4\over 30} - {δ^6\over 140} + {δ^8\over 630} - {δ^{10}\over 2772} + {δ^{12}\over 12012} - {δ^{14}\over 51480} \;+\; ... \right)$$
(10-30-2020 11:41 PM)Paul Dale Wrote: The order 10 methods are a weighted summation of $$f(x \pm 1)$$, $$f(x \pm 2)$$, $$f(x \pm 3)$$, $$f(x \pm 4)$$ and $$f(x \pm 5)$$.
Let's build D (order 10), without using a central difference table.
XCas> c := E^(1/2) - E^(-1/2); // central difference
XCas> m := (E^(1/2) + E^(-1/2))/2; // mean
XCas> [mc, cc] := expand(simplify([m,c] .* c)) → [E/2 - 1/(E*2) , E - 2 + 1/E]
Note the symmetry of m*c and c*c. This suggests D also has a similar symmetry
XCas> simplify(mc/h * horner([1/630, -1/140, 1/30, -1/6, 1], cc))
$$→ D = \large \frac {2100(E-E^{-1})\; -\;600(E^2-E^{-2})\; +\;150(E^3-E^{-3})\; -\;25(E^4-E^{-4})\; +\;2(E^5-E^{-5})} {2520h}$$
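For concreteness, the order-10 formula above can be sketched in a few lines of Python (h here is an illustrative fixed step size, not the WP 34S's actual step selection):

```python
import math

def d10(f, x, h=0.1):
    """Order-10 central-difference first derivative.
    A weighted sum of f(x±h) ... f(x±5h); f(x) itself is never evaluated."""
    s = lambda k: f(x + k * h) - f(x - k * h)
    return (2100*s(1) - 600*s(2) + 150*s(3) - 25*s(4) + 2*s(5)) / (2520 * h)

print(d10(math.sin, 0.0))  # ≈ 1.0 = cos(0)
print(d10(abs, 0.0))       # 0.0, even though |x| is not differentiable at 0
```

Because the stencil is symmetric, an even function like |x| yields a slope of exactly zero at 0, illustrating the earlier point that a symmetric difference alone cannot detect a missing derivative.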
11-01-2020, 11:43 PM
Post: #6
Albert Chan Senior Member Posts: 2,103 Joined: Jul 2018
RE: Calculators and numerical differentiation
(11-01-2020 05:39 PM)Albert Chan Wrote: For more accuracy, we can add more terms: (note, there is no even powers of δ )
$$hD ≡ µδ \large\left(1 - {δ^2\over 6} + {δ^4\over 30} - {δ^6\over 140} + {δ^8\over 630} - {δ^{10}\over 2772} + {δ^{12}\over 12012} - {δ^{14}\over 51480} \;+\; ... \right)$$
We can show Df(0) = f'(0) does not require calculating f(0).
In other words, the operator form will not have a constant term (the "1" operator).
From previous post, we have µδ = (E-1/E)/2, δδ = (E+1/E) - 2
Doing "operator" mathematics, with x = log(E), we have:
µδ = sinh(x)
δδ = 2*cosh(x) - 2
Hyperbolics identities:
(1): cosh(z1)*cosh(z2) = (cosh(z1 - z2) + cosh(z1 + z2)) / 2
(2): sinh(z1)*cosh(z2) = (sinh(z1 - z2) + sinh(z1 + z2)) / 2
hD = sinh(x) * (k1 + k2*cosh(x) + k3*cosh(x)^2 + k4*cosh(x)^3 + ...) // apply (1)
= sinh(x) * (k1' + k2'*cosh(x) + k3'*cosh(2x) + k4'*cosh(3x) + ... ) // apply (2)
= k1''*sinh(x) + k2''*sinh(2x) + k3''*sinh(3x) + k4''*sinh(4x) + ...
$$\sinh(nx) = (E^n - E^{-n})/2$$
→ this explains why the $$E^n$$ coefs are the negatives of the $$E^{-n}$$ coefs.
→ RHS terms will not generate a constant term (i.e., no "1" operator)
→ D does not require calculating f(0)
→ same for odd powers of D, since the RHS is still a linear combination of sinh's.
11-03-2020, 06:09 PM
Post: #7
Wes Loewer Senior Member Posts: 416 Joined: Jan 2014
RE: Calculators and numerical differentiation
(11-01-2020 05:39 PM)Albert Chan Wrote: How did you deduce there is hidden CAS under non-CAS Nspire ?
May be an example ?
I may have made a leap in my logic, but the fact that examples such as |x| or 1/x at x=0 or at x=0.0001 produce the correct answer on the non-CAS Nspire while these are incorrect on the 84+ led me to believe that the non-CAS model must be doing something CAS-like under the hood. I had never come across a counter-example. It also made sense to me that it would be very easy to share the same code as the Nspire CAS for such calculations.
You prompted me to look in the Nspire manual which gives some insight.
Quote:nDerivative()
Returns the numerical derivative calculated using auto differentiation methods.
...
Note: The nDerivative() algorithm has a limitiation [sic]: it works recursively through the unsimplified expression, computing the numeric value of the first derivative (and second, if applicable) and the evaluation of each subexpression, which may lead to an unexpected result.
Consider the example on the right. The first derivative of x•(x^2+x)^(1/3) at x=0 is equal to 0. However, because the first derivative of the subexpression (x^2+x)^(1/3) is undefined at x=0, and this value is used to calculate the derivative of the total expression, nDerivative() reports the result as undefined and displays a warning message.
If you encounter this limitation, verify the solution graphically. You can also try using centralDiff().
(The Nspire has a centralDiff() function as well that behaves like the 84+.)
The part starting with "Note: " is not in the Nspire CAS manual. The Nspire CAS gives the correct answer for this example.
When I first saw the above example, I thought that I must have been wrong about the numeric model having an internal CAS since the numeric model does not give the correct answer while the CAS model does. However, now that I'm reading it again, I'm thinking that I may have been right after all. The fact that it says "the subexpression (x^2+x)^(1/3) is undefined at x=0" means that the calculator must be breaking the expression down into subexpressions and evaluating their derivatives (consistent with the product rule) which means that it must have some CAS capabilities rather than just evaluating the whole expression numerically.
So my current thinking is that the Nspire must have at least some CAS capabilities under the hood, but not to the extent as the Nspire CAS.
Thoughts?
11-03-2020, 06:55 PM (This post was last modified: 11-03-2020 06:56 PM by CMarangon.)
Post: #8
CMarangon Member Posts: 107 Joined: Oct 2020
RE: Calculators and numerical differentiation
Hello!
Nice post!
I suggest you increase the font size, because many people access our pages on cell phones, and others, like me, don't like to wear glasses. :-)
Carlos - Brazil
Time Zone: GMT -3
http://area48.com
11-03-2020, 10:14 PM (This post was last modified: 11-04-2020 12:48 AM by Albert Chan.)
Post: #9
Albert Chan Senior Member Posts: 2,103 Joined: Jul 2018
RE: Calculators and numerical differentiation
Slightly off topic: for f(x) = x*g(x), getting f'(0) is easier by taking the limit directly.
$$f(x) = x·g(x) = x·\sqrt[3]{x^2+x}$$
$$f'(0) = \displaystyle{\lim_{h \to 0}} {f(h)-f(0)\over h} = \displaystyle{\lim_{h \to 0}}\; g(h) = g(0) = 0$$
Taking the derivative takes more work, and it is easier to make mistakes.
f' = (x*g)' = g + x*g'
g has the form z^n, where z = x²+x, n = 1/3
g' = (n*z^(n-1)) * z' = (n*g) * (z'/z) = g/3 * (2x+1)/(x²+x)
f' = g + g/3 * (2x+1)/(x+1) = g * (5x+4)/(3x+3)
f'(0) = g(0) * 4/3 = 0
---
Another way, by shape of the curve.
$$f(x) = x·\sqrt[3]{x^2+x} = \large\frac{\sqrt[3]{(x^2+x)^4}}{x+1}$$
f(ε) > 0, f(-ε) > 0, f(0) = 0
→ f(0) is local minimum
→ f'(0) = 0
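A numeric spot check of f'(0) = 0 in Python (a minimal sketch; the sign-aware cube root keeps the result real for negative arguments):

```python
import math

def cbrt(z):
    # real cube root; z ** (1/3) would return a complex value for z < 0
    return math.copysign(abs(z) ** (1 / 3), z)

f = lambda x: x * cbrt(x * x + x)  # f(x) = x * (x^2 + x)^(1/3)

h = 1e-5
slope = (f(h) - f(-h)) / (2 * h)   # central difference at 0
print(slope)  # ≈ 0, consistent with f'(0) = 0
```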
11-04-2020, 04:04 PM
Post: #10
Wes Loewer Senior Member Posts: 416 Joined: Jan 2014
RE: Calculators and numerical differentiation
(11-03-2020 06:55 PM)CMarangon Wrote: I suggest you increase the font size, because many people access our pages on cell phones, and others, like me, don't like to wear glasses. :-)
I'm not sure what you are seeing. My post and all the other posts have the same font size. I checked on my phone as well and they all look the same.
11-04-2020, 04:14 PM
Post: #11
Wes Loewer Senior Member Posts: 416 Joined: Jan 2014
RE: Calculators and numerical differentiation
(11-03-2020 10:14 PM)Albert Chan Wrote: Slightly off topics, for f(x) = x*g(x), getting f'(0) is easier taking limit directly.
$$f(x) = x·g(x) = x·\sqrt[3]{x^2+x}$$
$$f'(0) = \displaystyle{\lim_{h \to 0}} {f(h)-f(0)\over h} = \displaystyle{\lim_{h \to 0}}\; g(h) = g(0) = 0$$
If you pull out an $$x$$, then $$x \cdot (x^2+x)^{1/3}$$ becomes $$x^{4/3} \cdot (x+1)^{1/3}$$, which the non-CAS Nspire handles correctly.
http://elib.mi.sanu.ac.rs/pages/browse_issue.php?db=mm&rbr=20
eLibrary of the Mathematical Institute of the Serbian Academy of Sciences and Arts
> Home / All Journals / Journal /
Mathematica Moravica
Publisher: Faculty of Technical Sciences Čačak, University of Kragujevac, Čačak
ISSN: 1450-5932
Issue: 15_2 (2011)
Well-Posedness of Fixed Point Problem for a Multifunction Satisfying an Implicit Relation (pp. 1-9), by Mohamed Akkouchi and Valeriu Popa
Keywords: Well-posedness of fixed point problem for a multifunction; strict fixed points; implicit relations; orbitally complete metric spaces. MSC: 54H25; 47H10. DOI: 10.5937/MatMor1102001A
Related Fixed Point Theorems for Three Metric Spaces, II (pp. 11-17), by Sampada Navshinde, Dr. J. Achari and Brian Fisher
Keywords: Complete metric space; compact metric space; related fixed point. MSC: 54H25. DOI: 10.5937/MatMor1102011N
Continued Fractions Expansion of $\sqrt{D}$ and Pell Equation $x^{2}-Dy^{2}=1$ (pp. 19-27), by Ahmet Tekcan
Keywords: Pell equation; solutions of the Pell equation; continued fractions. MSC: 11D04; 11D09; 11D75. DOI: 10.5937/MatMor1102019T
Some Results for Fuzzy Maps Under Nonexpansive Type Condition (pp. 29-39), by Sweetee Mishra, R.K. Namdeo and Brian Fisher
Keywords: Fuzzy maps; common fixed point; non-expansive map. DOI: 10.5937/MatMor1102029M
The Thy-Angle and g-Angle in a Quasi-Inner Product Space (pp. 41-46), by Pavle Miličić
Keywords: Quasi-inner product space; Thy-angle; g-angle. MSC: 46B20; 46C15; 51K05. DOI: 10.5937/MatMor1102041M
A Note on the Zeros of One Form of Composite Polynomials (pp. 47-49), by Dragomir Simeunović
Keywords: Roots of algebraic equations; upper bounds for roots moduli. MSC: 12D10. DOI: 10.5937/MatMor1102047S
A Remark on One Family of Iterative Formulas (pp. 51-53), by Dragomir Simeunović
Keywords: Iteration formulas; approximate solutions of equations. MSC: 65H05. DOI: 10.5937/MatMor1102051S
Principles of Transpose in the Fixed Point Theory for Cone Metric Spaces (pp. 55-63), by Milan Tasković
Keywords: Coincidence points; common fixed points; cone metric spaces; Principles of Transpose; Banach's contraction principle; numerical and nonnumerical distances; characterizations of contractive mappings; Banach's mappings; nonnumerical transversals. MSC: 47H10; 54H25; 54C60; 46B40. DOI: 10.5937/MatMor1102055T
On a Statement by I. Aranđelović for Asymptotic Contractions in Appl. Anal. Discrete Math. (pp. 65-68), by Milan Tasković
Keywords: Metric and topological spaces; TCS-convergence; complete spaces; contraction; asymptotic contraction; nonlinear conditions for fixed points; Kirk's theorem for asymptotic contractions; Tasković's characterizations of asymptotic conditions for fixed points. MSC: 54E15; 47H10; 05A15; 54E35; 54H25. DOI: 10.5937/MatMor1102065T
A Question of Priority Regarding a Fixed Point Theorem in a Cartesian Product of Metric Spaces (pp. 69-71), by Milan Tasković
Keywords: Kuratowski's problem; fixed points; Cartesian product; complete metric spaces; Cauchy's sequence; Banach's principle of contraction. MSC: 47H10; 54H25. DOI: 10.5937/MatMor1102069T
https://electronics.stackexchange.com/questions/320426/non-linear-amplification-circuit
# Non-linear amplification circuit
I've got a few questions about a task from my exam preparation paper. The sketch below shows the circuit in question. It is given that U_E varies between 0 V and 3 V. The operating voltage U_B is 10 V. The op amp and diode are ideal at the beginning (the problem does not say at which point they may no longer be treated as ideal). The four tasks are the following:
1. Give an algebraic expression for the differential amplification $v_1=\frac{dU_A}{dU_E}$
My solution: $v_1=1+\frac{R_2}{R_1}$ (non-inverting op amp)
2. What is U_E's value such that D1 is conducting? What is U_A then? Give algebraic expressions for U_E and U_A.
My solution: $U_E > U_B\cdot\frac{R_3}{R_3+R_4}$, thus $U_A > U_B\cdot\frac{R_3}{R_3+R_4}\cdot\left(1+\frac{R_2}{R_1}\right)$
3. What is the differential amplification $v_2=\frac{dU_A}{dU_E}$ if D1 is conducting? Give an algebraic expression.
My solution: $\frac{dU_A}{dU_E}=\frac{R_2}{R_4}+\frac{R_2}{R_3}+1+\frac{R_2}{R_1}$
4. U_B is 10V. At which value U_E does U_A saturate? Give an algebraic expression for U_E. What is the differential voltage u_d as a function of U_E if saturation occurs?
I don't really know how to solve this task so I'd be glad if anyone could help me out.
Moreover I am not quite sure about my solution to question 2 and 3.
(Schematic created using CircuitLab.)
• Please consider making your post more clear by writing the equations in MathJax. Note EE uses \$ to start and end inline MathJax to avoid confusion when talking about dollar amounts. (We still use $$ for display equations.) – The Photon Jul 25 '17 at 16:41
• #2 is correct if your definition of an ideal diode is 0V forward voltage. You may want to clarify the forward voltage of an ideal diode with the teacher. For #3, if D1 is an ideal diode, just replace it with a short circuit. For #4 just set up an equation for U_A < 10V. Once the op amp saturates, it can no longer change the inverting input to match the non-inverting input. – DavidG25 Jun 19 '18 at 20:09
https://docs.eyesopen.com/toolkits/python/oechemtk/OEChemClasses/oemolostream.html
oemolostream¶
class oemolostream : public oemolstreambase
The oemolostream class provides a stream-like abstraction for writing molecules to files, streams or standard output. The oemolostream maintains the format and flavor of the writer used when writing molecules. The oemolostream is capable of compressing gzip files while writing.
The following methods are publicly inherited from oemolstreambase:
Constructors¶
oemolostream()
Creates a new oemolostream object that is connected to standard output.
oemolostream(const char *fname)
oemolostream(const std::string &fname)
Creates a new oemolostream object and opens the file specified by the given name (‘fname’).
oemolostream(OEPlatform::oeostream *, bool owned=true)
Creates a new oemolostream object from the existing oeostream object. The second optional argument is used to indicate whether the new oemolostream now owns the given oeostream and is therefore responsible for closing and destroying it when it itself is closed and/or destroyed.
Warning
Only in C++ can an oemolostream object own an existing oeostream object.
Usage in Python

from openeye import oechem

i = oechem.oeostream()
owned = True
ofs = oechem.oemolostream(i, not owned)
To associate a file or a stream with an oemolostream after it has been created, see the oemolstreambase.open method.
operator bool¶
operator bool() const
operator OEPlatform::oeostream &¶
operator OEPlatform::oeostream &() const
GetString¶
std::string GetString(void)
Returns the contents of the in-memory buffer associated with the oemolostream object. The oemolostream object must previously have been opened for writing to an in-memory buffer using the oemolostream.openstring method.
SetFlavor¶
bool SetFlavor(unsigned int format, unsigned int flavor)
Sets the file flavor for a given format associated with the oemolostream object. The set of valid formats is defined in the OEFormat namespace. The set of valid bitmask flavors is defined in the OEOFlavor namespace. The current flavor can be queried using the oemolstreambase.GetFlavor method. Each format has its own specific flavor, which must be set separately. The oemolostream constructors call the OESetDefaultFlavors function to set the flavors for all of the formats to their default state.
SetFormat¶
bool SetFormat(unsigned int format)
Sets the file format associated with the oemolostream object. The set of valid formats is defined in the OEFormat namespace. By default, when writing to standard output, the associated file format is OEFormat_SMI. The file format property of an oemolostream may be retrieved using the oemolstreambase.GetFormat method.
Note
The file format property is also set automatically by oemolstreambase.open based upon the file extension of the specified filename.
SetString¶
void SetString(const char *c)
void SetString(const unsigned char *c, oesize_t size)
Allows the oemolostream to be opened for writing to a buffer and specify the initial contents of the buffer.
The contents of in-memory buffer when writing to a string may be obtained using the oemolostream.GetString method.
Setgz¶
bool Setgz(bool gz, OEPlatform::oeostream *sptr=0, bool=false)
Specifies that the contents of the oemolostream object should be GNU gzip compressed. The compression takes place on the fly. Usually, the 'gz' property of an oemolostream object is determined implicitly from the file extension used to open the stream for writing. The current 'gz' property of the oemolostream object can be retrieved using the oemolstreambase.Getgz method.
close¶
void close()
Closes the oemolostream object. This method may be safely called multiple times.
flush¶
oemolostream &flush()
openstring¶
bool openstring()
bool openstring(bool gzip)
Allows the oemolostream to write to a buffer in memory, instead of to a file or standard output.
Calling oemolostream.openstring instructs the oemolostream object to start accumulating output in a buffer, which can later be retrieved by calling the oemolostream.GetString method.
The form that accepts a ‘gzip’ argument can be used to instruct the oemolostream object to compress the contents of the in-memory buffer.
put¶
oemolostream &put(char c)
tell¶
oefpos_t tell()
write¶
oemolostream &write(const char *str, oesize_t size)
https://homework.zookal.com/questions-and-answers/a-car-rounds-a-bend-which-has-a-radius-of-124823056
# Question: a car rounds a bend which has a radius of...
###### Question details
A car rounds a bend which has a radius of 200 m. At a particular instant, the car has a velocity of 60 km/h. The speed of the car is increasing at a rate of 5 m/s². Determine
(i) the normal and tangential components of the acceleration of the car,
(ii) the magnitude and direction of the resultant acceleration,
(iii) how far around the bend the car will travel before its speed has increased to 20 m/s.
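A worked numeric check of the three parts (a minimal Python sketch; it assumes the tangential acceleration stays constant at 5 m/s²):

```python
import math

v0 = 60 / 3.6         # 60 km/h in m/s
r, a_t = 200.0, 5.0   # bend radius (m), tangential acceleration (m/s^2)

a_n = v0**2 / r                              # (i) normal (centripetal) component
a = math.hypot(a_t, a_n)                     # (ii) resultant magnitude
theta = math.degrees(math.atan2(a_n, a_t))   # (ii) angle measured from the tangent

v1 = 20.0
s = (v1**2 - v0**2) / (2 * a_t)              # (iii) from v1² = v0² + 2·a_t·s

print(round(a_n, 2), round(a, 2), round(theta, 1), round(s, 2))
# 1.39 5.19 15.5 12.22
```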
http://quantumfrontiers.com/category/theoretical-highlights/
# Toward physical realizations of thermodynamic resource theories
The thank-you slide of my presentation remained onscreen, and the question-and-answer session had begun. I was presenting a seminar about thermodynamic resource theories (TRTs), models developed by quantum-information theorists for small-scale exchanges of heat and work. The audience consisted of condensed-matter physicists who studied graphene and photonic crystals. I was beginning to regret my topic’s abstractness.
The question-asker pointed at a listener.
“This is an experimentalist,” he continued, “your arch-nemesis. What implications does your theory have for his lab? Does it have any? Why should he care?”
I could have answered better. I apologized that quantum-information theorists, reared on the rarefied air of Dirac bras and kets, had developed TRTs. I recalled the baby steps with which science sometimes migrates from theory to experiment. I could have advocated for bounding, with idealizations, efficiencies achievable in labs. I should have invoked the connections being developed with fluctuation results, statistical mechanical theorems that have withstood experimental tests.
The crowd looked unconvinced, but I scored one point: The experimentalist was not my arch-nemesis.
“My new friend,” I corrected the questioner.
His question has burned in my mind for two years. Experiments have inspired, but not guided, TRTs. TRTs have yet to drive experiments. Can we strengthen the connection between TRTs and the natural world? If so, what tools must resource theorists develop to predict outcomes of experiments? If not, are resource theorists doing physics?
A Q&A more successful than mine.
I explore answers to these questions in a paper released today. Ian Durham and Dean Rickles were kind enough to request a contribution for a book of conference proceedings. The conference, “Information and Interaction: Eddington, Wheeler, and the Limits of Knowledge,” took place at the University of Cambridge (including a graveyard thereof), thanks to FQXi (the Foundational Questions Institute).
“Proceedings are a great opportunity to get something off your chest,” John said.
That seminar Q&A had sat on my chest, like a pet cat who half-smothers you while you’re sleeping, for two years. Theorists often justify TRTs with experiments.* Experimentalists, an argument goes, are probing limits of physics. Conventional statistical mechanics describe these regimes poorly. To understand these experiments, and to apply them to technologies, we must explore TRTs.
Does that argument not merit testing? If experimentalists observe the extremes predicted with TRTs, then the justifications for, and the timeliness of, TRT research will grow.
Something to get off your chest. Like the contents of a conference-proceedings paper, according to my advisor.
You’ve read the paper’s introduction, the first eight paragraphs of this blog post. (Who wouldn’t want to begin a paper with a mortifying anecdote?) Later in the paper, I introduce TRTs and their role in one-shot statistical mechanics, the analysis of work, heat, and entropies on small scales. I discuss whether TRTs can be realized and whether physicists should care. I identify eleven opportunities for shifting TRTs toward experiments. Three opportunities concern what merits realizing and how, in principle, we can realize it. Six adjustments to TRTs could improve TRTs’ realism. Two more-out-there opportunities, though less critical to realizations, could diversify the platforms with which we might realize TRTs.
One opportunity is the physical realization of thermal embezzlement. TRTs, like thermodynamic laws, dictate how systems can and cannot evolve. Suppose that a state $R$ cannot transform into a state $S$: $R \not\mapsto S$. An ancilla $C$, called a catalyst, might facilitate the transformation: $R + C \mapsto S + C$. Catalysts act like engines used to extract work from a pair of heat baths.
Engines degrade, so a realistic transformation might yield $S + \tilde{C}$, wherein $\tilde{C}$ resembles $C$. For certain definitions of “resembles,”** TRTs imply, one can extract arbitrary amounts of work by negligibly degrading $C$. Detecting the degradation—the work extraction’s cost—is difficult. Extracting arbitrary amounts of work at a difficult-to-detect cost contradicts the spirit of thermodynamic law.
The spirit, not the letter. Embezzlement seems physically realizable, in principle. Detecting embezzlement could push experimentalists’ abilities to distinguish between close-together states $C$ and $\tilde{C}$. I hope that that challenge, and the chance to violate the spirit of thermodynamic law, attracts researchers. Alternatively, theorists could redefine “resembles” so that $C$ doesn’t rub the law the wrong way.
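As a concrete illustration of that "resembles" question (my own sketch, not taken from the paper), here is the standard way to quantify closeness between a catalyst $C$ and its degraded version $\tilde{C}$: the trace distance between their density matrices. The particular matrices below are invented for illustration.

```python
import numpy as np

def trace_distance(rho, sigma):
    """Trace distance (1/2)||rho - sigma||_1, computed from the eigenvalues
    of the Hermitian difference rho - sigma."""
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * float(np.sum(np.abs(eigs)))

# A catalyst C and a slightly degraded version C-tilde, as qubit density
# matrices. These particular states are made up for illustration.
C = np.array([[0.5, 0.0], [0.0, 0.5]])
C_tilde = np.array([[0.5 + 1e-3, 0.0], [0.0, 0.5 - 1e-3]])

print(trace_distance(C, C_tilde))  # 0.001: very hard to detect the degradation
```

A trace distance this small is operationally the probability advantage of the best possible measurement for telling the two states apart, which is why detecting embezzlement would push experimentalists' state-discrimination abilities.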
The paper’s broadness evokes a caveat of Arthur Eddington’s. In 1927, Eddington presented Gifford Lectures entitled The Nature of the Physical World. Being a physicist, he admitted, “I have much to fear from the expert philosophical critic.” Specializing in TRTs, I have much to fear from the expert experimental critic. The paper is intended to point out, and to initiate responses to, the lack of physical realizations of TRTs. Some concerns are practical; some, philosophical. I expect and hope that the discussion will continue…preferably with more cooperation and charity than during that Q&A.
If you want to continue the discussion, drop me a line.
*So do theorists-in-training. I have.
**A definition that involves the trace distance.
# Bits, Bears, and Beyond in Banff: Part Deux
You might remember that about one month ago, Nicole blogged about the conference Beyond i.i.d. in information theory and told us about bits, bears, and beyond in Banff. I was very pleased that Nicole did so, because this conference has become one of my favorites in recent years (ok, it’s my favorite). You can look at her post to see what is meant by “Beyond i.i.d.” The focus of the conference includes cutting-edge topics in quantum Shannon theory, and the conference still has a nice “small world” feel to it (for example, the most recent edition and the first one featured a music session from participants). Here is a picture of some of us having a fun time:
Will Matthews, Felix Leditzky, me, and Nilanjana Datta (facing away) singing “Jamaica Farewell”.
The Beyond i.i.d. series has shaped a lot of the research going on in this area and has certainly affected my own research directions. The first Beyond i.i.d. was held in Cambridge, UK in January 2013, organized by Nilanjana Datta and Renato Renner. It had a clever logo, featuring cyclists of various sorts biking one after another, the first few looking the same and the ones behind them breaking out of the i.i.d. pattern:
It was also at the Cambridge edition that the famous entropy zoo first appeared, which has now been significantly updated, based on recent progress in the area. The next Beyond i.i.d. happened in Singapore in May 2014, organized by Marco Tomamichel, Vincent Tan, and Stephanie Wehner. (Stephanie was a recent “quantum woman” for her work on a loophole-free Bell experiment.)
The tradition continued this past summer in beautiful Banff, Canada. I hope that it goes on for a long time. At least I have next year's to look forward to, which will be in beachy Barcelona in the summertime, planned (as of now) to be just one week before Alexander Holevo presents the Shannon lecture at the ISIT 2016 conference, also in Barcelona (by the way, this is the first time that a quantum information theorist has won the prestigious Shannon award).
So why am I blabbing on and on about the Beyond i.i.d. conference if Nicole already wrote a great summary of the Banff edition this past summer? Well, she didn’t have room in her blog post to cover one of my favorite topics that was discussed at my favorite conference, so she graciously invited me to discuss it here.
The driving question of my new favorite topic is “What is the right notion of a quantum Markov chain?” The past year or so has seen some remarkable progress in this direction. To motivate it, let’s go back to bears, and specifically bear attacks (as featured in Nicole’s post). In Banff, the locals there told us that they had never heard of a bear attacking a group of four or more people who hike together. But let’s suppose that Alice, Bob, and Eve ignore this advice and head out together for a hike in the mountains. Also, in a different group are 50 of Alice’s sisters, but the park rangers are focusing their binoculars on the group of three (Alice, Bob, and Eve), observing their movements, because they are concerned that a group of three will run into trouble.
In the distance, there is a clever bear observing the movements of Alice, Bob, and Eve, and he notices some curious things. If he looks at Alice and Bob’s movements alone, they appear to take each step randomly, but for the most part together. That is, their steps appear correlated. He records their movements for some time and estimates a probability distribution $p(a,b)$ that characterizes their movements. However, if he considers the movements of Alice, Bob, and Eve all together, he realizes that Alice and Bob are really taking their steps based on what Eve does, who in turn is taking her steps completely at random. So at this point the bear surmises that Eve is the mediator of the correlations observed between Alice and Bob’s movements, and when he writes down an estimate for the probability distribution $p(a,b,e)$ characterizing all three of their movements, he notices that it factors as $p(a,b,e) = p(a|e) p(b|e) p(e)$. That is, the bear sees that the distribution forms a Markov chain.
“What is an important property of such a Markov chain?”, asks the bear. Well, neglecting Alice’s movements (summing over the $a$ variable), the probability distribution reduces to $p(b|e) p(e)$, because $p(a|e)$ is a conditional probability distribution. A characteristic of a Markov probability distribution is that one could reproduce the original distribution $p(a,b,e)$ simply by acting on the $e$ variable of $p(b|e) p(e)$ with the conditional probability distribution $p(a|e)$. So the bear realizes that it would be possible for Alice to be lost and subsequently replaced by Eve calling in one of Alice’s sisters, such that nobody else would notice anything different from before — it would appear as if the movements of all three were unchanged once this replacement occurs.
Salivating at his realization, the bear takes Eve briefly aside without any of the others noticing. The bear explains that he will not eat Eve and will instead eat Alice if Eve can call in one of Alice’s sisters and direct her movements to be chosen according to the distribution $p(a|e)$. Eve, realizing that her options are limited (ok, ok, maybe there are other options…), makes a deal with the bear. So the bear promptly eats Alice, and Eve draws in one of Alice’s sisters, whom Eve then directs to walk according to the distribution $p(a|e)$. This process repeats, going on and on, and all the while, the park rangers, focusing exclusively on the movements of the party of three, don’t think anything of what’s going on because they observe that the joint distribution $p(a,b,e)$ describing the movements of “Alice,” Bob, and Eve never seems to change (let’s assume that the actions of the bear and Eve are very fast :) ). So the bear is very satisfied after eating Alice and some of her sisters, and Eve is pleased not to be eaten, at the same time never having cared too much for Alice or any of her sisters.
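If you'd like to check the bear's reasoning numerically, here is a small sketch (the distributions are randomly generated; only the factorized Markov structure matters):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Markov distribution p(a,b,e) = p(a|e) p(b|e) p(e), with three
# possible values for each variable.
p_e = rng.dirichlet(np.ones(3))               # p(e)
p_a_given_e = rng.dirichlet(np.ones(3), 3)    # rows indexed by e: p(a|e)
p_b_given_e = rng.dirichlet(np.ones(3), 3)    # rows indexed by e: p(b|e)

p_abe = np.einsum('ea,eb,e->abe', p_a_given_e, p_b_given_e, p_e)

# The bear eats Alice: marginalize out a, leaving p(b,e) = p(b|e) p(e).
p_be = p_abe.sum(axis=0)

# Eve calls in a sister and directs her steps using p(a|e), acting on the
# e variable alone.
p_abe_rebuilt = np.einsum('ea,be->abe', p_a_given_e, p_be)

print(np.allclose(p_abe, p_abe_rebuilt))  # True: the rangers notice nothing
```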
A natural question arises: “What could Alice and Bob do to prevent this terrible situation from arising, in which Alice and so many of her sisters get eaten without the park rangers noticing anything?” Well, Alice and Bob could attempt to coordinate their movements independently of Eve’s. Even better, before heading out on a hike, they could make sure to have brought along several entangled pairs of particles (and perhaps some bear spray). If Alice and Bob choose their movements according to the outcomes of measurements of the entangled pairs, then it would be impossible for Alice to be eaten and the park rangers not to notice. That is, the distribution describing their movements could never be described by a Markov chain distribution of the form $p(a|e) p(b|e) p(e)$. Thus, in such a scenario, as soon as the bear attacks Alice and then Eve replaces her with one of her sisters, the park rangers would immediately notice something different about the movements of the party of three and then figure out what is going on. So at least Alice could save her sisters…
What is the lesson here? A similar scenario is faced in quantum key distribution. Eve and other attackers (such as a bear) might try to steal what is there in Alice’s system and then replace it with something else, in an attempt to go undetected. If the situation is described by classical physics, this would be possible if Eve had access to a “hidden variable” that dictates the actions of Alice and Bob. But according to Bell’s theorem or the monogamy of entanglement, it is impossible for a “hidden variable” strategy to mimic the outcomes of measurements performed on sufficiently entangled particles.
Since we never have perfectly entangled particles or ones whose distributions exactly factor as Markov chains, it would be ideal to quantify, for a given three-party quantum state of Alice, Bob, and Eve, how well one could recover from the loss of the Alice system by Eve performing a recovery channel on her system alone. This would help us to better understand approximate cases that we expect to appear in realistic scenarios. At the same time, we could have a more clear understanding of what constitutes an approximate quantum Markov chain.
Now due to recent results of Fawzi and Renner, we know that this quantification of quantum non-Markovianity is possible by using the conditional quantum mutual information (CQMI), a fundamental measure of information in quantum information theory. We already knew that the CQMI is non-negative when evaluated for any three-party quantum state, due to the strong subadditivity inequality, but now we can say more than that: If the CQMI is small, then Eve can recover well from the loss of Alice, implying that the reduced state of Alice and Bob’s system could not have been too entangled in the first place. Relatedly, if Eve cannot recover well from the loss of Alice, then the CQMI cannot be small. The CQMI is the quantity underlying the squashed entanglement measure, which in turn plays a fundamental role in characterizing the performance of realistic quantum key distribution systems.
Since the original results of Fawzi and Renner appeared on the arXiv, this topic has seen much activity in “quantum information land.” Here are some papers related to this topic, which have appeared in the past year or so (apologies if I missed your paper!):
Some of the papers are admittedly those of my collaborators and me, but hey!, please forgive me, I've been excited about the topic. We now know simpler proofs of the original Fawzi–Renner results and extensions of them that apply to the quantum relative entropy as well. Since the quantum relative entropy is such a fundamental quantity in quantum information theory, some of the above papers provide sweeping ramifications for many foundational statements in quantum information theory, including entanglement theory, quantum distinguishability, the Holevo bound, quantum discord, multipartite information measures, etc. Beyond i.i.d. had a day of talks dedicated to the topic, and I think we will continue seeing further developments in this area.
# Bits, bears, and beyond in Banff
Another conference about entropy. Another graveyard.
Last year, I blogged about the University of Cambridge cemetery visited by participants in the conference “Eddington and Wheeler: Information and Interaction.” We’d lectured each other about entropy–a quantification of decay, of the march of time. Then we marched to an overgrown graveyard, where scientists who’d lectured about entropy decades earlier were decaying.
This July, I attended the conference “Beyond i.i.d. in information theory.” The acronym “i.i.d.” stands for “independent and identically distributed,” which requires its own explanation. The conference took place at BIRS, the Banff International Research Station, in Canada. Locals pronounce “BIRS” as “burrs,” the spiky plant bits that stick to your socks when you hike. (I had thought that one pronounces “BIRS” as “beers,” over which participants in quantum conferences debate about the Measurement Problem.) Conversations at “Beyond i.i.d.” dinner tables ranged from mathematical identities to the hiking for which most tourists visit Banff to the bears we’d been advised to avoid while hiking. So let me explain the meaning of “i.i.d.” in terms of bear attacks.
The BIRS conference center. Beyond here, there be bears.
Suppose that, every day, exactly one bear attacks you as you hike in Banff. Every day, you have a probability $p_1$ of facing down a black bear, a probability $p_2$ of facing down a grizzly, and so on. These probabilities form a distribution $\{p_i\}$ over the set of possible events (of possible attacks). We call the type of attack that occurs on a given day a random variable. The distribution associated with each day equals the distribution associated with each other day. Hence the variables are identically distributed. The Monday distribution doesn't affect the Tuesday distribution and so on, so the distributions are independent.
Information theorists quantify efficiencies with which i.i.d. tasks can be performed. Suppose that your mother expresses concern about your hiking. She asks you to report which bear harassed you on which day. You compress your report into the fewest possible bits, or units of information. Consider the limit as the number of days approaches infinity, called the asymptotic limit. The number of bits required per day approaches a function, called the Shannon entropy $H_{\rm S}$, of the distribution:
Number of bits required per day $\to H_{\rm S}(\{p_i\})$.
The Shannon entropy describes many asymptotic properties of i.i.d. variables. Similarly, the von Neumann entropy $H_{\rm vN}$ describes many asymptotic properties of i.i.d. quantum states.
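In case you'd like to compute the bear-report compression rate yourself, a few lines suffice (the attack probabilities below are invented):

```python
import math

def shannon_entropy(p):
    """H_S({p_i}) = -sum_i p_i * log2(p_i), in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# A made-up daily attack distribution: black bear, grizzly, or a rarer bear.
p = [0.5, 0.25, 0.25]

print(shannon_entropy(p))  # 1.5 bits per day, in the asymptotic limit
```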
But you don’t hike for infinitely many days. The rate of black-bear attacks ebbs and flows. If you stumbled into grizzly land on Friday, you’ll probably avoid it, and have a lower grizzly-attack probability, on Saturday. Into how few bits can you compress a set of nonasymptotic, non-i.i.d. variables?
We answer such questions in terms of ɛ-smooth α-Rényi entropies, the sandwiched Rényi relative entropy, the hypothesis-testing entropy, and related beasts. These beasts form a zoo diagrammed by conference participant Philippe Faist. I wish I had his diagram on a placemat.
“Beyond i.i.d.” participants define these entropies, generalize the entropies, probe the entropies’ properties, and apply the entropies to physics. Want to quantify the efficiency with which you can perform an information-processing task or a thermodynamic task? An entropy might hold the key.
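As a toy illustration of one corner of the zoo (my own sketch), here is the $\alpha$-Rényi entropy of a classical distribution, a one-parameter family that interpolates between several of the beasts above:

```python
import math

def renyi_entropy(p, alpha):
    """alpha-Renyi entropy H_alpha = log2(sum_i p_i**alpha) / (1 - alpha),
    defined for alpha >= 0, alpha != 1; alpha -> 1 recovers Shannon."""
    return math.log2(sum(pi ** alpha for pi in p)) / (1 - alpha)

p = [0.5, 0.25, 0.25]
for alpha in (0.0, 0.5, 0.999, 2.0, 50.0):
    print(alpha, renyi_entropy(p, alpha))
# alpha -> 1 approaches the Shannon entropy (1.5 bits here); alpha -> infinity
# approaches the min-entropy -log2(max_i p_i) = 1 bit; alpha = 0 gives the
# max-entropy, log2 of the support size.
```

Smoothing (the "ɛ-smooth" part) then optimizes such an entropy over all distributions within distance ɛ of $\{p_i\}$, which is what makes these quantities robust enough for one-shot, non-i.i.d. statements.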
Many highlights distinguished the conference; I’ll mention a handful. If the jargon upsets your stomach, skip three paragraphs to Thermodynamic Thursday.
Aram Harrow introduced a resource theory that resembles entanglement theory but whose agents pay to communicate classically. Why, I interrupted him, define such a theory? The backstory involves a wager against quantum-information pioneer Charlie Bennett (more precisely, against an opinion of Bennett’s). For details, and for a quantum version of The Princess and the Pea, watch Aram’s talk.
Graeme Smith and colleagues “remove[d] the . . . creativity” from proofs that certain entropic quantities satisfy subadditivity. Subadditivity is a property that facilitates proofs and that offers physical insights into applications. Graeme & co. designed an algorithm for checking whether entropic quantity Q satisfies subadditivity. Just add water; no innovation required. How appropriate, conference co-organizer Mark Wilde observed. BIRS has the slogan “Inspiring creativity.”
Patrick Hayden applied one-shot entropies to AdS/CFT and emergent spacetime, enthused about elsewhere on this blog. Debbie Leung discussed approximations to Haar-random unitaries. Gilad Gour compared resource theories.
Conference participants graciously tolerated my talk about thermodynamic resource theories. I closed my eyes to symbolize the ignorance quantified by entropy. Not really; the photo didn’t turn out as well as hoped, despite the photographer’s goodwill. But I could have closed my eyes to symbolize entropic ignorance.
Thermodynamics and resource theories dominated Thursday. Thermodynamics is the physics of heat, work, entropy, and stasis. Resource theories are simple models for transformations, like from a charged battery and a Tesla car at the bottom of a hill to an empty battery and a Tesla atop a hill.
My advisor’s Tesla. No wonder I study thermodynamic resource theories.
Philippe Faist, diagrammer of the Entropy Zoo, compared two models for thermodynamic operations. I introduced a generalization of resource theories for thermodynamics. Last year, Joe Renes of ETH and I broadened thermo resource theories to model exchanges of not only heat, but also particles, angular momentum, and other quantities. We calculated work in terms of the hypothesis-testing entropy. Though our generalization won’t surprise Quantum Frontiers diehards, the magic tricks in my presentation might.
At twilight on Thermodynamic Thursday, I meandered down the mountain from the conference center. Entropies hummed in my mind like the mosquitoes I slapped from my calves. Rising from scratching a bite, I confronted the Banff Cemetery. Half-wild greenery framed the headstones that bordered the gravel path I was following. Thermodynamicists have associated entropy with the passage of time, with deterioration, with a fate we can’t escape. I seem unable to escape from brushing past cemeteries at entropy conferences.
Not that I mind, I thought while scratching the bite in Pasadena. At least I escaped attacks by Banff’s bears.
With thanks to the conference organizers and to BIRS for the opportunity to participate in “Beyond i.i.d. 2015.”
# Holography and the MERA
The AdS/MERA correspondence has been making the rounds of the blogosphere with nice posts by Scott Aaronson and Sean Carroll, so let’s take a look at the topic here at Quantum Frontiers.
The question of how to formulate a quantum theory of gravity is a long-standing open problem in theoretical physics. Somewhat recently, an idea that has gained a lot of traction (and that Spiros has blogged about before) is emergence. This is the idea that space and time may emerge from some more fine-grained quantum objects and their interactions. If we could understand how classical spacetime emerges from an underlying quantum system, then it’s not too much of a stretch to hope that this understanding would give us insight into the full quantum nature of spacetime.
One type of emergence is exhibited in holography, which is the idea that certain (D+1)-dimensional systems with gravity are exactly equivalent to D-dimensional quantum theories without gravity. (Note that we’re calling time a dimension here. For example, you would say that on a day-to-day basis we experience D = 4 dimensions.) In this case, that extra +1 dimension and the concomitant gravitational dynamics are emergent phenomena.
A nice aspect of holography is that it is explicitly realized by the AdS/CFT correspondence. This correspondence proposes that a particular class of spacetimes—ones that asymptotically look like anti-de Sitter space, or AdS—are equivalent to states of a particular type of quantum system—a conformal field theory, or CFT. A convenient visualization is to draw the AdS spacetime as a cylinder, where time marches forward as you move up the cylinder and different slices of the cylinder correspond to snapshots of space at different instants of time. Conveniently, in this picture you can think of the corresponding CFT as living on the boundary of the cylinder, which, you should note, has one less dimension than the “bulk” inside the cylinder.
Even within this nice picture of holography that we get from the AdS/CFT correspondence, there is the question of how exactly CFT, or boundary, quantities map onto quantities in the AdS bulk. This is where certain tools from quantum information theory called tensor networks have recently shown a lot of promise.
A tensor network is a way to efficiently represent certain states of a quantum system. Moreover, it has a nice graphical representation, which looks something like this:
Beni discussed one type of tensor network in his post on holographic codes. In this post, let’s discuss the tensor network shown above, which is known as the Multiscale Entanglement Renormalization Ansatz, or MERA.
The MERA was initially developed by Guifre Vidal and Glen Evenbly as an efficient approximation to the ground state of a CFT. Roughly speaking, in the picture of a MERA above, one starts with a simple state at the centre, and as you move outward through the network, the MERA tells you how to build up a CFT state which lives on the legs at the boundary. The MERA caught the eye of Brian Swingle, who noticed that it looks an awfully lot like a discretization of a slice of the AdS cylinder shown above. As such, it wasn’t a preposterously big leap to suggest a possible “AdS/MERA correspondence.” Namely, perhaps it’s more than a simple coincidence that a MERA both encodes a CFT state and resembles a slice of AdS. Perhaps the MERA gives us the tools that are required to construct a map between the boundary and the bulk!
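For the curious, the basic coarse-graining ingredient of a MERA is an isometry, which a few lines of numpy can illustrate (a generic random isometry, sketched by me, not an optimized MERA tensor):

```python
import numpy as np

rng = np.random.default_rng(1)
d, chi = 2, 2  # physical site dimension; coarse-grained ("bond") dimension

# A random isometry from the QR decomposition: its columns are orthonormal,
# so w embeds C^chi into C^(d*d) and w^dagger coarse-grains two sites into one.
A = rng.normal(size=(d * d, chi)) + 1j * rng.normal(size=(d * d, chi))
w, _ = np.linalg.qr(A)

print(np.allclose(w.conj().T @ w, np.eye(chi)))  # isometry condition: True

# Coarse-graining a normalized two-site state can only shrink its norm;
# the lost weight is what this renormalization step discards.
psi = rng.normal(size=d * d) + 1j * rng.normal(size=d * d)
psi /= np.linalg.norm(psi)
print(np.linalg.norm(w.conj().T @ psi))  # <= 1
```

Stacking layers of such isometries (interleaved with the "disentanglers" that give the MERA its multiscale structure) is what builds up the network drawn above.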
So, how seriously should one take the possibility of an AdS/MERA correspondence? That’s the question that my colleagues and I addressed in a recent paper. Essentially, there are several properties that a consistent holographic theory should satisfy in both the bulk and the boundary. We asked whether these properties are still simultaneously satisfied in a correspondence where the bulk and boundary are related by a MERA.
What we found was that you invariably run into inconsistencies between bulk and boundary physics, at least in the simplest construals of what an AdS/MERA correspondence might be. This doesn't mean that there is no hope for an AdS/MERA correspondence. Rather, it says that the simplest approach will not work. For a good correspondence, you would need to augment the MERA with some additional structure, or perhaps consider different tensor networks altogether. For instance, the holographic code features a tensor network which hints at a possible bulk/boundary correspondence, and the consistency conditions that we proposed are a good list of checks for Beni and company as they work out the extent to which the code can describe holographic CFTs. Indeed, a good way to summarize how our work fits into the picture of quantum gravity alongside holography and tensor networks is by saying that it's nice to have good signposts on the road when you don't have a map.
# Hello, my name is QUANTUM MASTER EQUATION
“Why does it have that name?”
“I don’t know.” Lecturers have shrugged. “It’s just a name.”
This spring, I asked about master equations. I thought of them as tools used in statistical mechanics, the study of vast numbers of particles. We can't measure vast numbers of particles, so we can't learn everything one might want to know about stat-mech systems. The magma beneath Santorini, for example, consists of about $10^{24}$ molecules. Good luck measuring every one.
Imagine, as another example, using a quantum computer to solve a problem. We load information by initializing the computer to a certain state: We orient the computer’s particles in certain directions. We run a program, then read out the output.
Suppose the computer sits on a tabletop, exposed to the air like leftover casserole no one wants to save for tomorrow. Air molecules bounce off the computer, becoming entangled with the hardware. This entanglement, or quantum correlation, alters the computer’s state, just as flies alter a casserole.* To understand the computer’s output—which depends on the state, which depends on the air—we must have a description of the air. But we can’t measure all those air molecules, just as we can’t measure all the molecules in Santorini’s magma.
We can package our knowledge about the computer’s state into a mathematical object, called a density operator, labeled by ρ(t). A quantum master equation describes how ρ(t) changes. I had no idea, till this spring, why we call master equations “master equations.” Had someone named “John Master” invented them? Had the inspiration for the Russell Crowe movie Master and Commander? Or the Igor who lisps, “Yeth, mathter” in adaptations of Frankenstein?
Jenia Mozgunov, a fellow student and Preskillite, proposed an answer: Using master equations, we can calculate how averages of observable properties change. Imagine describing a laser, a cavity that spews out light. A master equation reveals how the average number of photons (particles of light) in the cavity changes. We want to predict these averages because experimentalists measure them. Because master equations spawn many predictions—many equations—they merit the label “master.”
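To make Jenia's point concrete, here is a toy master equation in action (a sketch of my own, not tied to any particular lab): photon loss from a cavity, for which the average photon number predicted by the equation decays exponentially.

```python
import numpy as np

N = 10        # Fock-space truncation (plenty for a few photons)
kappa = 1.0   # cavity decay rate, arbitrary units

a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation op: a|n> = sqrt(n)|n-1>
n_op = a.conj().T @ a                     # photon-number operator

def lindblad_rhs(rho):
    """Photon-loss master equation, in a frame rotating at the cavity
    frequency: d(rho)/dt = kappa * (a rho a^dag - {a^dag a, rho}/2)."""
    return kappa * (a @ rho @ a.conj().T - 0.5 * (n_op @ rho + rho @ n_op))

# Start with exactly 3 photons and integrate with small Euler steps.
rho = np.zeros((N, N))
rho[3, 3] = 1.0
dt, T = 1e-3, 1.0
for _ in range(int(T / dt)):
    rho = rho + dt * lindblad_rhs(rho)

n_avg = np.trace(n_op @ rho).real
print(n_avg, 3 * np.exp(-kappa * T))  # master equation gives <n> = n0 e^(-kt)
```

One equation for $\rho(t)$ spawns predictions for every observable average you care to compute, which supports Jenia's hypothesis about the name.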
Jenia’s hypothesis appealed to me, but I wanted certainty. I wanted Truth. I opened my laptop and navigated to Facebook.
“Does anyone know,” I wrote in my status, “why master equations are called ‘master equations’?”
Ian Durham, a physicist at St. Anselm College, cited Tom Moore’s Six Ideas that Shaped Physics. Most physics problems, Ian wrote, involve “some overarching principle.” Example principles include energy conservation and invariance under discrete translations (the system looks the same after you step in some direction). A master equation encapsulates this principle.
Ian’s explanation sounded sensible. But fewer people “liked” his reply on Facebook than “liked” a quip by a college friend: Master equations deserve their name because “[t]hey didn’t complete all the requirements for the doctorate.”
My advisor, John Preskill, dug through two to three books, one set of lecture notes, one German Wikipedia page, one to two articles, and Google Scholar. He concluded that Nordsieck, Lamb, and Uhlenbeck coined “master equation.” According to a 1940 paper of theirs,** “When the probabilities of the elementary processes are known, one can write down a continuity equation for W [a set of probabilities], from which all other equations can be derived and which we will call therefore the ‘master’ equation.”
“Are you sure you were meant to be a physicist,” I asked John, “rather than a historian?”
“Procrastination is a powerful motivator,” he replied.
Lecturers have shrugged at questions about names. Then they’ve paused, pondered, and begun, “I guess because…” Theorems and identities derive their names from symmetries, proof techniques, geometric illustrations, and applications to problems I’d thought unrelated. A name taught me about uses for master equations. Names reveal physics I wouldn’t learn without asking about names. Names aren’t just names. They’re lamps and guides.
Pity about the origin of “master equation,” though. I wish an Igor had invented them.
*Apologies if I’ve spoiled your appetite.
**A. Nordsieck, W. E. Lamb, and G. E. Uhlenbeck, “On the theory of cosmic-ray showers I,” Physica 7, 344-60 (1940), p. 353.
# Mingling stat mech with quantum info in Maryland
I felt like a yoyo.
I was standing in a hallway at the University of Maryland. On one side stood quantum-information theorists. On the other side stood statistical-mechanics scientists.* The groups eyed each other, like Jets and Sharks in West Side Story, except without fighting or dancing.
This March, the groups were generous enough to host me for a visit. I parked first at QuICS, the Joint Center for Quantum Information and Computer Science. Established in October 2014, QuICS had moved into renovated offices the previous month. QuICSland boasts bright colors, sprawling armchairs, and the scent of novelty. So recently had QuICS arrived that the restroom had not acquired toilet paper (as I learned later than I’d have preferred).
Photo credit: QuICS
From QuICS, I yoyo-ed to the chemistry building, where Chris Jarzynski’s group studies fluctuation relations. Fluctuation relations, introduced elsewhere on this blog, describe out-of-equilibrium systems. A system is out of equilibrium if large-scale properties of it change. Many systems operate out of equilibrium—boiling soup, combustion engines, hurricanes, and living creatures, for instance. Physicists want to describe nonequilibrium processes but have trouble: Living creatures are complicated. Hence the buzz about fluctuation relations.
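For readers who like checking such claims numerically: the flagship fluctuation relation, the Jarzynski equality $\langle e^{-\beta W} \rangle = e^{-\beta \Delta F}$, can be verified in a few lines. The sketch below is my own toy illustration (not from the post): a particle equilibrated in a harmonic trap whose stiffness is quenched instantaneously from $k_0$ to $k_1$, so the work is $W = (k_1 - k_0)x^2/2$ and the exact free-energy difference is $\Delta F = \ln(k_1/k_0)/(2\beta)$.

```python
import numpy as np

# Check Jarzynski's equality <exp(-beta*W)> = exp(-beta*dF) for a toy
# out-of-equilibrium process: a particle equilibrated in a harmonic trap
# U = k0*x^2/2 whose stiffness is switched instantaneously to k1.
rng = np.random.default_rng(0)
beta, k0, k1 = 1.0, 1.0, 2.0

# Initial positions sampled from the k0 equilibrium (Gaussian) distribution.
x = rng.normal(scale=1.0 / np.sqrt(beta * k0), size=500_000)

# Work performed on the particle by the instantaneous switch.
work = 0.5 * (k1 - k0) * x**2

lhs = np.exp(-beta * work).mean()    # <exp(-beta*W)>: a nonequilibrium average
dF = np.log(k1 / k0) / (2.0 * beta)  # exact equilibrium free-energy difference
rhs = np.exp(-beta * dF)             # here sqrt(k0/k1), about 0.7071

print(lhs, rhs)  # the two sides agree to within sampling error
```

An instantaneous quench is about as "violent" as driving gets, yet the equality still recovers an equilibrium quantity from the work statistics.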
My first Friday in Maryland, I presented a seminar about quantum voting for QuICS. The next Tuesday, I was to present about one-shot information theory for stat-mech enthusiasts. Each week, the stat-mech crowd invites its speaker to lunch. Chris Jarzynski recommended I invite QuICS. Hence the Jets-and-Sharks tableau.
“Have you interacted before?” I asked the hallway.
“No,” said a voice. QuICS hadn’t existed till last fall, and some QuICSers hadn’t had offices till the previous month.**
Silence.
“We’re QuICS,” volunteered Stephen Jordan, a quantum-computation theorist, “the Joint Center for Quantum Information and Computer Science.”
So began the mingling. It continued at lunch, which we shared at three circular tables we’d dragged into a chain. The mingling continued during the seminar, as QuICSers sat with chemists, materials scientists, and control theorists. The mingling continued the next day, when QuICSer Alexey Gorshkov joined my discussion with the Jarzynski group. Back and forth we yoyo-ed, between buildings and topics.
“Mingled,” said Yigit Subasi. Yigit, a postdoc of Chris’s, specialized in quantum physics as a PhD student. I’d asked how he thinks about quantum fluctuation relations. Since Chris and colleagues ignited fluctuation-relation research, theorems have proliferated like vines in a jungle. Everyone and his aunty seems to have invented a fluctuation theorem. I canvassed Marylanders for bushwhacking tips.
Imagine, said Yigit, a system whose state you know. Imagine a gas, whose temperature you’ve measured, at equilibrium in a box. Or imagine a trapped ion. Begin with a state about which you have information.
Imagine performing work on the system “violently.” Compress the gas quickly, so the particles roil. Shine light on the ion. The system will leave equilibrium. “The information,” said Yigit, “gets mingled.”
Imagine halting the compression. Imagine switching off the light. Combine your information about the initial state with assumptions and physical laws.*** Manipulate equations in the right way, and the information might “unmingle.” You might capture properties of the violence in a fluctuation relation.
With Zhiyue Lu and Andrew Maven Smith of Chris Jarzynski’s group (left) and with QuICSers (right)
I’m grateful to have exchanged information in Maryland, to have yoyo-ed between groups. We have work to perform together. I have transformations to undergo.**** Let the unmingling begin.
With gratitude to Alexey Gorshkov and QuICS, and to Chris Jarzynski and the University of Maryland Department of Chemistry, for their hospitality, conversation, and camaraderie.
*Statistical mechanics is the study of systems that contain vast numbers of particles, like the air we breathe and white dwarf stars. I harp on about statistical mechanics often.
**Before QuICS’s birth, a future QuICSer had collaborated with a postdoc of Chris’s on combining quantum information with fluctuation relations.
***Yes, physical laws are assumptions. But they’re glorified assumptions.
****Hopefully nonviolent transformations.
# Quantum gravity from quantum error-correcting codes?
The lessons we learned from the Ryu-Takayanagi formula, the firewall paradox and the ER=EPR conjecture have convinced us that quantum information theory can become a powerful tool to sharpen our understanding of various problems in high-energy physics. But, many of the concepts utilized so far rely on entanglement entropy and its generalizations, quantities developed by Von Neumann more than 60 years ago. We live in the 21st century. Why don’t we use more modern concepts, such as the theory of quantum error-correcting codes?
In a recent paper with Daniel Harlow, Fernando Pastawski and John Preskill, we have proposed a toy model of the AdS/CFT correspondence based on quantum error-correcting codes. Fernando has already written how this research project started after a fateful visit by Daniel to Caltech and John’s remarkable prediction in 1999. In this post, I hope to write an introduction which may serve as a reader’s guide to our paper, explaining why I’m so fascinated by the beauty of the toy model.
This is certainly a challenging task because I need to make it accessible to everyone while explaining real physics behind the paper. My personal philosophy is that a toy model must be as simple as possible while capturing key properties of the system of interest. In this post, I will try to extract some key features of the AdS/CFT correspondence and construct a toy model which captures these features. This post may be a bit technical compared to other recent posts, but anyway, let me give it a try…
Bulk locality paradox and quantum error-correction
The AdS/CFT correspondence says that there is some kind of correspondence between quantum gravity on (d+1)-dimensional asymptotically-AdS space and d-dimensional conformal field theory on its boundary. But how are they related?
The AdS-Rindler reconstruction tells us how to “reconstruct” a bulk operator from boundary operators. Consider a bulk operator $\phi$ and a boundary region A on a hyperbolic space (in other words, a negatively-curved plane). On a fixed time-slice, the causal wedge of A is a bulk region enclosed by the geodesic line of A (a curve with a minimal length). The AdS-Rindler reconstruction says that $\phi$ can be represented by some integral of local boundary operators supported on A if and only if $\phi$ is contained inside the causal wedge of A. Of course, there are multiple regions A,B,C,… whose causal wedges contain $\phi$, and the reconstruction should work for any such region.
The Rindler-wedge reconstruction
That a bulk operator in the causal wedge can be reconstructed by local boundary operators, however, leads to a rather perplexing paradox in the AdS/CFT correspondence. Consider a bulk operator $\phi$ at the center of a hyperbolic space, and split the boundary into three pieces, A, B, C. Then the geodesic line for the union of BC encloses the bulk operator, that is, $\phi$ is contained inside the causal wedge of BC. So, $\phi$ can be represented by local boundary operators supported on BC. But the same argument applies to AB and CA, implying that the bulk operator $\phi$ corresponds to local boundary operators which are supported inside AB, BC and CA simultaneously. It would seem then that the bulk operator $\phi$ must correspond to an identity operator times a complex phase. In fact, similar arguments apply to any bulk operators, and thus, all the bulk operators must correspond to identity operators on the boundary. Then, the AdS/CFT correspondence seems so boring…
The bulk operator at the center is contained inside causal wedges of BC, AB, AC. Does this mean that the bulk operator corresponds to an identity operator on the boundary?
Almheiri, Dong and Harlow have recently proposed an intriguing way of reconciling this paradox with the AdS/CFT correspondence. They proposed that the AdS/CFT correspondence can be viewed as a quantum error-correcting code. Their idea is as follows. Instead of $\phi$ corresponding to a single boundary operator, $\phi$ may correspond to different operators in different regions, say $O_{AB}$, $O_{BC}$, $O_{CA}$ living in AB, BC, CA respectively. Even though $O_{AB}$, $O_{BC}$, $O_{CA}$ are different boundary operators, they may be equivalent inside a certain low energy subspace on the boundary.
This situation resembles the so-called quantum secret-sharing code. The quantum information at the center of the bulk cannot be accessed from any single party A, B or C because $\phi$ does not have representation on A, B, or C. It can be accessed only if multiple parties cooperate and perform joint measurements. It seems that a quantum secret is shared among three parties, and the AdS/CFT correspondence somehow realizes the three-party quantum secret-sharing code!
Entanglement wedge reconstruction?
Recently, causal wedge reconstruction has been further generalized to the notion of entanglement wedge reconstruction. Imagine we split the boundary into four pieces A,B,C,D such that A,C are larger than B,D. Then the geodesic lines for A and C do not form the geodesic line for the union of A and C because we can draw shorter arcs by connecting endpoints of A and C, which form the global geodesic line. The entanglement wedge of AC is a bulk region enclosed by this global geodesic line of AC. And the entanglement wedge reconstruction predicts that $\phi$ can be represented as an integral of local boundary operators on AC if and only if $\phi$ is inside the entanglement wedge of AC [1].
Causal wedge vs entanglement wedge.
Building a minimal toy model; the five-qubit code
Okay, now let’s try to construct a toy model which admits causal and entanglement wedge reconstructions of bulk operators. Because I want a simple toy model, I take a rather bold assumption that the bulk consists of a single qubit while the boundary consists of five qubits, denoted by A, B, C, D, E.
Reconstruction of a bulk operator in the “minimal” model.
What does causal wedge reconstruction teach us in this minimal setup of five and one qubits? First, we split the boundary system into two pieces, ABC and DE and observe that the bulk operator $\phi$ is contained inside the causal wedge of ABC. From the rotational symmetries, we know that the bulk operator $\phi$ must have representations on ABC, BCD, CDE, DEA, EAB. Next, we split the boundary system into four pieces, AB, C, D and E, and observe that the bulk operator $\phi$ is contained inside the entanglement wedge of AB and D. So, the bulk operator $\phi$ must have representations on ABD, BCE, CDA, DEB, EAC. In summary, we have the following:
• The bulk operator must have representations on R if and only if R contains three or more qubits.
This is the property I want my toy model to possess.
What kinds of physical systems have such a property? Luckily, we quantum information theorists know the answer; the five-qubit code. The five-qubit code, proposed here and here, has an ability to encode one logical qubit into five-qubit entangled states and corrects any single qubit error. We can view the five-qubit code as a quantum encoding isometry from one-qubit states to five-qubit states:
$\alpha | 0 \rangle + \beta | 1 \rangle \rightarrow \alpha | \tilde{0} \rangle + \beta | \tilde{1} \rangle$
where $| \tilde{0} \rangle$ and $| \tilde{1} \rangle$ are the basis for a logical qubit. In quantum coding theory, logical Pauli operators $\bar{X}$ and $\bar{Z}$ are Pauli operators which act like Pauli X (bit flip) and Z (phase flip) on a logical qubit spanned by $| \tilde{0} \rangle$ and $| \tilde{1} \rangle$. In the five-qubit code, for any set of qubits R with volume 3, some representations of logical Pauli X and Z operators, $\bar{X}_{R}$ and $\bar{Z}_{R}$, can be found on R. While $\bar{X}_{R}$ and $\bar{X}_{R'}$ are different operators for $R \not= R'$, they act exactly in the same manner on the codeword subspace spanned by $| \tilde{0} \rangle$ and $| \tilde{1} \rangle$. This is exactly the property I was looking for.
Holographic quantum error-correcting codes
We just found possibly the smallest toy model of the AdS/CFT correspondence, the five-qubit code! The remaining task is to construct a larger model. For this goal, we view the encoding isometry of the five-qubit code as a six-leg tensor. The holographic quantum code is a network of such six-leg tensors covering a hyperbolic space where each tensor has one open leg. These open legs on the bulk are interpreted as logical input legs of a quantum error-correcting code while open legs on the boundary are identified as outputs where quantum information is encoded. Then the entire tensor network can be viewed as an encoding isometry.
The six-leg tensor has some nice properties. Imagine we inject some Pauli operator into one of six legs in the tensor. Then, for any given choice of three legs, there always exists a Pauli operator acting on them which counteracts the effect of the injection. An example is shown below:
In other words, if an operator is injected from one tensor leg, one can “push” it into other three tensor legs.
Finally, let’s demonstrate causal wedge reconstruction of bulk logical operators. Pick an arbitrary open tensor leg in the bulk and inject some Pauli operator into it. We can “push” it into three tensor legs, which are then injected into neighboring tensors. By repeatedly pushing operators to the boundary in the network, we eventually have some representation of the operator living on a piece of boundary region A. And the bulk operator is contained inside the causal wedge of A. (Here, the length of the curve can be defined as the number of tensor legs cut by the curve). You can also push operators into the boundary by choosing different tensor legs which lead to different representations of a logical operator. You can even have a rather exotic representation which is supported non-locally over two disjoint pieces of the boundary, realizing entanglement wedge reconstruction.
Causal wedge and entanglement wedge reconstruction.
What’s next?
This post is already pretty long and I need to wrap it up…
Shor’s quantum factoring algorithm is a revolutionary invention which opened a whole new research avenue of quantum information science. It is often forgotten, but the first quantum error-correcting code is another important invention by Peter Shor (and independently by Andrew Steane) which enabled a proof that the quantum computation can be performed fault-tolerantly. The theory of quantum error-correcting codes has found interesting applications in studies of condensed matter physics, such as topological phases of matter. Perhaps then, quantum coding theory will also find applications in high energy physics.
Indeed, many interesting open problems are awaiting us. Is entanglement wedge reconstruction a generic feature of tensor networks? How do we describe black holes by quantum error-correcting codes? Can we build a fast scrambler by tensor networks? Is entanglement a wormhole (or maybe a perfect tensor)? Can we resolve the firewall paradox by holographic quantum codes? Can the physics of quantum gravity be described by tensor networks? Or can the theory of quantum gravity provide us with novel constructions of quantum codes?
I feel that now is the time for quantum information scientists to jump into the research of black holes. We don’t know if we will be burned by a firewall or not … , but it is worth trying.
1. Whether entanglement wedge reconstruction is possible in the AdS/CFT correspondence or not still remains controversial. In the spirit of the Ryu-Takayanagi formula which relates entanglement entropy to the length of a global geodesic line, entanglement wedge reconstruction seems natural. But that a bulk operator can be reconstructed from boundary operators on two separate pieces A and C non-locally sounds rather exotic. In our paper, we constructed a toy model of tensor networks which allows both causal and entanglement wedge reconstruction in many cases. For details, see our paper.
# Formal Methods with a Touch of Magic
Machine learning and formal methods have complementary benefits and drawbacks. In this work, we address the controller-design problem with a combination of techniques from both fields. The use of black-box neural networks in deep reinforcement learning (deep RL) poses a challenge for such a combination. Instead of reasoning formally about the output of deep RL, which we call the wizard, we extract from it a decision-tree based model, which we refer to as the magic book. Using the extracted model as an intermediary, we are able to handle problems that are infeasible for either deep RL or formal methods by themselves. First, we suggest, for the first time, combining a magic book in a synthesis procedure. We synthesize a stand-alone correct-by-design controller that enjoys the favorable performance of RL. Second, we incorporate a magic book in a bounded model checking (BMC) procedure. BMC allows us to find numerous traces of the plant under the control of the wizard, which a user can use to increase the trustworthiness of the wizard and direct further training.
## I Introduction
Machine-learning techniques and, in particular, the use of neural networks (NNs), are exploding in popularity and becoming a vital part of the development of many technologies. There is a challenge, however, in deploying systems that use trained components, which are inherently black-box. For a system to be used by a human, it must be trustworthy: provably correct, or predictable, at the least. Current trained systems typically possess neither of these properties.
In this work, we focus on the controller-design problem. Abstractly speaking, a controller is a device that interacts with a plant. At each time step, the plant outputs its state and the controller feeds back an action. Combining techniques from both formal methods and machine learning is especially appealing in the controller-design problem since it is critical that the designed controller is both correct and that it optimizes plant performance.
Reinforcement learning (RL) is the main machine-learning tool for designing controllers. The RL approach is based on “trial and error”: the agent randomly explores its environment, receives rewards and learns from experience how to maximize them. RL has made a quantum leap in terms of scalability since the recent introduction of NNs into the approach, termed deep RL [1]. We call the output of deep RL the wizard: it optimizes plant performance but, since it is a NN, it does not reveal its decision procedure. More importantly, there are no guarantees on the wizard and it can behave unexpectedly and even incorrectly.
Reasoning about systems that use NNs poses a challenge for formal methods. First, in terms of scalability (NNs tend to be large), and second, because the operations that NNs depend on are challenging for formal-methods tools: NNs use numerical rather than Boolean operations, and ReLU neurons use the max operator, which SMT tools struggle with.
We propose a novel approach based on extracting a decision-tree-based model from the wizard, which approximates its operation and is intended to reveal its decision-making process. Hence, we refer to this model as the magic book. Our requirements for the magic book are that it is (1) simple enough for formal methods to use, and (2) a good approximation of the NN.
Extracting decision-tree-based models that approximate a complicated function is an established practice [2]. The assumption that allows this extraction to work is that a NN contains substantial redundancy. During training, the NN "learns" heuristics that it uses to optimize plant performance. The heuristics can be compactly captured in a small model, e.g., in a decision tree. This assumption has led, for example, to attempts at distilling knowledge from a trained NN to a second NN during its training [3, 4], and at minimizing NNs (e.g., [5]). The extraction of a simple model is especially common in explainable AI (XAI) [6], where the goal is to explain the operation of a learned system to a human user.
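As a toy illustration of the extraction step (my own sketch; the models and pipeline in the paper differ), one can query a black-box policy on sampled states and greedily grow a small axis-aligned decision tree that imitates it. The "wizard" below is a hypothetical hand-written policy standing in for a trained NN.

```python
import itertools
from collections import Counter

# Hypothetical black-box "wizard" standing in for a trained NN policy:
# drive a taxi toward a passenger on a line, pick up on arrival.
def wizard(taxi, passenger):
    if passenger > taxi:
        return 'right'
    if passenger < taxi:
        return 'left'
    return 'pick_up'

# Query the wizard on sampled states; the third feature (the signed
# distance) is an engineered feature that keeps the tree small.
data = [((t, p, p - t), wizard(t, p))
        for t, p in itertools.product(range(8), repeat=2)]

def majority(rows):
    return Counter(a for _, a in rows).most_common(1)[0][0]

def grow(rows, depth):
    """Greedily grow an axis-aligned decision tree imitating the wizard."""
    if depth == 0 or len({a for _, a in rows}) == 1:
        return majority(rows)              # leaf: an action
    best = None
    for f in range(3):                     # candidate feature index
        for thr in sorted({s[f] for s, _ in rows}):
            lo = [r for r in rows if r[0][f] <= thr]
            hi = [r for r in rows if r[0][f] > thr]
            if not lo or not hi:
                continue
            # error: samples disagreeing with their side's majority action
            err = (sum(a != majority(lo) for _, a in lo)
                   + sum(a != majority(hi) for _, a in hi))
            if best is None or err < best[0]:
                best = (err, f, thr, lo, hi)
    if best is None:
        return majority(rows)
    _, f, thr, lo, hi = best
    return (f, thr, grow(lo, depth - 1), grow(hi, depth - 1))

def book(tree, state):
    """Evaluate the extracted 'magic book' on a state."""
    while isinstance(tree, tuple):
        f, thr, lo, hi = tree
        tree = lo if state[f] <= thr else hi
    return tree

tree = grow(data, depth=3)
agree = sum(book(tree, s) == a for s, a in data) / len(data)
print(agree)  # 1.0: the tree reproduces this toy wizard exactly
```

The resulting tree is tiny and fully transparent, which is exactly what makes it amenable to synthesis and BMC, whereas the original black box is not.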
We use the tree-based magic book to solve problems that are infeasible both for deep RL and for formal methods alone. Specifically, we illustrate the magic book’s benefit in two approaches for designing controllers as we elaborate below.
Reactive synthesis [7] is a formal approach to design controllers. The input is a qualitative specification and the output is a correct-by-design controller. The fact that the controller is provably correct, is the strength of synthesis. A first weakness of traditional synthesis is that it is purely qualitative and specifications cannot naturally express quantitative performance. There is a recent surge of quantitative approaches to synthesis (e.g., [8, 9, 10]). However, these approaches suffer from other weaknesses of synthesis: deep RL vastly outperforms synthesis in terms of scalability. Also, in the average-case, RL-based controllers beat synthesized controllers since the goal in synthesis is to maximize worst-case performance.
Synthesis is often reduced to solving a two-player graph game; Player 1 represents the controller and Player 2 represents the plant. In each step, Player 2 reveals the current state of the plant and Player 1 responds by choosing an action. In our construction, when Player 2 chooses a state $s$, we extract from the magic book the action $a$ that is taken at $s$. Player 1's reward then depends on $a$, as we elaborate below. The construction of the game arena thus depends on the magic book, and using the wizard instead is infeasible.
We present a novel approach for introducing performance considerations into reactive synthesis. We synthesize a controller that satisfies a given qualitative specification while following the magic book as closely as possible. We formalize the latter as a quantitative objective: whenever Player 1 agrees with the action suggested by the magic book, it receives a reward, and the goal is to maximize rewards. Since the magic book is a proxy for the RL-generated wizard, we obtain the best of both worlds: a provably correct controller that enjoys the high average-case performance of RL. In our experiments, we synthesize a controller for a taxi that travels on a grid for the specification "visit a gas station every $k$ steps" while following advice from a wizard that is trained to collect as many passengers as possible in a given time frame.
In a second application, we use a magic book to relax the adversarial assumption on the environment in a multi-agent setting. We are thus able to synthesize controllers for specifications that are otherwise unrealizable, i.e., for which traditional synthesis does not return any controller. Our goal is to synthesize a controller for an agent that interacts with an environment that consists of other agents. Instead of modeling the other agents as adversarial, we assume that they operate according to a magic book. This restricts their possible actions and regains realizability. For example, suppose a taxi that is out of our control shares a network of roads with a bus, under our control. Our goal is to synthesize a controller that guarantees that the bus travels between two stations without crashing into a taxi. While an adversarial taxi can block the bus, by assuming that the taxi operates according to a magic book, we limit Player 2's actions in the game and find a winning Player 1 strategy that corresponds to a correct controller.
Bounded model checking [11] (BMC) is an established technique to find bounded traces of a system that satisfy a given specification. In a second approach to the controller-design problem, we use BMC as an XAI tool to increase the trustworthiness of a wizard before outputting it as the controller of the plant. We rely on BMC to find (many) traces of the plant under the control of the wizard that are tedious to find manually.
We solve BMC by constructing an SMT program that intuitively simulates the operation of the plant under the control of the magic book rather than under the control of the wizard. This leads to a simple reduction and a significant performance gain: in our experiments, we use the standard SMT solver Z3 [12] to extract thousands of witnesses within minutes, whereas Z3 is incapable of solving extremely modest wizard-based BMC instances. Since traces returned by BMC witness the magic book, a secondary simple test is required to check that the traces witness the wizard as well. In our experiments, we find that many traces are indeed shared between the two, since the magic book is a good approximation of the wizard. Thus, our procedure efficiently finds numerous traces of the plant under the control of the wizard.
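To illustrate the idea without an SMT solver (the paper's actual encoding targets Z3), the sketch below — entirely my own toy construction — unrolls the closed loop of a one-dimensional plant under a tree-like controller for a bounded number of steps and searches existentially over the nondeterministic choices (the initial state and each passenger respawn) for a trace witnessing a given behavior, here "two passengers collected within six steps".

```python
import itertools

GRID = 5  # a hypothetical one-dimensional "road" with cells 0..4

# Tree-like controller extracted from a wizard (hypothetical): step toward
# the passenger; action 0 collects when the taxi is on the passenger.
def book(taxi, passenger):
    if passenger > taxi:
        return +1
    if passenger < taxi:
        return -1
    return 0

def bmc(bound, target_pickups):
    """Bounded search for a trace of the plant under the book's control in
    which target_pickups passengers are collected within `bound` steps.
    The initial state and every respawn location are chosen existentially,
    mirroring the nondeterminism an SMT solver would resolve."""
    def search(taxi, passenger, trace, pickups):
        if pickups == target_pickups:
            return trace
        if len(trace) >= bound:
            return None
        a = book(taxi, passenger)
        if a == 0:  # collect; the passenger respawns nondeterministically
            for p2 in range(GRID):
                got = search(taxi, p2,
                             trace + [(taxi, passenger, 'collect')],
                             pickups + 1)
                if got is not None:
                    return got
            return None
        return search(taxi + a, passenger,
                      trace + [(taxi, passenger, a)], pickups)

    for taxi, passenger in itertools.product(range(GRID), repeat=2):
        trace = search(taxi, passenger, [], 0)
        if trace is not None:
            return trace
    return None

witness = bmc(bound=6, target_pickups=2)
print(witness)  # a bounded trace containing two 'collect' steps
```

An SMT encoding replaces the explicit search with symbolic step variables and lets the solver pick the nondeterministic choices, which is what makes the approach scale to many witnesses.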
A first application of BMC is in verification; namely, we find counterexamples for a given specification. For example, when controlling a taxi, a violation of a liveness property is an infinite loop in which no passenger is collected. We find it more appealing to use BMC as an XAI tool. For example, BMC allows us to find “suspicious” traces that are not necessarily incorrect; e.g., when controlling a taxi, a passenger that is not closest is collected first. Individual traces can serve as explanations. Alternatively, we use BMC’s ability to find many traces and gather a large dataset. We extract a small human-interpretable model from the dataset that attempts to explain the wizard’s decision-making procedure. For example, the model serves as an answer to the question: when does the wizard prefer collecting a passenger that is not closest?
### I-a Related work
We compare our synthesis approach to shielding [13, 14], which adds guarantees to a learned controller at runtime by monitoring the wizard and correcting its actions. Unlike shielding, the magic book allows us to open up the black-box wizard, which, for example, enables our controller to cross an obstacle that was not present in training, a task that is inherently impossible for a shield-based controller. A second key difference is that we produce stand-alone controllers whereas a shield-based approach needs to execute the NN wizard in each step. Our method is thus preferable in settings where running a NN is costly, e.g., embedded systems or real time systems.
To the best of our knowledge, synthesis in combination with a magic book was never studied. Previously, finding counterexamples for tree-based controllers that are extracted from NN controllers was studied in [15] and [16]. The ultimate goal in those works is to output a correct tree-based controller. A first weakness of this approach is that, since both wizard and magic book are trained, they exhibit many correctness violations. We believe that repairing them manually while maintaining high performance is a challenging task; our synthesis procedure assists in automating it. Second, in some cases, a designer would prefer to use a NN controller rather than a tree-based one, since NNs tend to generalize better than tree-based models. Hence, we promote the use of BMC for XAI to increase the trustworthiness of the wizard. Finally, the case studies in those works differ from ours, strengthening the claim that tree-based classifier extraction is not specific to our domain but rather a general concept.
A specialized wizard-based BMC tool was recently shown in [17], thus unlike our approach, there is no need to check that the output trace is also a witness for the wizard. More importantly, their method is “sound”: if their method terminates without finding a counterexample for bound , then there is indeed no violation of length . Beyond the disadvantages listed above, the main disadvantage of their approach is scalability, which is not clear in the paper. As we describe in the experiments section, our experience is that a wizard-based BMC implemented in Z3 does not scale.
Our BMC procedure finds traces that witness a temporal behavior of the plant. This is very different from finding adversarial examples, which are inputs slightly perturbed so as to lead to a different output. Finding adversarial examples and verifying robustness have attracted considerable attention for NNs (for example, [18, 19, 20]) as well as for random-forest classifiers (e.g., [21, 22]).
Somewhat similar in spirit to our approach is applying program synthesis to extract a program from a NN [23, 24], which, similar to the role of the magic book, is an alternative small model for application of formal methods.
Finally, examples of other combinations of RL with synthesis include works that run an online version of RL (see [25] and references therein), an execution of RL restricted to correct traces [26], and RL with safety specifications [27].
## Ii Preliminaries
#### Plant and controller
We formalize the interaction between a controller and a plant. The plant is modelled as a Markov decision process (MDP) $\mathcal{M} = \langle S, s_0, A, R, P \rangle$, where $S$ is a finite set of states, $s_0 \in S$ is the initial state, $A$ is a finite collection of actions, $R(s)$ is the reward provided in state $s$, and $P : S \times A \to \mathcal{D}(S)$ is a probabilistic transition function that, given a state and an action, produces a probability distribution over states.
###### Example 1.
Our running example throughout the paper is a taxi that travels on an $n \times n$ grid and collects passengers. Whenever a passenger is collected, it re-appears in a random location. A state of the plant contains the locations of the taxi and the passengers; thus it is a tuple $\langle p_0, p_1, \ldots, p_m \rangle$, where each $p_i$ is a pair of coordinates on the grid, $p_0$ is the position of the taxi, and, for $i \geq 1$, $p_i$ is the position of Passenger $i$. The set of actions consists of the moves on the grid, i.e., up, down, left, and right. The transitions of the plant are largely deterministic: given an action $a$, we obtain the updated state by updating the position of the taxi deterministically, and if the taxi collects a passenger, i.e., $p_0 = p_i$ for some $i \geq 1$, then the new position of Passenger $i$ is chosen uniformly at random.
The controller is a policy, which prescribes which action to take given the history of visited states, thus it is a function f : S* → A. A policy is positional if the action that it prescribes depends only on the current state, thus it is a function f : S → A. We are interested in finding an optimal and correct policy as we define below.
#### Qualitative correctness
We consider a strong notion of qualitative correctness that disregards probabilistic events, often called surely correctness. A specification φ is a set of infinite runs. We define the support of P at state s given action a as supp(s, a) = {s' : P(s, a)(s') > 0} and, for a policy f, we define the support of f at a history h ending in s to be supp(s, f(h)). We define the surely language of M w.r.t. f, denoted L(M, f), as follows. A run r = r0, r1, … is in L(M, f) iff we have r0 = s0 and, for every i ≥ 0, we have r(i+1) ∈ supp(ri, ai), where ai = f(r0 … ri). We say that f is surely-correct for plant M w.r.t. a specification φ iff it allows only correct runs of M, thus L(M, f) ⊆ φ.
#### Quantitative performance and deep reinforcement learning
The goal of reinforcement learning (RL) is to find a policy in an MDP that maximizes the expected reward [28]. In a finite MDP M, the state at time step t is a random variable, denoted S_t. Each time step entails a reward, which is also a random variable, denoted R_t. The probability that S_t and R_t get particular values depends solely on the previous state and action. Formally, S_0 = s0, and for t ≥ 0, the distribution of S_{t+1} is P(S_t, a_t), where a_t is the action taken at time t. We consider discounted rewards. Let γ ∈ (0, 1) be a discount factor. The expected reward that a policy f ensures starting at state s is V_f(s) = E[Σ_{t≥0} γ^t · R_{t+1}], where the expectation is defined w.r.t. f as in the above. The goal is to find the optimal policy f* that attains V*(s) = max_f V_f(s).
We consider the Q-learning algorithm for solving MDPs, which relies on a function Q : S × A → ℝ such that Q(s, a) represents the expected value under the assumption that the initial state is s and the first action to be taken is a, thus Q(s, a) = E[R_1 + γ · V*(S_1)]. Clearly, given the function Q, one can obtain an optimal positional policy f* by defining f*(s) = argmax_a Q(s, a), for every state s. In Q-learning, the Q function is estimated and iteratively refined using the Bellman equation.
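As a concrete reference point, tabular Q-learning with the Bellman update can be sketched as follows; the toy interface (`step`, `actions`) and all hyperparameters are illustrative assumptions.

```python
import random
from collections import defaultdict

def q_learning(initial, step, actions, episodes=500, horizon=20,
               alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    """Tabular Q-learning sketch. `step(s, a)` returns (next_state,
    reward). Each visit refines Q(s, a) toward the Bellman target
    r + gamma * max_b Q(s', b); the learned positional policy
    picks argmax_a Q(s, a)."""
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        s = initial
        for _ in range(horizon):
            # epsilon-greedy exploration
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda b: Q[(s, b)])
            s2, r = step(s, a)
            target = r + gamma * max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return lambda s: max(actions, key=lambda b: Q[(s, b)])
```

Deep RL replaces the table Q with a neural network trained toward the same Bellman target.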
Traditional implementations of Q-learning assume that the MDP is represented explicitly. Deep RL [1] implements the Q-learning algorithm using a symbolic representation of the MDP as a NN. The NN takes as input a state s and outputs, for each action a, an estimate of Q(s, a). The technical challenge in deep RL is that it combines training of the NN with estimating the Q function. We call the NN that deep RL outputs the wizard. Even though deep RL does not provide any guarantees on the wizard, in practice it has shown remarkable success.
#### Magic books from decision-tree-based classifiers
Recall that the output of deep RL is a positional function Wiz that is represented by a NN. We are interested in extracting a small function MB of the same type that approximates Wiz well. We use decision-tree based classifiers as our model of choice for MB. Each internal node of a decision tree is labeled with a predicate over the state variables and each leaf is labeled with an action in A. A plant state s gives rise to a unique path in a decision tree T, denoted path(s), in the expected manner. The first node is the root. Upon visiting an internal node labeled θ, the next node depends on the satisfaction value of θ in s. Suppose θ1, …, θm is the sequence of predicates traversed by a path p; we use θp to denote their conjunction, with the polarities prescribed by the branches of p. Thus, for every state s we have path(s) = p iff s satisfies θp. When p ends in a leaf labeled a, we say that the tree votes for a. A forest contains several trees. On input s, each tree votes for an action and the action receiving most votes is output.
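The path-following and majority-vote semantics can be sketched as follows, with trees represented as nested tuples (an illustrative encoding, not the paper's):

```python
from collections import Counter

def tree_vote(tree, state):
    """Follow the unique path of `state` through one decision tree.
    A tree is ("leaf", action) or ("node", predicate, yes, no)."""
    while tree[0] == "node":
        _, pred, yes, no = tree
        tree = yes if pred(state) else no
    return tree[1]  # the action labeling the reached leaf

def forest_vote(forest, state):
    """Each tree votes for an action; the majority action is output."""
    votes = Counter(tree_vote(t, state) for t in forest)
    return votes.most_common(1)[0][0]
```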
To obtain MB from Wiz, we first execute Wiz with the plant for a considerable number of steps to collect pairs of the form (s_t, Wiz(s_t)), where s_t is the system state at time t. We then employ standard techniques on this dataset to construct either a decision tree, or a forest of decision trees using the state-of-the-art random forest [29] or XGBoost [30] techniques.
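The distillation loop can be sketched as follows. The closed-loop rollout is shown in full; the final fit is only indicated in a comment, since the paper uses off-the-shelf scikit-learn models for that step. The `wizard` and `plant_step` interfaces are illustrative assumptions.

```python
import random

def collect_dataset(wizard, plant_step, initial, n_steps, seed=0):
    """Run the wizard in closed loop with the plant and record
    (state, chosen action) pairs: the training set for the magic book."""
    rng = random.Random(seed)
    data, s = [], initial
    for _ in range(n_steps):
        a = wizard(s)
        data.append((s, a))
        s = plant_step(s, a, rng)
    return data

# The pairs are then fed to a tree-based learner, e.g.
#   sklearn.ensemble.RandomForestClassifier(n_estimators=..., max_depth=...)
# fit on the states (features) and the wizard's actions (labels).
```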
###### Remark 1.
One might wonder whether the wizard is an essential step in the construction of the magic book. That is, whether it is possible to obtain a decision tree directly from RL. While some attempts have been made to use decision trees to succinctly represent a policy [31], the combination of decision trees with RL is not as natural as it is with other models (such as NNs). It has never shown great success and has largely been abandoned. Thus, we argue that the wizard is indeed essential. Extracting a decision-tree controller from a NN was also done in [15, 16].
## III Synthesis with a Touch of Magic
Our primary goal in this section is to automatically construct a correct controller; performance is a secondary consideration. We incorporate a magic book into synthesis and illustrate two applications of the construction that are infeasible without a magic book.
### III-A Constructing a game
Synthesis is often reduced to a two-player graph game (see [32]). In this section, we describe a construction of a game arena that is based on a magic book, and in the next sections we complete the construction by describing the players’ objectives and illustrating applications. In the traditional game, Player 1 represents the environment and, in each turn, he reveals the current location of the plant. Player 2, who represents the controller, answers with an action. A strategy for Player 2 corresponds to a policy (controller) since, given the history of observed plant states, it prescribes which action to feed to the plant next. The traditional goal is to find a Player 2 strategy that guarantees that a given specification is satisfied no matter how Player 1 plays. Traditional synthesis is purely qualitative; namely, it returns some correct policy with no consideration of its performance. When no correct controller exists, we say that the specification is unrealizable.
Formally, a graph game is played on an arena ⟨V, A1, A2, δ⟩, where V is a set of vertices, for i ∈ {1, 2}, Player i’s possible actions are Ai, and δ is a deterministic transition function. The game proceeds by placing a token on a vertex in V. When the token is placed on a vertex v, Player 1 moves first and chooses an action in A1. Then, Player 2 chooses an action in A2 and the token proceeds according to δ. In games, rather than using the term “policy”, we use the term strategy. Two strategies σ and τ for the two players and an initial vertex induce a unique infinite play, which we denote by play(σ, τ).
We describe our construction, in which the roles of the players are slightly altered. Consider a plant with state space S and actions A. The arena of our synthesis game is based on two abstractions of S: a collection 𝒰 of subsets of S that we assume is provided by a user, and a partition ℬ that is extracted from the magic book. The arena is ⟨𝒰, ℬ, A, δ⟩, where δ is defined below. Suppose that the token is placed on U ∈ 𝒰 (see Fig. 1). Intuitively, the actual location of the plant is a state s with s ∈ U. Player 1 moves first and chooses a set B ∈ ℬ such that U ∩ B ≠ ∅. Intuitively, a Player 1 action reveals that the actual state of the plant is in U ∩ B. Player 2 reacts by choosing an action a ∈ A. We denote by next(U ∩ B, a) the set of possible next locations the plant can be in, thus the union of the supports supp(s, a) over s ∈ U ∩ B. Then, the next vertex in the game according to δ is the minimal-sized set U′ ∈ 𝒰 such that next(U ∩ B, a) ⊆ U′.
Suppose for ease of presentation that the magic book is a single decision tree; the construction easily generalizes to forests. Recall that a state s produces a unique path path(s), which corresponds to a sequence of predicates whose conjunction θp characterizes exactly the states s with path(s) = p. We define the partition ℬ to contain one set per root-to-leaf path p, namely the set of states satisfying θp. An immediate consequence of the construction is the following.
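The partition induced by a tree is straightforward to compute: two states lie in the same cell exactly when they induce the same root-to-leaf path. A sketch, using an illustrative nested-tuple tree encoding:

```python
def cell_of(tree, state, path=()):
    """Return the branch-decision sequence of `state`'s unique path
    through the tree; states with equal sequences satisfy the same
    path predicate and hence lie in the same abstraction cell.
    A tree is ("leaf", action) or ("node", predicate, yes, no)."""
    if tree[0] == "leaf":
        return path
    _, pred, yes, no = tree
    taken = pred(state)
    return cell_of(yes if taken else no, state, path + (taken,))
```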
###### Lemma 1.
For every B ∈ ℬ there is an action a ∈ A such that MB(s) = a, for all s ∈ B.
In the following lemma we formalize the intuition that Player 1 over-approximates the plant. It is not hard, given a Player 2 strategy τ, to obtain a policy fτ that follows it. For a state s ∈ S, we use U(s) and B(s) to denote the unique abstract sets in 𝒰 and ℬ, respectively, that s belongs to.
###### Lemma 2.
Let τ be a Player 2 strategy. Consider a trace r = r0, r1, … ∈ L(M, fτ). Then, there is a Player 1 strategy σ such that the i-th vertex of play(σ, τ) is U(ri), for all i ≥ 0.
###### Proof.
We define σ inductively so that for every i ≥ 0, the i-th vertex of play(σ, τ) is U(ri). Suppose the invariant holds for i; Player 1 chooses B(ri). The definition of δ implies that the invariant is maintained, thus the (i+1)-th vertex is U(r(i+1)). ∎
We note that the converse of Lemma 2 is not necessarily correct, thus Player 1 strictly over-approximates the plant. Indeed, suppose that the token is placed on U, Player 1 chooses B, Player 2 chooses a, and the token proceeds to U′. Intuitively, the plant state was in U ∩ B and thus should now be in next(U ∩ B, a). In the subsequent move, however, Player 1 is allowed to choose any B′ with U′ ∩ B′ ≠ ∅, even one that does not intersect next(U ∩ B, a).
### III-B Synthesis that follows the magic book
In this section, we abstain from solving the problem of finding a correct and optimal controller; a problem that is computationally hard for explicit systems, not to mention symbolically-represented systems like the ones we consider. Instead, in order to add performance considerations to synthesis, we think of the wizard as an authority in terms of performance and solve the (hopefully simpler) problem of constructing a correct controller that follows the wizard’s actions as closely as possible. We use the magic book as a proxy for the wizard and assume that following its actions most of the time results in favorable performance.
The game arena is constructed as in the previous section. Player 2’s goal is to ensure that a given specification φ is satisfied while optimizing a quantitative objective that we use to formalize the notion of “following the magic book”. For simplicity, we consider finite plays, and the definitions can be generalized to infinite plays. By Lem. 1, every Player 1 action B corresponds to a unique action in A, which we denote by MB(B). We think of Player 1 as “suggesting” the action MB(B), since for every s ∈ B, we have MB(s) = MB(B). To motivate Player 2 to use the magic book, when he “accepts” the suggestion and chooses the same action, he obtains a reward of 1, and otherwise he obtains no reward. Then, Player 2’s goal in the game is to maximize the sum of rewards that he obtains.
We formalize the guarantees of the controller that we synthesize w.r.t. an optimal strategy τ for Player 2. Intuitively, the payoff that τ guarantees in the game is a lower bound on the number of times fτ agrees with the magic book in any trace of the plant. Let σ and τ be two strategies for the two players. We use payoff(σ, τ) to denote the payoff of Player 2 in the game. When play(σ, τ) violates φ, we set payoff(σ, τ) = −∞, thus Player 2 first tries to ensure that φ holds. If φ holds, the score is the sum of rewards in play(σ, τ). We assign a score to a policy f in a path-based manner. Let r = r0, r1, … be a trace. For every i ≥ 0, we issue a reward of 1 if f(r0 … ri) = MB(ri), and we denote by score(f, r) the sum of rewards, which represents the number of states in which f agrees with MB throughout r. The following theorem follows from Lem. 2.
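The path-based score is simple to state in code; this sketch returns None in place of the minus-infinity payoff of a spec-violating play:

```python
def agreement_score(controller_actions, mb_actions, spec_holds):
    """Score of a finite play: None (standing in for -infinity) if the
    specification is violated, otherwise the number of steps on which
    the controller's action agrees with the magic book's suggestion."""
    if not spec_holds:
        return None
    return sum(a == m for a, m in zip(controller_actions, mb_actions))
```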
###### Theorem 1.
Let τ be a Player 2 strategy that achieves the optimal payoff v = max_τ′ min_σ payoff(σ, τ′). If v ≠ −∞, then fτ is correct w.r.t. φ. Moreover, for every r ∈ L(M, fτ) we have score(fτ, r) ≥ v.
### III-C Multi-agent synthesis
In this section, we design a controller in a multi-agent setting, where traditional synthesis is unrealizable and thus does not return any controller.
For ease of presentation, we focus on two agents; the construction can be generalized to more agents in a straightforward manner. We assume that the set of actions is partitioned between the two agents, thus A = A1 ∪ A2. In each step, the agents simultaneously select actions, where for i ∈ {1, 2}, Agent i selects an action in Ai. As before, the joint action determines a probability distribution on the next state according to P. Our goal is to find a controller for Agent 1 that satisfies a given specification no matter how Agent 2 plays.
###### Example 2.
Suppose that the grid has two means of transportation: a bus (Agent 1) and a taxi (Agent 2). We are interested in synthesizing a bus controller for the specification “travel between two stations while not hitting the taxi”. If one models the taxi as an adversary, the specification is clearly not realizable: the taxi parks in one of the targets so that the bus cannot visit it without crashing into the taxi.
We assume that Agent 2 is operating according to a magic book. As in the previous section, we require a user abstraction 𝒰 of the state space, and the partition ℬ is obtained from the magic book. We construct a game arena as in Section III-A, and Player 2 wins an infinite play iff it satisfies φ.
The way the magic book is employed here is that it restricts the possible actions that Player 1 can take. Going back to the taxi and bus example, at a state, Player 1 essentially chooses how to move the taxi. Suppose the token is placed on U. Player 1 cannot choose to move the taxi in an arbitrary direction; indeed, he can choose B only when there is a state s ∈ U ∩ B, i.e., only when the magic book prescribes the corresponding taxi move for some state that is consistent with the current vertex. The following theorem is an immediate consequence of Lem. 2.
###### Theorem 2.
Let τ be a winning Player 2 strategy: for every Player 1 strategy σ, play(σ, τ) satisfies φ. Then, fτ satisfies φ against every behavior of Agent 2 that follows the magic book.
In Remark 3 we discuss the guarantees on the magic book that are needed in order to assume that Agent 2 operates according to the wizard rather than the magic book.
## IV BMC Based on Magic Books
In this section, we describe a bounded-model-checking (BMC) [11] procedure that is based on a tree-based magic book. We use our procedure in verification and as an explainability tool to increase the trustworthiness of the wizard before outputting it as the controller for the plant.
###### Definition 1 (Bounded model checking).
Given a plant M with state space S, a specification φ, a bound k, and a policy f, output a run of length k in L(M, f) ∩ φ if one exists.
BMC reduces to the satisfiability problem for satisfiability modulo theories (SMT), where the goal is, given a set of constraints over a set of variables, either to find a satisfying assignment or to return that none exists. We are interested in solving BMC for wizards, i.e., finding a path in L(M, Wiz) ∩ φ. However, as can be seen in the proof of Thm. 3 below, the SMT program needs to simulate the execution of the wizard, thus it becomes both large and challenging (due to the max operator) for standard SMT solvers. Instead, we solve BMC for magic books to find a path in L(M, MB) ∩ φ. Since MB is a good approximation of Wiz, such a path is often also a path of the wizard.
###### Theorem 3.
BMC reduces to SMT. Specifically, given a plant M with state space S, a specification φ, a policy given as a tree-based magic book MB, and a bound k, there is an SMT formula whose satisfying assignments correspond to paths of length k in L(M, MB) ∩ φ.
###### Proof.
The first steps of the reduction are standard. Consider a policy f and a bound k. The variables consist of state variables s_0, …, s_k and action variables a_0, …, a_{k−1}. We add constraints so that, for a satisfying assignment, each s_i corresponds to a state in S and each a_i corresponds to an action in A. Moreover, for 0 ≤ i < k, the constraints ensure that s_{i+1} ∈ supp(s_i, a_i), thus we obtain a path in M.
We consider a specification φ that can be represented as an SMT constraint over these variables and add constraints so that the path we find is in φ.
The missing component from this construction ensures that the action a_i is indeed the action that MB selects at state s_i. For that, we need to simulate the operation of MB using constraints. Suppose first that MB is represented using a single decision tree T. For a path p in T, recall that θ_p is the predicate that is satisfied by every state s such that path(s) = p. Moreover, recall that each θ_p is a predicate over the state variables. For 0 ≤ i < k, we create a copy of θ_p using the variables of s_i so that it is satisfied iff path(s_i) = p. For an action a, let paths(a) denote the set of paths in T that end in the action a. We add a constraint that states that if some θ_p with p ∈ paths(a) is true at time i, then a_i = a. Finally, when MB is a forest, we need to count the number of trees that vote for each action and set a_i to equal the action with the highest count. ∎
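The per-path constraints of the proof are generated from an enumeration of the tree's root-to-leaf paths; a sketch of that enumeration, over an illustrative nested-tuple tree encoding:

```python
def tree_paths(tree, prefix=()):
    """Enumerate (predicate-polarity sequence, leaf action) pairs, one
    per root-to-leaf path; in the BMC encoding each pair becomes the
    implication "path predicate holds at time i => a_i = action".
    A tree is ("leaf", action) or ("node", predicate, yes, no)."""
    if tree[0] == "leaf":
        return [(prefix, tree[1])]
    _, pred, yes, no = tree
    return (tree_paths(yes, prefix + ((pred, True),)) +
            tree_paths(no, prefix + ((pred, False),)))
```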
###### Remark 2.
(The size of the SMT program). In the construction in Theorem 3, as is standard in BMC, we use roughly k copies of the transition constraints, where the size of each copy depends on the representation size of P. In addition, we need a constraint that represents φ, which in our examples is small. The main bottleneck is the constraints that represent MB. Each path appears exactly once in a constraint, and we use k copies of these constraints, thus their total size is O(k · ℓ), where ℓ is the total number of paths in the trees of the forest.
###### Example 3.
Recall the description of the plant in Example 1 in which a taxi travels in a grid. We illustrate how to simulate the plant using an SMT program. A state at time i is a tuple of variables that take integer values in the range of the grid coordinates: the position of the taxi at time i is (x_i, y_i) and the position of Passenger j is (x_i^j, y_i^j). The transition function is represented using constraints. For example, the constraint a_i = up → y_{i+1} = y_i + 1 means that when the action up is taken, the taxi moves one step up. The constraint (x_i, y_i) ≠ (x_i^j, y_i^j) → (x_{i+1}^j, y_{i+1}^j) = (x_i^j, y_i^j) means that if Passenger j is not collected by the taxi at time i, its location should not change. A key point is that when Passenger j is collected, we do not constrain its new location, thus we replace the randomness in P with nondeterminism.
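To make the encoding concrete without committing to a particular solver API, here is a sketch that emits SMT-LIB 2 text for a k-step unrolling of the taxi's x-coordinate only; the variable names, the single `right` action, and the grid bound are illustrative simplifications of the full encoding.

```python
def bmc_constraints(k, grid=5):
    """Emit SMT-LIB 2 constraints for a k-step unrolling: integer
    position variables stay on the grid, and a Boolean per step chooses
    between moving right and staying put (a heavily simplified slice of
    the full encoding, which also covers y, passengers, and the tree)."""
    lines = ["(set-logic QF_LIA)"]
    for t in range(k + 1):
        lines.append(f"(declare-const x{t} Int)")
        lines.append(f"(assert (and (<= 0 x{t}) (< x{t} {grid})))")
    for t in range(k):
        lines.append(f"(declare-const right{t} Bool)")
        # right moves the taxi one step on the x axis, else x is unchanged
        lines.append(f"(assert (ite right{t} (= x{t+1} (+ x{t} 1)) (= x{t+1} x{t})))")
    return "\n".join(lines)
```

Feeding the emitted text to an SMT solver such as Z3, together with constraints for the specification, yields witness traces as satisfying assignments.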
#### Verification
In verification, our goal is to find violations of the wizard for a given specification.
###### Example 4.
We show how to express the specification “the taxi never enters a loop in which no passenger is collected” as an SMT constraint based on the construction in Example 3. We simplify slightly and use a constraint stating that the taxi returns to its initial position, which closes a cycle at the end of the trace. We add a second constraint stating that all passengers stay in their original positions throughout the trace. In Fig. 3 (right), we depict a lasso-shaped trace that witnesses a violation of this property.
###### Remark 3 (Soundness).
The benefit of using magic books is scalability, and the drawback is soundness. For example, when the SMT formula is unsatisfiable for a bound k, this only means that there are no violations of the magic book of length k; there can still be a violation of the wizard. To regain soundness, we would need guarantees on the relation between the magic book and the wizard. An example of a guarantee is that the two functions coincide, thus for every state s, we have MB(s) = Wiz(s). However, if at all possible, we expect such a strong guarantee to come at the expense of a huge magic book, thus bringing us back to square one. We are more optimistic that one can find small magic books with approximation guarantees. For example, one can define a magic book as a function that “suggests” a set of actions rather than only one, and require that for every state s, we have Wiz(s) ∈ MB(s). Such guarantees suffice to regain soundness both in BMC and for the synthesis application in Section III-C. We leave obtaining such magic books for future work.
#### Explainability
We illustrate how BMC can be used as an XAI tool. BMC allows us to find corner-case traces that are hard to find in manual simulation, and the individual traces can serve as explanations. For example, in Fig. 3 (left), we depict a trace that is obtained using BMC for the property “the first passenger to be collected is not the closest”.
A second application of BMC is based on gathering a large number of traces. We construct a small human-readable model that explains the decision procedure of the wizard. We note that while the magic book is already a small model that approximates the wizard, its size is way too large for a human to reason about. For us, a small model is a single shallow decision tree. Moreover, the magic book is a “local” function (its type is from states to actions), whereas a human is typically interested in “global” behavior, e.g., which passenger is collected next as opposed to which action to take next.
We rely on the user to supply specifications φ1, …, φm. We gather a dataset that consists of pairs of the form (s, i), for 1 ≤ i ≤ m, where s is such that when the plant starts at configuration s under the control of the wizard, then φi is satisfied. To find many traces that satisfy φi, we iteratively call an SMT solver. Suppose it finds a trace. Then, before the next call, we add a constraint to the SMT program so that the same trace is not found again. In practice, the amortized running time of this simple algorithm is low. One reason is that generating the SMT program takes considerable time, even compared to the time it takes to solve it. This running time is essentially amortized over all executions since the running time of adding a single constraint is negligible. In addition, the SMT solver learns the structure of the SMT program and uses it to speed up subsequent executions.
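The enumeration loop is solver-agnostic and can be sketched as follows; `solve` and `block` are assumed wrappers around the SMT solver, returning the next satisfying trace and asserting a blocking constraint, respectively.

```python
def enumerate_traces(solve, block, n):
    """Collect up to n distinct traces: after each satisfying trace is
    found, a blocking constraint is added so it is not found again;
    stop early if the constraints become unsatisfiable (solve -> None)."""
    traces = []
    for _ in range(n):
        trace = solve()
        if trace is None:
            break
        traces.append(trace)
        block(trace)
    return traces
```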
###### Example 5.
Suppose we are interested in understanding if and how the wizard prioritizes collecting passengers. We consider the specifications “Passenger i is collected first”, for 1 ≤ i ≤ k. Each can be formalized using the following constraints. A constraint stating that Passenger j stays in place throughout the whole trace means that Passenger j is not collected, and we add such a constraint for all but one passenger. A constraint stating that the final position of Passenger i differs from its initial position means that Passenger i must have been collected at least once. In Fig. 4 we depict a tree that we extract using these specifications.
## V Experiments
#### Setup
We illustrate our approach using an implementation of the case study that is our running example: a taxi traveling on a grid and collecting passengers. We set the size of the grid and the number of passengers so that the state space is very large. All simulations were programmed in Python and run on a personal computer with an Intel Core i3-4130 3.40GHz CPU and 7.7 GiB memory, running Ubuntu.
#### Training a wizard using deep RL
The plant state in our training is a tuple that, for each passenger, contains the distances to the taxi on both axes. When the taxi collects a passenger, the agent receives a positive reward. Multi-objective RL is notoriously difficult because the agent gets confused by the various targets. We thus found it useful to add a “hint” when the taxi does not collect a passenger: at time t, if no passenger is collected, the agent receives a small negative reward based on the Manhattan distances between the taxi and the passengers at time t. We use the Python library Keras [33] and the “Adam” optimizer [34] to minimize mean-squared-error loss. We train a NN with two hidden layers that use a ReLU activation function, and a linear output layer. Each episode consists of a fixed number of steps, and we train for a large number of episodes.
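The wizard's forward pass (two ReLU hidden layers and a linear output with one Q-estimate per action) can be sketched in plain Python; layer sizes and weights below are illustrative, not the trained values.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def dense(v, weights, bias):
    # one fully-connected layer: weights is a list of rows, one per output
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, bias)]

def q_network(state, params):
    """Forward pass of the wizard's architecture: two ReLU hidden
    layers followed by a linear output layer, returning one
    Q-estimate per action."""
    h1 = relu(dense(state, *params[0]))
    h2 = relu(dense(h1, *params[1]))
    return dense(h2, *params[2])
```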
#### Extracting the magic book
We extract configuration-action pairs from episodes of the trained agent. We use Python’s scikit-learn library [35] to fit one of the tree-based classification models to the obtained dataset. Table I depicts a comparison between the models and the wizard on a set of evaluation episodes. Performance refers to the total number of passengers collected in a simulation. It is encouraging that small forests of shallow trees approximate the wizard well.
#### Synthesis
The specification we consider is “reach a gas station every k time steps”, for some bound k. Our controllers exhibit performance that is not too far from the wizard’s: see Table I for the performance of synthesis based on different tree models (taking into account that the wizard does not visit the gas station at all). We view this experiment as a success: we achieve our goal of synthesizing a correct controller that achieves favorable performance. We point out that, since traditional synthesis does not address performance, a controller that it produces visits the gas station every k steps but does not collect any passengers.
#### Comparing with a shield-based approach
A shield-based controller [13, 14] consists of a shield that uses a wizard as a black box: given a plant state s, the wizard is run to obtain a = Wiz(s), then a is fed to the shield to obtain an action a′, which is issued to the plant. We demonstrate how our synthesis procedure manages to open up the black-box wizard. In Fig. 5, we depict the result of an experiment in which we add a wall to the grid that was not present in training. Crossing the wall is inherently impossible for the shield-based controller since, when the wizard suggests an action that is not allowed, the best the shield can do is choose an arbitrary substitute. Our controller, on the other hand, intuitively directs the taxi to areas of the grid where the magic book is “certain” of its actions (a notion which is convenient to define when the magic book is a forest). Since these positions are often located near passengers, the taxi manages to cross the wall.
#### BMC: Scalability and success rate
We use the standard state-of-the-art SMT solver Z3 [12] to solve BMC. In Table II, we consider the following specifications for XAI: “Passenger i is collected first and at time k, even though it is not the closest”, where k is the bound for BMC and i ranges over the passengers. We perform the following experiment several times and average the results. We run BMC to collect a batch of traces. We depict the amortized running time of finding a trace, i.e., the total running time divided by the number of traces. Recall that the traces witness the magic book. We count the number of traces that also witness the wizard, and depict their ratio. We find both results encouraging: finding a dataset of non-trivial witness traces of the wizard is feasible.
#### Wizard-based BMC
We implemented a BMC procedure that simulates the wizard instead of the magic book and ran it using Z3. We observe extremely poor scalability: an extremely modest SMT query to find a short path timed out after many minutes, and even when the initial state is fixed, the running time is measured in minutes!
#### BMC: Verification and Explainability
For verification, we consider the specifications “the taxi never hits the wall” and “the taxi never enters a loop in which no passenger is collected”. Even though violations of these specifications were not observed in numerous simulations, we find counterexamples for both (see a depiction for the second property in Fig. 3 on the right). We illustrate explainability with the property “the closest passenger is not collected first” by depicting an example trace for it in Fig. 3 on the left. In Fig. 4, we depict a decision tree, obtained from a dataset of examples gathered as described above, as an attempt to explain when the wizard chooses to collect a given passenger first.
### V-A Discussion
In this work, we address the controller-design problem using a combination of techniques from formal methods and machine learning. The challenge in this combination is that formal methods struggle with neural networks (NNs). We bypass this difficulty using a novel procedure that, instead of reasoning about the NN that deep RL trains (the wizard), extracts from the wizard a small model that approximates its operation (the magic book). We illustrate the advantage of using the magic book by tackling problems that are out of reach for either formal methods or machine learning separately. Specifically, to the best of our knowledge, we are the first to incorporate a magic book in a reactive synthesis procedure, thereby synthesizing a stand-alone controller with performance considerations. Second, we use a magic-book-based BMC procedure as an XAI tool to increase the trustworthiness of the wizard.
We list several directions for future work. We find it an interesting and important problem to extract magic books with provable guarantees (see Remark 3). Another line of future work is finding other domains in which magic books can be extracted and other applications for magic books. One concrete domain is speeding up solvers (e.g., SAT, SMT, QBF, etc.). Recently, there have been attempts at replacing traditionally engineered heuristics with learned heuristics (e.g., [36, 37]). This approach was shown to be fruitful in [38], where an RL-based SAT solver performed fewer operations than a standard SAT solver. However, at runtime, the standard SAT solver has the upper hand since the bottleneck becomes the calls to the NN. We find it interesting to use a magic book instead of a NN in this domain so that a solver would benefit from using a learned heuristic without paying the cost of a high runtime.
Our synthesis procedure is based on an abstraction of the plant. In the future, we plan to investigate an iterative refinement scheme for the abstraction. Refinement in our setting is not standard since it involves a quantitative game (e.g., [39]) and, more interestingly, the inaccuracy introduced by the magic book and the wizard. Refinement can be applied both to the process of extracting the decision tree from the NN and to improving the performance of the wizard through training.
## References
• [1] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. A. Riedmiller, “Playing atari with deep reinforcement learning,” CoRR, vol. abs/1312.5602, 2013. [Online]. Available: http://arxiv.org/abs/1312.5602
• [2] D. Ernst, P. Geurts, and L. Wehenkel, “Tree-based batch mode reinforcement learning,” JMLR, vol. 6, no. Apr, pp. 503–556, 2005.
• [3] G. E. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” CoRR, vol. abs/1503.02531, 2015. [Online]. Available: http://arxiv.org/abs/1503.02531
• [4] N. Frosst and G. Hinton, “Distilling a neural network into a soft decision tree,” 2017.
• [5] D. Shriver, D. Xu, S. Elbaum, and M. B. Dwyer, “Refactoring neural networks for verification,” arXiv preprint arXiv:1908.08026, 2019.
• [6] IEEE Access, vol. 6, pp. 52138–52160, 2018.
• [7] A. Pnueli and R. Rosner, “On the synthesis of a reactive module,” in Proc. 16th POPL, 1989, pp. 179–190.
• [8] R. Bloem, K. Chatterjee, T. A. Henzinger, and B. Jobstmann, “Better quality in synthesis through quantitative objectives,” in Proc. 21st CAV, 2009, pp. 140–156.
• [9] A. Bohy, V. Bruyère, E. Filiot, and J. Raskin, “Synthesis from LTL specifications with mean-payoff objectives,” in Proc. 19th TACAS, 2013, pp. 169–184.
• [10] S. Almagor, O. Kupferman, J. O. Ringert, and Y. Velner, “Quantitative assume guarantee synthesis,” in Proc. 29th CAV, 2017, pp. 353–374.
• [11] A. Biere, A. Cimatti, E. M. Clarke, O. Strichman, and Y. Zhu, “Bounded model checking,” Advances in Computers, vol. 58, pp. 117–148, 2003.
• [12] L. M. de Moura and N. Bjørner, “Z3: an efficient SMT solver,” in Proc. 14th TACAS 2008, ser. LNCS, vol. 4963. Springer, 2008, pp. 337–340. [Online]. Available: https://doi.org/10.1007/978-3-540-78800-3_24
• [13] B. Könighofer, M. Alshiekh, R. Bloem, L. Humphrey, R. Könighofer, U. Topcu, and C. Wang, “Shield synthesis,” FMSD, vol. 51, no. 2, pp. 332–361, 2017.
• [14] G. Avni, R. Bloem, K. Chatterjee, T. A. Henzinger, B. Könighofer, and S. Pranger, “Run-time optimization for learned controllers through quantitative games,” in Proc. 31st CAV, 2019, pp. 630–649.
• [15] O. Bastani, Y. Pu, and A. Solar-Lezama, “Verifiable reinforcement learning via policy extraction,” in Proc. 31st NeurIPS, 2018, pp. 2499–2509.
https://www.x-mol.com/paper/1216928257247301632
Physical Review Letters (IF 8.385)
K. S. Babu, P. S. Bhupal Dev, Sudip Jana, and Yicong Sui
We propose a new way to probe non-standard interactions (NSI) of neutrinos with matter using the ultra-high energy (UHE) neutrino data at current and future neutrino telescopes. We consider the Zee model of radiative neutrino mass generation as a prototype, which allows two charged scalars – one $SU(2)_L$-doublet and one singlet, both being leptophilic, to be as light as 100 GeV, thereby inducing potentially observable NSI with electrons. We show that these light charged Zee-scalars could give rise to a Glashow-like resonance feature in the UHE neutrino event spectrum at the IceCube neutrino observatory and its high-energy upgrade IceCube-Gen2, which can probe a sizable fraction of the allowed NSI parameter space.
https://www.physicsforums.com/threads/combining-distributions.246142/
# Combining distributions
1. Jul 21, 2008
### CRGreathouse
Warning: I've only taken one stats class, back as an undergrad (though it was a very fast-paced class designed for mathematicians). My understanding of all things statistical is consequently weak.
I'm trying to design a program to accurately time functions. The functions themselves are of no importance here, only the timing code.
At the moment my program runs the test suite (10 million runs) with an empty function, to measure overhead. It stores a fixed number of runs, 7 at the moment, then computes the average and standard deviation of the overhead. This lets me construct a 95% confidence interval for the overhead:
$$[\mu-1.96\sigma,\mu+1.96\sigma]$$
Simple enough so far, yes? So then I time each actual function once. (I don't want to run them multiple times because the real functions, as expected, take a fair bit longer than the empty function.) At this point I make the assumption that the distribution of the timing errors of the functions is the same as that of the overhead function (which seems reasonable to me). This gives me a 95% confidence interval (under my assumption) as such:
$$[t(1-1.96\sigma/\mu), t(1+1.96\sigma/\mu)]$$
Here's the part I want help on. I combine the intervals by taking the low-end estimate for the function's speed and subtracting the high-end estimate for the overhead, to the high-end estimate for the function minus the low-end estimate for the overhead. How do I describe my confidence that this is correct? Less than 95% (errors can accumulate), more than 95% (errors are likely to cancel, maybe like sqrt(2) rather than 2?), or just 95%? Is there a better way to calculate this? Have I made mistakes or bad assumptions in my analysis?
2. Jul 21, 2008
### Focus
I am somewhat confused about what you are trying to do. Maybe if you write it out in maths language I could be of more help. From what I understand you are trying to get a 95% confidence interval for the mean (of some sort). Pardon me if I am wrong here, but I am thinking that you wish to approximate $$\mu$$ for $$N(\mu,\sigma^2)$$ from a set of data by taking the mean as an estimate for $$\mu$$. The confidence interval you have makes sense only if you write it as $$[\hat{\mu}-1.96\sigma,\hat{\mu}+1.96\sigma]$$. This assumes that you know $$\sigma$$, which I highly doubt. A confidence interval when you also have to estimate $$\sigma$$ is given by $$[\hat{\mu}-t_{n-1,0.025} \hat{\sigma},\hat{\mu}+t_{n-1,0.025} \hat{\sigma}]$$, where the t is from Student's t-distribution.
Don't quite understand the rest of it but I hope this helps.
Warning: I found statistics quite boring, I may be trying to blacken its name.
3. Jul 21, 2008
### CRGreathouse
I'm being quite lax in my notation, forgive me. I wrote $\mu$ for $\hat{\mu}$ and $\sigma$ for $\hat{\sigma}.$ These figures come from a small sample of a potentially infinite data source.
Example:
I have, say, five measurements for the overhead:
[0.5, 0.6, 0.4, 0.5, 0.35]
which have average 0.47 and standard deviation 0.0975. This gives a 95% confidence interval of
[0.279, 0.661]
for the true value of the overhead. (This is the sample mean plus/minus the standard deviation times 1.96; the 1.96 comes from a z-table.)
Now I don't actually want this interval. What I want is to subtract the true value of the overhead from a set of measurements and get the measurements less overhead. But since I don't have that, I subtract the range from the measurements:
$$[m-0.661,m-0.279]$$
But of course I don't actually have the true value for the measurements themselves; I have only a single measurement for each. So I make the assumption in my first post which lets me estimate the confidence interval for the measurements. First I construct the relative error:
$$e_\text{rel}\approx1.96\hat{\sigma}/\hat{\mu}\approx1.96\cdot0.0975/0.47\approx0.406$$
Then I form the interval about the measurement:
$$[m(1-e_\text{rel}),m(1+e_\text{rel})]$$
But this assumes, in effect, that the worst case of the overhead error corresponds to the worst case of the measurement error, which doesn't seem likely. So I seem to think that the actual confidence of my final result is more than 95%. I'd like a way to calculate my confidence in this final interval -- in this case, so I can reduce the size of my final interval by dropping the confidence from perhaps 99% to 95%.
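For concreteness, here is a small Python sketch (not part of the original thread) that reproduces the arithmetic above from the five sample values; the single measurement `m` is a made-up placeholder:

```python
import statistics

# Overhead measurements from the example above
overhead = [0.5, 0.6, 0.4, 0.5, 0.35]

mu = statistics.mean(overhead)        # 0.47
sigma = statistics.stdev(overhead)    # sample sd, ~0.0975

z = 1.96  # 95% normal quantile (a t quantile would be safer for n = 5)
ci = (mu - z * sigma, mu + z * sigma)  # ~ (0.279, 0.661)

e_rel = z * sigma / mu                # ~0.406
m = 3.0  # a single raw measurement (hypothetical value)
m_ci = (m * (1 - e_rel), m * (1 + e_rel))
print(ci, m_ci)
```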
4. Aug 1, 2008
### Focus
If you want to subtract the true value of the overhead then surely your CI should just be N(0,k). I'm still really confused about what you are trying to do, so excuse me. You should also use $$[\hat{\mu}-t_{n-1,0.025} \hat{\sigma},\hat{\mu}+t_{n-1,0.025} \hat{\sigma}]$$ to compute your CI, as you are estimating sigma^2 as well. I also have no idea what an overhead is but it sounds quite fancy, good luck with it!
5. Aug 1, 2008
### CRGreathouse
Nothing fancy. I'm timing a certain process for (say) ten million iterations, and there is time taken up by the iterations themselves (and I just want to measure the time of the process). so I run a 'do-nothing' process in the same loop, and that's my overhead. The actual recorded time should be the time of the process (ten-millionfold) plus the time of the overhead. But with measurement errors, that's hard to get right -- sometimes the process is fast enough that the overhead dominates the runtime.
6. Aug 1, 2008
### Focus
Well then you should measure the overhead and the process, which means your error for the process without the overhead is (given that they are independent) $$N(\hat{\mu_1}-\hat{\mu_2},\hat{\sigma_1}^2+\hat{\sigma_2}^2)$$. Be sure to use Student's t-distribution when calculating the CI, as the extra uncertainty from estimating variances is accounted for in that.
Must be quite boring running do-nothings all day. I hope they are paying you well for this :D.
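A sketch of that combination in Python (the measurement numbers here are hypothetical; only the independence formula comes from the post above):

```python
import math

# Hypothetical statistics: measurement (process + overhead) and overhead alone
mu_m, s_m = 3.20, 0.15
mu_o, s_o = 0.47, 0.0975

# For independent errors, the means subtract and the variances add
mu_diff = mu_m - mu_o
s_diff = math.sqrt(s_m**2 + s_o**2)

z = 1.96  # replace with a Student's t quantile for small samples
ci = (mu_diff - z * s_diff, mu_diff + z * s_diff)
print(ci)
```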
https://moodle.cs.pdx.edu/mod/page/view.php?id=487
## Macros
• Idea as old as computing; map from text to text using textual "functions"
• Famous macro systems: LISP, M4, TeX, CPP, Scheme
• Often a "preprocessor" like CPP: source-to-source
• Modern macro systems are a bit lower-level
## Rust Macros
• Mappings from parsed source tokens to source tokens
• Two kinds:
• "Declarative", which use rules and matching: e.g. println!()
• "Procedural", which call Rust functions with token trees: e.g. #[derive]
• Let's talk first about declarative macros.
## Introducing A Macro
• macro_rules! itself looks / acts like a macro
• Argument is a sequence of rules
• Each rule has a LHS that is a token pattern to match, and a RHS that is tokens to rewrite using the match
• Both sides are lexed by the compiler: you can't use arbitrary text
• examples/debug-macro.rs
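As a sketch (the file examples/debug-macro.rs itself is not reproduced here), a declarative macro with two rules might look like this; which arm fires depends on which token pattern matches:

```rust
// Hypothetical example, not the lecture's actual debug macro.
macro_rules! tagged {
    // one argument: use a default tag
    ($msg:expr) => {
        format!("[debug] {}", $msg)
    };
    // two arguments: caller supplies the tag
    ($tag:expr, $msg:expr) => {
        format!("[{}] {}", $tag, $msg)
    };
}

fn main() {
    assert_eq!(tagged!("starting"), "[debug] starting");
    assert_eq!(tagged!("warn", "low disk"), "[warn] low disk");
    println!("ok");
}
```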
## Rules Run In Order
• The macro rules match from top to bottom. The first matching rule is chosen
• A rule may suffer from a type error: the pattern matches syntactically, but the expanded code has the wrong type. If this happens, compilation will fail right there
## Macro Bugs
• Double-expansion is dangerous, as with CPP. examples/square-macro.rs
• Macros are just tokenized, so weird errors in the macro rule bodies won't be caught at macro expansion time -- they will be caught at code compile time. examples/macro-body-bug.rs
## Macro Debugging
• log_syntax!() will print its arguments to the terminal at compile time
• rustc -Z unstable-options --pretty expanded or the cargo-expand program can be used to show the preprocessed program as text
## Repetition and Condition
• Powerful, but easy to get wrong. examples/debug-macro-rep.rs
• Varargs is 70% of the reason for Rust macros
## Rules Can Be Recursive
• Note that our debug! example expands eprintln!. It can also expand itself, either directly or indirectly. examples/macro-nargs.rs
• Note that this expansion is at compile time: the source code can get huge and take a long time to generate and compile
• There is an expansion recursion depth limit of 64 to prevent runaway macros from overrunning the compiler stack. The depth limit can be increased with #![recursion_limit = "256"] or something similar
• #![feature(trace_macros)] can be useful here for debugging expansions
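For example, a recursive variadic macro in the spirit of the nargs example (a sketch, not the lecture's actual file) might compute the maximum of its arguments, with the single-argument rule as the base case:

```rust
macro_rules! max_of {
    // base case: one expression
    ($x:expr) => { $x };
    // recursive case: compare the head against the max of the tail
    ($x:expr, $($rest:expr),+) => {{
        let a = $x;
        let b = max_of!($($rest),+);
        if a > b { a } else { b }
    }};
}

fn main() {
    assert_eq!(max_of!(3), 3);
    assert_eq!(max_of!(1, 5, 2), 5);
    println!("ok");
}
```

Note that the expansion happens entirely at compile time: `max_of!(1, 5, 2)` becomes nested blocks before type checking runs.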
## More Facilities
• Lots of compiler builtins, e.g. line!(). See the book for details
• Lots of "fragment types", e.g. ident, ty, tt
• A tt fragment is special: it matches any "token tree" the Rust compiler can build. This is either a list of stuff inside some kind of outer brackets, or it's a single token of arbitrary kind. examples/macro-tt.rs
## Scope Stuff
• Local variables and arguments created inside a macro "cannot escape": they are in a different namespace and thus "hygienic"
• This won't compile
macro_rules! make_point {
    ($x:expr, $y:expr, $t:ty) => (let x: $t = $x; let y: $t = $y;)
}
fn main() {
    make_point!(3, 2, u32);
    println!("{} {}", x, y);
}
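One workaround (an illustrative sketch): accept the binding names as `ident` fragments, so the variables are created under names the caller chose, in the caller's scope:

```rust
macro_rules! make_point {
    ($px:ident, $py:ident, $x:expr, $y:expr, $t:ty) => {
        let $px: $t = $x;
        let $py: $t = $y;
    };
}

fn main() {
    make_point!(x, y, 3, 2, u32);
    println!("{} {}", x, y); // now compiles: x and y are caller-named bindings
    assert_eq!((x, y), (3u32, 2u32));
}
```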
• Making macros visible to another crate requires #[macro_export] per-macro
• Macro import is controlled by normal module import rules
## This is a bit dated, but otherwise great: Little Book of Rust Macros
Last modified: Tuesday, 25 May 2021, 1:32 AM
https://www.educative.io/answers/how-to-solve-the-combinatoric-selections-problem
# How to solve the combinatoric selections problem
Combinatoric selections is problem #53 on Project Euler. The problem statement is as follows.
There are exactly ten ways of selecting a combination of three from five numbers, 12345:
123, 124, 125, 134, 135, 145, 234, 235, 245, and 345
In combinatorics, we use the notation ${5 \choose 3} = 10$.
In general, ${n \choose r} = \frac{n!}{r!(n-r)!}$, where $r \le n$. $n! = n \times (n-1) \times ... \times 3 \times 2 \times 1$, and $0! = 1$.
It is not until $n=23$ that a value exceeds one million: ${23 \choose 10} = 1144066$.
How many not necessarily distinct values of ${n \choose r}$ for $1 \le n \le 100$ are greater than one million?
## C++ solution
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    unsigned long long count = 0;
    int n = 100;                    // the problem asks about 1 <= n <= 100
    unsigned long long k = 1000000; // threshold of one million

    vector<vector<unsigned long long>> pascalsTriangle(n + 1);
    for (int i = 0; i <= n; i++) {
        for (int j = 0; j <= i; j++) {
            if (i == 0 || j == i || j == 0) {
                pascalsTriangle[i].push_back(1);
            }
            else {
                pascalsTriangle[i].push_back(pascalsTriangle[i-1][j-1] + pascalsTriangle[i-1][j]);
                if (pascalsTriangle[i][j] >= k) {
                    count++;
                    pascalsTriangle[i][j] = k; // cap the value to prevent overflow
                }
            }
        }
    }
    cout << count << endl;
    return 0;
}
## Explanation
The problem is quite simple. All we are required to do is calculate ${n \choose r}$ for $2 \le n \le N$ and return the count of the number of values above a certain threshold, $K$.
The naive approach is to calculate the value of ${n \choose r}$ using the formula provided in the problem statement. However, that approach, while correct, is slow and inefficient. Moreover, it suffers from integer overflows when calculating the factorials and can give inaccurate results.
A much better approach is to generate Pascal’s triangle. This method is faster because it only uses the addition operation instead of having to calculate large factorials and then perform multiplication and division operations on them.
A visual representation of the Pascal's triangle inside the 2-D vector
We generate Pascal’s triangle in the main for loop. There are three conditions for which we fill a slot with $1$:
• If the row index is 0, meaning we are at the topmost block.
• If the column index is 0, meaning we are at the beginning of a row.
• If the column and row index are equal, meaning we are at the end of a row.
We deal with these three conditions in the following code snippet:
if(i == 0 || j == i || j == 0){
pascalsTriangle[i].push_back(1);
}
For the rest of the slots, we use the slots above them and add them together to get the value of the new block. The slots that we use are the ones directly above the new block, at position [row - 1][columns] and the one adjacent to it at position [row - 1][columns - 1].
How the next number is generated in the Pascal Triangle
The most important part is in the last if condition. We check if the new number is greater than or equal to $K$, and if so, we increment the count. An important step is to set the number to $K$ as well. This will prevent integer overflow errors for large values of $N$.
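As a cross-check (not part of the original solution), Python's exact integer binomials reproduce the same counts without any overflow concerns; `math.comb` requires Python 3.8+:

```python
from math import comb

def count_over(limit, max_n):
    """Count (not necessarily distinct) values of C(n, r) above `limit`
    for 1 <= n <= max_n."""
    return sum(1 for n in range(1, max_n + 1)
                 for r in range(n + 1)
                 if comb(n, r) > limit)

# Row 23 is the first row with values over one million: C(23, 10..13)
print(count_over(10**6, 22))  # 0
print(count_over(10**6, 23))  # 4
```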
https://dispatches.readthedocs.io/en/main/workflow/grid_surrogates.html
# Surrogate Models for Grid Outcomes
## 1. The Steady-State Co-Optimization with Market Interactions
We developed two surrogate model architectures to map market input parameters to market outputs. Each surrogate takes 8 inputs from the production cost model (PCM) Prescient, outlined in the table below. The data for training the surrogates come from the Prescient sensitivity analysis.
| Input | Description | Units |
| --- | --- | --- |
| x1 | Maximum Designed Capacity (Pmax) | MW |
| x2 | Minimum Operating Multiplier | – |
| x3 | Ramp Rate Multiplier | – |
| x4 | Minimum Up Time | hr |
| x5 | Minimum Down Multiplier | – |
| x6 | Marginal Cost | $/MWh |
| x7 | No Load Cost | $/hr@Pmax |
| x8 | Representative Startup Cost | $/MW capacity |

| Output | Description | Units |
| --- | --- | --- |
| y1 | Annual Revenue | MM$ |
| y2 | Annual Number of Startups | # |
| yz | Annual Hours Dispatched in zone z | hr |
Market revenue y1 is a surrogate function of the bid parameters, x, which correspond to the data that each individual resource communicates to the wholesale electricity market. y2 approximates the number of startups of the generator during the simulation time periods. yz is the surrogate for the frequency of each dispatch scenario; we use eleven total zones to represent generator power output scaled by the nameplate capacity (maximum power output). The zones consist of an ’off’ state and ten power outputs between the minimum and maximum output of the generator, i.e., 0-10%, 10-20%, …, 90-100%.
## 2. ALAMO Surrogate Models
We use ALAMO (version 2021.12.28) (https://idaes-pse.readthedocs.io/en/1.4.4/apps/alamopy.html) to train algebraic surrogates consisting of a linear combination of nonlinear basis functions $X_j$ with regressed coefficients $\beta_j$ for each index $j$ in the set $B$:
$y_{alamo} = \sum_{j \in B} \beta_j X_j(x)$
For training, ALAMO considers monomial and binomial basis functions with up to 15 total terms with power values of 1, 2, and 3. We use the Bayesian Information Criterion (BIC) implemented in ALAMO to select the best algebraic surrogate using enumeration mode. In total, we train thirteen surrogate models using the ALAMO version accessible through the IDAES-PSE interface: revenue (one), number of startups (one), and one surrogate for each zone (eleven).
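The ALAMO surrogate form above is just a weighted sum of basis evaluations. A minimal illustration (the basis functions and coefficients below are made up, not a trained DISPATCHES model):

```python
# Hypothetical monomial/binomial basis and regressed coefficients
basis = [lambda x: x[0], lambda x: x[0] ** 2, lambda x: x[0] * x[1]]
beta = [1.5, -0.2, 0.7]

def alamo_surrogate(x):
    # y = sum_j beta_j * X_j(x)
    return sum(b * X(x) for b, X in zip(beta, basis))

print(alamo_surrogate([2.0, 3.0]))  # 1.5*2 - 0.2*4 + 0.7*6 ~= 6.4
```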
Three ALAMO surrogate models are trained in ‘train_nstartups_idaes.py’, ‘train_revenue_idaes.py’ and ‘train_zones_idaes.py’. The input training data can be read in or simulated using available Python packages, and 1/3 of the training data are withheld for testing the model. The data are normalized before being fed to the trainer. There are no other arguments needed to specify the training. ALAMO solves ordinary least squares regression problems and generates the output results in json files. (The ALAMO training options are set to defaults in ‘train_nstartups/revenue/zones.py’.) There will be three output json files. The ‘alamo_nstartups/revenue/zones.json’ stores the coefficients of the monomial and binomial basis functions. The ‘alamo_parameters_nstartups/revenue/zones.json’ saves scaling and training bounds for the input data. The ‘alamo_nstartups/revenue/zones_accuracy.json’ has the computed R2 metrics.
## 3. Neural Network (NN) Surrogate Models
Feed-forward neural network (NN) surrogate models are trained.
$x = z_0$
$z_k = \sigma(W_k z_{k-1} + b_k), \quad k \in \{1, 2, \ldots, K-1\}$
$y_{nn} = W_K z_{K-1} + b_K$
We use the ‘MLPRegressor’ package (Keras version v2.8.0, Scikit Learn version v0.24.2) with default settings to train three 2-layer neural networks. The revenue and startup surrogates contain two hidden layers, with 100 nodes in the first hidden layer and 50 nodes in the second (for the annual zone output surrogate, 100 nodes in both layers).
Three NN surrogate models are trained in ‘train_nstartups.py’, ‘train_revenue.py’ and ‘train_zones.py’. The input training data can be read in or simulated using available Python packages, and 1/3 of the training data are split off for testing the model. The data are normalized before being fed to the trainer. There are no other arguments needed to specify the training. There are two output json files and one pickle file that save the results. The ‘scikit_nstartups/revenue/zones.pkl’ stores the coefficients of the neural networks. The ‘scikit_parameters_nstartups/revenue/zones.json’ saves scaling and training bounds for the input data. The ‘scikit_nstartups/revenue/zones_accuracy.json’ has the computed R2 metrics.
The accuracy of the scikit NN surrogate models can be visualized by ‘plot_scikit_nstartups/revenue/zones.py’.
A Jupyter Notebook demonstration can be found in the following link: https://github.com/jalving/dispatches/blob/prescient_verify/dispatches/workflow/surrogate_design/rankine_cycle_case/grid_surrogate_design.ipynb
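A pure-Python sketch of the feed-forward evaluation defined by the equations in this section (the layer sizes, ReLU activation, and weights here are illustrative placeholders, not the trained DISPATCHES surrogates):

```python
def affine(W, b, z):
    # W z + b for a list-of-rows weight matrix
    return [sum(w * zj for w, zj in zip(row, z)) + bi
            for row, bi in zip(W, b)]

def nn_surrogate(x, weights, biases):
    # z_0 = x; z_k = sigma(W_k z_{k-1} + b_k) for hidden layers;
    # y = W_K z_{K-1} + b_K (linear output layer)
    z = list(x)
    for W, b in zip(weights[:-1], biases[:-1]):
        z = [max(v, 0.0) for v in affine(W, b, z)]  # ReLU hidden activation
    return affine(weights[-1], biases[-1], z)

# Tiny 2-layer example: identity hidden layer, summing output layer
W1, b1 = [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [0.0]
print(nn_surrogate([2.0, -3.0], [W1, W2], [b1, b2]))  # [2.0]
```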
## 4. Optimization with Surrogate Models
We can implement the steady-state co-optimization with market interactions from part 1 using ‘run_surrogate_alamo.py’ and ‘run_surrogate_nn.py’. The scripts formulate the optimization using Pyomo and use Python packages to add the surrogate model coefficients and input data bounds from the json and pickle files. Optionally, some surrogate inputs may be fixed (removed as optimization degrees of freedom) in the scripts. The optimization solution is stored in ‘conceptual_design_solution_alamo/nn.json’ files, which can be read by Prescient for further verification.
https://www.rdocumentation.org/packages/lsr/versions/0.5/topics/unlibrary
unlibrary
From lsr v0.5
A wrapper function to detach that removes a package from the search path, but takes a package name as input similar to library.
Usage
unlibrary(package)
Arguments
package
A package name, which may be specified with or without quotes.
Details
Unloads a package. This is just a wrapper for the detach function. However, the package argument is just the name of the package (rather than the longer string that is required by the detach function), and -- like the library function -- can be specified without quote marks. The unlibrary function does not unload dependencies, only the named package.
The name "unlibrary" is a bit of an abuse of both R terminology (in which one has a library of packages) and the English language, but I think it helps convey that the goal of the unlibrary function is to do the opposite of what the library function does.
Value
Identical to detach.
Warning
This package is under development, and has been released only due to teaching constraints. Until this notice disappears from the help files, you should assume that everything in the package is subject to change. Backwards compatibility is NOT guaranteed. Functions may be deleted in future versions and new syntax may be inconsistent with earlier versions. For the moment at least, this package should be treated with extreme caution.
See Also: library, require, detach
Examples:
library(lsr)
unlibrary( lsr )   # unload the lsr package
library( lsr )     # reload it
https://physi.wordpress.com/category/aqa-a2-unit-4/
## Fleming: Left or Right Hand?
Posted in A2 Unit 4: Magnetic Fields, AQA A2 Unit 4 by Mr A on 21 Feb 2010
Fleming, like most people, had two hands. Unlike most, he had a rule for each. But when should you use the left, and when the right?
| Left Hand | Right Hand |
| --- | --- |
| A motion occurs due to the current. To work out what direction this motion is in, use the Left Hand Rule. | A current is induced in the wire due to the applied motion. To find out in what direction this current flows, use the Right Hand Rule. |
## Moving a wire in a magnetic field
Posted in A2 Unit 4: Magnetic Fields, AQA A2 Unit 4 by Mr A on 21 Feb 2010
## Electric Fields Applet
Posted in A2 Unit 4: Electric Fields, AQA A2 Unit 4 by Mr A on 14 Feb 2010
## Electric Fields
Posted in A2 Unit 4: Electric Fields, AQA A2 Unit 4 by Mr A on 23 Jan 2010
## Gravity Wells via xkcd.com
Posted in A2 Unit 4: Gravitational Fields, AQA A2 Unit 4 by Mr A on 11 Jan 2010
## Capacitors in Series and Parallel
Posted in A2 Unit 4: Capacitors, AQA A2 Unit 4 by Mr A on 7 Dec 2009
The total effective capacitance of a group of capacitors in parallel can be found to be:
$C_{total} = C_{1} + C_{2} + C_{3}$
The total capacitance of a group of capacitors in series can be found to be:
$\frac{1}{C_{total}} = \frac{1}{C_{1}} + \frac{1}{C_{2}} + \frac{1}{C_{3}}$
Now try this worksheet on Capacitors in Series and in Parallel
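A quick numerical check of the two formulas (a Python sketch; the three 6 µF capacitors are just an example):

```python
def parallel(*caps):
    # C_total = C_1 + C_2 + C_3 + ...
    return sum(caps)

def series(*caps):
    # 1/C_total = 1/C_1 + 1/C_2 + 1/C_3 + ...
    return 1.0 / sum(1.0 / c for c in caps)

# Three 6 uF capacitors:
print(parallel(6e-6, 6e-6, 6e-6))  # ~1.8e-05 F
print(series(6e-6, 6e-6, 6e-6))    # ~2e-06 F
```

As expected, identical capacitors in series give C/n, and in parallel give nC.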
## Charging and Discharging Capacitors
Posted in A2 Unit 4: Capacitors, AQA A2 Unit 4 by Mr A on 22 Nov 2009
## Introduction to Capacitors
Posted in A2 Unit 4: Capacitors, AQA A2 Unit 4 by Mr A on 18 Nov 2009
## Brass Pendulum and Lenz’s Law
Posted in A2 Unit 4: Magnetic Fields, AQA A2 Unit 4 by Mr A on 12 Nov 2009
A potential difference is induced across the brass pendulum, and the current flows such that it opposes the motion of the pendulum (due to Lenz’s Law). Brass is not a magnetic material, so the magnet is not slowing down by attracting the brass (as shown by the fact that the pendulum with slits in is not slowed down). However, when the pendulum with slits in is swung through the magnetic field, the eddy currents (which cause the pendulum to slow) are not so free to move within it. Thus Lenz’s Law does not have as much effect; the current does not flow as much, so the motion is not opposed as much. This demonstrates Lenz’s Law, and why a laminated core is more efficient in a transformer.
## Transformers
Posted in A2 Unit 4: Magnetic Fields, AQA A2 Unit 4 by Mr A on 11 Nov 2009
Extras
Try out a transformer demo applet
TED talk on Wireless Electricity:
http://physics.stackexchange.com/questions/12215/why-isnt-pressure-used-for-flight/12216
|
# Why isn't pressure used for flight?
Why isn't pressure used for flight?
I've heard that 2L bottles can hold a pressure of up to 90 PSI safely. Since $F = PA$, if the nozzle of a pressure rocket has an area of about 4 square inches, that would be a thrust of 360 pounds!? Is there something wrong with my math, or why don't we just pressurize air and use it for flight?
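The arithmetic in the question does check out; here is a quick sanity check using the questioner's assumed 90 psi and 4 in² figures:

```python
# Sanity check of the F = P * A arithmetic in the question above.
pressure_psi = 90.0   # claimed safe pressure of a 2 L bottle
area_in2 = 4.0        # assumed nozzle area, square inches
force_lbf = pressure_psi * area_in2
print(force_lbf)      # 360.0 -- thrust in pounds-force, as claimed

# The same figure in SI units:
PSI_TO_PA = 6894.757      # pascals per psi
IN2_TO_M2 = 0.00064516    # square metres per square inch
force_n = (pressure_psi * PSI_TO_PA) * (area_in2 * IN2_TO_M2)
print(round(force_n))     # ~1601 N
```

So the instantaneous thrust figure is right; the answers address why the stored energy, not the thrust, is the obstacle.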
How are the bottles supposed to "hold" the pressure while you release it to gain the thrust? – leftaroundabout Jul 12 '11 at 20:56
A pump could supply more pressure while it's being released out the other side. It would still lose thrust but not as fast – exosuit Jul 12 '11 at 22:01
"A pump could supply more pressure while it's being released out the other side." Then you have to power the pump. If you are clever you do that by burning fuel in the pressure chamber at which point you have described a turbo-jet or fan-jet engine... – dmckee Jul 13 '11 at 2:17
As a side note, a compressed air rocket cart does make a good classroom demo. – dmckee Jul 13 '11 at 2:18
In a way it is. A jet engine will create a high pressure region in the combustion chamber and discharge it through the nozzle. It turns out this is inefficient, and it is better to use the pressure to drive turbine blades that push the cold air around the engine. Remember cold = higher density = higher momentum.
To a lesser degree a propeller will create a high pressure region behind the blades, that results in thrust.
Nice response. Internal combustion, for all its faults, is nothing more than expanding gases pushing the pistons. – Fortunato Jul 12 '11 at 22:40
The principal disadvantage is the indirect use of energy. Energy is used to compress air, which in turn provides the energy to run the rocket. Any conversion of energy between forms results in loss. Hence it's more economically efficient to burn fuel directly rather than compress gas and then release it. This idea has been used before in compressed-air cars. Also look at compressed-air energy storage.
What is "indirect" use of energy? Did You ever calculate the energy of such a 2 ltr-bottle at 90 psi? – Georg Jul 12 '11 at 21:25
@Georg the point is there is an energy cost to compress the air inside the bottle to 90 psi, which would be greater than or equal to the energy of such a bottle at 90 psi. – David Jul 12 '11 at 21:30
so you're saying it does generate a lot of thrust, but it's more efficient to just burn it rather than exert energy to compress it? – exosuit Jul 12 '11 at 21:55
@phycker I pretty much think this is not the reason. The energy per unit mass of fuel is a much more important factor for aircraft, and long range aircraft in particular. I think a compressed air system would be too heavy for it to get off the ground for most practical flights. – Alan Rominger Jul 12 '11 at 21:57
How many cubic metres of air at, say, 3000 psi do you need to fly a 747 for 5 hr? Now there's the question. I guess it won't fit in the plane – Fortunato Jul 13 '11 at 1:54
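Georg's challenge in the comments is worth taking up. A rough upper bound on the recoverable energy in such a bottle, assuming isothermal expansion (all figures approximate, for orientation only):

```python
import math

p0 = 101325.0              # ambient pressure, Pa
p1 = 90 * 6894.757 + p0    # 90 psi gauge expressed as absolute pressure, Pa
V = 2e-3                   # 2 L bottle volume, m^3

# Isothermal expansion work gives an upper bound: W <= p1 * V * ln(p1 / p0)
w_air = p1 * V * math.log(p1 / p0)
print(round(w_air))        # roughly 2.8 kJ

# Compare with burning the same 2 L volume of gasoline
# (~0.74 kg/L, ~44 MJ/kg -- rough handbook values):
w_fuel = 2 * 0.74 * 44e6
print(round(w_fuel / w_air))   # fuel carries over 20,000x more energy
```

So the pressure energy in a soda bottle is on the order of a kilojoule, which is why compressed gas alone cannot sustain flight for any useful duration.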
https://authorea.com/doi/full/10.22541/au.158316290.04584436
|
Shell equations in terms of Günter's derivatives, derived by Γ-convergence
• Roland Duduchava,
• Tengiz BUCHUKURI
Roland Duduchava
The University of Georgia
Author Profile
Tengiz BUCHUKURI
Tbilisi State University, A. Razmadze Mathematical Institute
Author Profile
Abstract
A mixed boundary value problem for the Lamé equation in a thin layer $\Omega^h = \mathcal{C}\times[-h,h]$ around a surface $\mathcal{C}$ with a Lipschitz boundary is investigated. The main goal is to find out what happens when the thickness of the layer tends to zero, $h\to0$. To this end we reformulate the BVP into an equivalent variational problem and prove that the energy functional has a $\Gamma$-limit, namely the energy functional on the mid-surface $\mathcal{C}$. The corresponding BVP on $\mathcal{C}$, considered as the $\Gamma$-limit of the initial BVP, is written in terms of Günter's tangential derivatives on $\mathcal{C}$ and represents a new form of the shell equation. It is shown that the Neumann boundary condition from the initial BVP on the upper and lower surfaces transforms into a right-hand-side term of the basic equation of the limit BVP.
Peer review status:ACCEPTED
25 Feb 2020Submitted to Mathematical Methods in the Applied Sciences
01 Mar 2020Submission Checks Completed
01 Mar 2020Assigned to Editor
01 Mar 2020Reviewer(s) Assigned
27 Nov 2020Review(s) Completed, Editorial Evaluation Pending
28 Nov 2020Editorial Decision: Revise Minor
http://archive.numdam.org/item/M2AN_1987__21_1_171_0/
|
How to avoid the use of Green's theorem in the Ciarlet-Raviart theory of variational crimes
ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Volume 21 (1987) no. 1, p. 171-191
@article{M2AN_1987__21_1_171_0,
author = {\v Zen\'\i \v sek, Alexander},
title = {How to avoid the use of Green's theorem in the Ciarlet-Raviart theory of variational crimes},
journal = {ESAIM: Mathematical Modelling and Numerical Analysis - Mod\'elisation Math\'ematique et Analyse Num\'erique},
publisher = {Dunod},
volume = {21},
number = {1},
year = {1987},
pages = {171-191},
zbl = {0623.65072},
mrnumber = {882690},
language = {en},
url = {http://www.numdam.org/item/M2AN_1987__21_1_171_0}
}
Ženíšek, Alexander. How to avoid the use of Green's theorem in the Ciarlet-Raviart theory of variational crimes. ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Volume 21 (1987) no. 1, pp. 171-191. http://www.numdam.org/item/M2AN_1987__21_1_171_0/
https://www.physicsforums.com/threads/reflectance-of-nanotubes.674324/
|
# Reflectance of nanotubes
1. Feb 25, 2013
### ralden
Hi guys, I'm working in nanotechnology, and my current research is on nanotubes. Based on my observations, as the nanotube length increases the reflectance also increases. One possible explanation: since the dielectric constant depends on the dimensions of the material, the refractive index is directly related to the dielectric constant, and reflectance may be expressed in terms of refractive indices, an increased dielectric constant means increased reflectance. But I must also consider the absorption of the material. What happens to the absorption when the nanotube length increases? Does it also affect the reflectance? Please share your knowledge and opinions. Thank you
2. Feb 25, 2013
### DrDu
Maybe you could be a little bit more specific: are you observing single nanotubes, solutions, or bulk material? What length scale are we talking about, specifically larger or smaller than the wavelength of light? What kind, and of which material, are the nanotubes?
3. Feb 25, 2013
### ralden
A bulk material, with lengths ranging from 300 nm to 1700 nm. I'm studying titanium dioxide nanotubes.
4. Feb 25, 2013
### DrDu
Ok.
Generally, absorption also affects reflectivity. I would not consider this important for TiO2 in the visible. What wavelengths are you interested in?
5. Feb 25, 2013
### ralden
Actually I'm interested in the reflectance of TiO2 nanotubes of increasing length. Since TiO2 has no reflection at 400 nm to 500 nm wavelengths, because its band gap absorbs those EM waves, I focused on the wavelengths around 600 nm to 1100 nm that do give reflectance. What do you think?
6. Feb 25, 2013
### DrDu
I would be astonished if TiO2 had strong absorptions in this range. After all, it is used as a white pigment, so it can't be absorptive in the visible (say 800 to 400 nm). I also see no mechanism for absorption in the near IR. Absorption would have to be rather strong to change reflectivity.
7. Feb 25, 2013
### ralden
OK. So what is the relationship between the reflectance and the dielectric constant of the sample? Does it vary with the length of the nanotubes, just as absorption varies with thickness?
8. Feb 26, 2013
### DrDu
Reflectance is given by the Fresnel equations http://en.wikipedia.org/wiki/Fresnel_equations
and $n=\sqrt{\epsilon}$, so the first part of your question should be clear.
The real part of the refractive index, n, and the absorption index κ are related by the Kramers-Kronig relations http://en.wikipedia.org/wiki/Kramers-Kronig_relation, which is rather involved.
My first guess why epsilon increases with increasing length would be an increase in sample density. After all, a single slab of TiO2 has a larger density than two close packed slabs of half the length.
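DrDu's two ingredients, the Fresnel equations and $n=\sqrt{\epsilon}$, combine into a one-line estimate at normal incidence; the TiO2 index below is an assumed round value for illustration:

```python
def normal_incidence_reflectance(n1, n2):
    """Fresnel reflectance at normal incidence between media of index n1, n2.

    Accepts complex indices n + 1j*kappa, so weak absorption can be included.
    """
    r = (n1 - n2) / (n1 + n2)
    return abs(r) ** 2

# Air -> TiO2, taking n ~ 2.5 in the visible (assumed value):
print(round(normal_incidence_reflectance(1.0, 2.5), 3))        # ~0.184

# A small extinction coefficient (kappa = 0.1) barely changes R,
# illustrating that weak absorption hardly affects reflectivity:
print(round(normal_incidence_reflectance(1.0, 2.5 + 0.1j), 3)) # ~0.184
```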
9. Mar 16, 2013
### Alkim
Ralden, this is a case of collective scattering, and its treatment is quite complex. I know it from experience, since I have worked with scattering by metallic spherical nanoparticles. Each fiber can be treated as a cylinder, for which solving Maxwell's equations is relatively easy, but the problem comes when you have to consider the interaction between the waves scattered by each cylinder. The problem can be treated analytically using so-called effective medium theories, but in my experience they are not very precise. You can think of an effective refractive index which increases with the volume fraction of the high-refractive-index medium, so reflectance increases too (you just substitute the effective index into the Fresnel equations already cited). The other possibility is EM numerical simulation, using e.g. FDTD, which is the best method since it solves Maxwell's equations "exactly". In fact, exactness depends only on the computing power available.
10. Mar 16, 2013
### ralden
Hi Alkim, could you give me links to articles and studies about the theoretical simulation of collective scattering, or about FDTD? Thanks :)
11. Mar 16, 2013
### Alkim
Hi Ralden, there is a lot of literature; you can start with Wikipedia and the articles, books and codes cited therein:
http://en.wikipedia.org/wiki/Scattering
http://en.wikipedia.org/wiki/Light_scattering_by_particles
http://en.wikipedia.org/wiki/Effective_medium_approximations
http://en.wikipedia.org/wiki/Finite-difference_time-domain_method
Also, a quick search on scattering by cylinders turned up some interesting links, see e.g.: http://inis.jinr.ru/sl/vol2/Physics...DBOOK_of_OPTICS/HANDBOOK_of_OPTICS/v1ch06.pdf This author is an authority in the field.
this may also give you some orientation:
http://eos.wdcb.ru/transl/izva/9404/pap06.htm
https://gibuu.hepforge.org/trac/wiki/TracIni
|
# The Trac Configuration File
Trac is configured through the trac.ini file, located in the <projectenv>/conf directory. The trac.ini configuration file and its parent directory should be writable by the web server.
Trac monitors the timestamp of the file to trigger an environment reload when the timestamp changes. Most changes to the configuration will be reflected immediately, though changes to the [components] or [logging] sections will require restarting the web server. You may also need to restart the web server after creating a global configuration file when none was previously present.
## Global Configuration
Configuration can be shared among environments using one or more global configuration files. Options in the global configuration will be merged with the environment-specific options, with local options overriding global options. The global configuration file is specified as follows:
[inherit]
file = /path/to/global/trac.ini
Multiple files can be specified using a comma-separated list.
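For instance, a chain of two shared configuration files might look like this (the paths are illustrative):

```ini
[inherit]
file = /etc/trac/global.ini, /etc/trac/site.ini
```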
Note that you can also specify a global option file when creating a new project, by adding the option --inherit=/path/to/global/trac.ini to trac-admin's initenv command. If you specify --inherit but nevertheless intend to use a global option file with your new environment, you will have to go through the newly generated conf/trac.ini file and delete the entries that will otherwise override those in the global file.
There are two more options in the [inherit] section, templates_dir for sharing global templates and plugins_dir, for sharing plugins. Those options can be specified in the shared configuration file, and in fact, configuration files can even be chained if you specify another [inherit] file there.
Note that the templates found in the templates/ directory of the TracEnvironment have precedence over those found in [inherit] templates_dir. In turn, the latter have precedence over the installed templates, so be careful about what you put there. Notably, if you override a default template, refresh your modifications when you upgrade to a new version of Trac. The preferred way to perform TracInterfaceCustomization is to write a custom plugin doing an appropriate ITemplateStreamFilter transformation.
## Reference for settings
This is a reference of available configuration options, and their default settings.
Documentation improvements should be discussed on the trac-dev mailing list or described in a ticket. Even better, submit a patch against the docstrings in the code.
### [attachment]
- max_size: Maximum allowed file size (in bytes) for attachments. Default: 262144
- max_zip_size: Maximum allowed total size (in bytes) for an attachment list to be downloadable as a .zip. Set this to -1 to disable download as .zip. (since 1.0) Default: 2097152
- render_unsafe_content: Whether attachments should be rendered in the browser, or only made downloadable. Pretty much any file may be interpreted as HTML by the browser, which allows a malicious user to attach a file containing cross-site scripting attacks. For public sites where anonymous users can create attachments it is recommended to leave this option disabled. Default: disabled
### [authz_policy]
- authz_file: Location of authz policy configuration file. Non-absolute paths are relative to the Environment conf directory. (no default)
- ignore_pattern: Resource names that match this pattern will not be added to the breadcrumbs trail. (no default)
- label: Text label to show before the breadcrumb list. If empty, 'Breadcrumbs:' is used as default. (no default)
- max_crumbs: Indicates maximum number of breadcrumbs to store per user. Default: 6
- paths: List of URL paths to allow breadcrumb tracking. Globs are supported. Default: /wiki/,/ticket/,/milestone/
### [browser]
- color_scale: Enable colorization of the age column. This uses the same color scale as the source code annotation: blue is older, red is newer. Default: enabled
- downloadable_paths: List of repository paths that can be downloaded. Leave this option empty if you want to disable all downloads, otherwise set it to a comma-separated list of authorized paths (those paths are glob patterns, i.e. "*" can be used as a wild card). In a multi-repository environment, the path must be qualified with the repository name if the path does not point to the default repository (e.g. /reponame/trunk). Note that a simple prefix matching is performed on the paths, so aliases won't get automatically resolved. Default: /trunk,/branches/*,/tags/*
- hide_properties: Comma-separated list of version control properties to hide from the repository browser. Default: svk:merge
- intermediate_color: (r,g,b) color triple to use for the intermediate color, if two linear interpolations are used for the color scale (see intermediate_point). If not set, the intermediate color between oldest_color and newest_color will be used. (no default)
- intermediate_point: If set to a value between 0 and 1 (exclusive), this will be the point chosen to set the intermediate_color for interpolating the color value. (no default)
- newest_color: (r,g,b) color triple to use for the color corresponding to the newest color, for the color scale used in blame or the browser age column if color_scale is enabled. Default: (255, 136, 136)
- oldest_color: (r,g,b) color triple to use for the color corresponding to the oldest color, for the color scale used in blame or the browser age column if color_scale is enabled. Default: (136, 136, 255)
- oneliner_properties: Comma-separated list of version control properties to render as oneliner wiki content in the repository browser. Default: trac:summary
- render_unsafe_content: Whether raw files should be rendered in the browser, or only made downloadable. Pretty much any file may be interpreted as HTML by the browser, which allows a malicious user to create a file containing cross-site scripting attacks. For open repositories where anyone can check-in a file, it is recommended to leave this option disabled. Default: disabled
- wiki_properties: Comma-separated list of version control properties to render as wiki content in the repository browser. Default: trac:description
### [changeset]
- max_diff_bytes: Maximum total size in bytes of the modified files (their old size plus their new size) for which the changeset view will attempt to show the diffs inlined. Default: 10000000
- max_diff_files: Maximum number of modified files for which the changeset view will attempt to show the diffs inlined. Default: 0
- wiki_format_messages: Whether wiki formatting should be applied to changeset messages. If this option is disabled, changeset messages will be rendered as pre-formatted text. Default: enabled
### [components]
This section is used to enable or disable components provided by plugins, as well as by Trac itself. The component to enable/disable is specified via the name of the option. Whether its enabled is determined by the option value; setting the value to enabled or on will enable the component, any other value (typically disabled or off) will disable the component.
The option name is either the fully qualified name of the components or the module/package prefix of the component. The former enables/disables a specific component, while the latter enables/disables any component in the specified package/module.
Consider the following configuration snippet:
[components]
trac.ticket.report.ReportModule = disabled
acct_mgr.* = enabled
The first option tells Trac to disable the report module. The second option instructs Trac to enable all components in the acct_mgr package. Note that the trailing wildcard is required for module/package matching.
To view the list of active components, go to the Plugins page on About Trac (requires CONFIG_VIEW permissions).
### [header_logo]
- alt: Alternative text for the header logo. Default: (please configure the [header_logo] section in trac.ini)
- height: Height of the header logo image in pixels. Default: -1
- link: URL to link to, from the header logo. (no default)
- src: URL of the image to use as header logo. It can be absolute, server relative or relative. If relative, it is relative to one of the /chrome locations: site/your-logo.png if your-logo.png is located in the htdocs folder within your TracEnvironment; common/your-logo.png if your-logo.png is located in the folder mapped to the htdocs_location URL. Only specifying your-logo.png is equivalent to the latter. Default: site/your_project_logo.png
- width: Width of the header logo image in pixels. Default: -1
### [http-headers]
Options in this section define custom HTTP headers to be sent with every Trac response. The header name must conform to RFC 7230, and the following reserved names are not allowed: content-type, content-length, location, etag, pragma, cache-control, expires.
### [inherit]
- htdocs_dir: Path to the shared htdocs directory. Static resources in that directory are mapped to /chrome/shared under the environment URL, in addition to common and site locations. This can be useful in site.html for common interface customization of multiple Trac environments. Non-absolute paths are relative to the Environment conf directory. (since 1.0) (no default)
- plugins_dir: Path to the shared plugins directory. Plugins in that directory are loaded in addition to those in the directory of the environment plugins, with this one taking precedence. Non-absolute paths are relative to the Environment conf directory. (no default)
- templates_dir: Path to the shared templates directory. Templates in that directory are loaded in addition to those in the environment's templates directory, but the latter take precedence. Non-absolute paths are relative to the Environment conf directory. (no default)
### [intertrac]
This section configures InterTrac prefixes. Options in this section whose names contain a . define aspects of the InterTrac prefix corresponding to the option name up to the .. Options whose names don't contain a . define an alias.
The .url is mandatory and is used for locating the other Trac. This can be a relative URL in case that Trac environment is located on the same server.
The .title information is used for providing a useful tooltip when moving the cursor over an InterTrac link.
Example configuration:
[intertrac]
# -- Example of setting up an alias:
t = trac
# -- Link to an external Trac:
trac.title = Edgewall's Trac for Trac
trac.url = http://trac.edgewall.org
### [interwiki]
Every option in the [interwiki] section defines one InterWiki prefix. The option name defines the prefix. The option value defines the URL, optionally followed by a description separated from the URL by whitespace. Parametric URLs are supported as well.
Example:
[interwiki]
MeatBall = http://www.usemod.com/cgi-bin/mb.pl?
PEP = http://www.python.org/peps/pep-$1.html Python Enhancement Proposal $1
tsvn = tsvn: Interact with TortoiseSvn
### [logging]
- log_file: If log_type is file, this should be a path to the log-file. Relative paths are resolved relative to the log directory of the environment. Default: trac.log
- log_format: Custom logging format. If nothing is set, the following will be used: Trac[$(module)s] $(levelname)s: $(message)s. In addition to the regular key names supported by the Python logger library, one can use: $(path)s (the path of the current environment), $(basename)s (the last path component of the current environment), $(project)s (the project name). Note the usage of $(...)s instead of %(...)s, as the latter form would be interpreted by the ConfigParser itself. Example: ($(thread)d) Trac[$(basename)s:$(module)s] $(levelname)s: $(message)s (no default)
- log_level: Level of verbosity in log. Should be one of (CRITICAL, ERROR, WARNING, INFO, DEBUG). Default: DEBUG
- log_type: Logging facility to use. Should be one of (none, file, stderr, syslog, winlog). Default: none
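Putting the [logging] options together, a typical file-based setup might look like this (values illustrative; the option names are those documented above):

```ini
[logging]
log_type = file
log_file = trac.log
log_level = INFO
log_format = Trac[$(basename)s:$(module)s] $(levelname)s: $(message)s
```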
### [mainnav]
Configures the main navigation bar, which by default contains Wiki, Timeline, Roadmap, Browse Source, View Tickets, New Ticket, Search and Admin.
The label, href, and order attributes can be specified. Entries can be disabled by setting the value of the navigation item to disabled.
The following example renames the link to WikiStart to Home, links the View Tickets entry to a specific report and disables the Search entry.
[mainnav]
wiki.label = Home
tickets.href = /report/24
search = disabled
- managed_menus: List of menus to be controlled by the Menu Manager. Default: mainnav,metanav
- serve_ui_files: Default: enabled
### [metanav]
Configures the meta navigation entries, which by default are Login, Logout, Preferences, Help/Guide and About Trac. The allowed attributes are the same as for [mainnav]. Additionally, a special entry is supported - logout.redirect is the page the user sees after hitting the logout button. For example:
[metanav]
logout.redirect = wiki/Logout
### [milestone]
- default_group_by: Default field to use for grouping tickets in the grouped progress bar. (since 1.2) Default: component
- default_retarget_to: Default milestone to which tickets are retargeted when closing or deleting a milestone. (since 1.1.2) (no default)
- stats_provider: Name of the component implementing ITicketGroupStatsProvider, which is used to collect statistics on groups of tickets for display in the milestone views. Default: DefaultTicketGroupStatsProvider
### [milestone-groups]
As the workflow for tickets is now configurable, there can be many ticket states, and simply displaying closed tickets vs. all the others is maybe not appropriate in all cases. This section enables one to easily create groups of states that will be shown in different colors in the milestone progress bar.
Note that the groups can only be based on the ticket status, nothing else. In particular, it's not possible to distinguish between different closed tickets based on the resolution.
Example configuration with three groups, closed, new and active (the default only has closed and active):
[milestone-groups]
# the 'closed' group corresponds to the 'closed' tickets
closed = closed
# .order: sequence number in the progress bar
closed.order = 0
# .query_args: optional parameters for the corresponding
# query. In this example, the changes from the
# default are two additional columns ('created' and
# 'modified'), and sorting is done on 'created'.
closed.query_args = group=resolution,order=time,col=id,col=summary,col=owner,col=type,col=priority,col=component,col=severity,col=time,col=changetime
# .overall_completion: indicates groups that count for overall
# completion percentage
closed.overall_completion = true
new = new
new.order = 1
new.css_class = new
new.label = new
# Note: one catch-all group for other statuses is allowed
active = *
active.order = 2
# .css_class: CSS class for this interval
active.css_class = open
# .label: displayed label for this group
active.label = in progress
The definition consists of a comma-separated list of accepted statuses. Also, '*' means any status and can be used to associate all remaining states with one catch-all group.
The CSS class can be one of: new (yellow), open (no color) or closed (green). Other styles can easily be added with a custom CSS rule, such as table.progress td.<class> { background: <color> }, in a site/style.css file for example.
### [mimeviewer]
- `max_preview_size`: Maximum file size for HTML preview. Default: `262144`
- `mime_map`: List of additional MIME types and keyword mappings. Mappings are comma-separated, and for each MIME type, there's a colon (":") separated list of associated keywords or file extensions. Default: `text/x-dylan:dylan,text/x-idl:ice,text/x-ada:ads:adb`
- `mime_map_patterns`: List of additional MIME types associated to filename patterns. Mappings are comma-separated, and each mapping consists of a MIME type and a Python regexp used for matching filenames, separated by a colon (":"). (since 1.0) Default: `text/plain:README(?!\.rst)|INSTALL(?!\.rst)|COPYING.*`
- `pygments_default_style`: The default style to use for Pygments syntax highlighting. Default: `trac`
- `pygments_modes`: List of additional MIME types known by Pygments. For each, a tuple mimetype:mode:quality has to be specified, where mimetype is the MIME type, mode is the corresponding Pygments mode to be used for the conversion and quality is the quality ratio associated to this conversion. That can also be used to override the default quality ratio used by the Pygments renderer. (no default)
- `tab_width`: Displayed tab width in file preview. Default: `8`
- `treat_as_binary`: Comma-separated list of MIME types that should be treated as binary data. Default: `application/octet-stream,application/pdf,application/postscript,application/msword,application/rtf`
### [notification]
- `admit_domains`: Comma-separated list of domains that should be considered as valid for email addresses (such as localdomain). (no default)
- `ambiguous_char_width`: Width of ambiguous characters that should be used in the table of the notification mail. If single, the same width as characters in US-ASCII. This is expected by most users. If double, twice the width of US-ASCII characters. This is expected by CJK users. (since 0.12.2) Default: `single`
- `batch_subject_template`: Like ticket_subject_template but for batch modifications. (since 1.0) Default: `${prefix} Batch modify:${tickets_descr}`
- `default_format.email`: Default format to distribute email notifications. Default: `text/plain`
- `email_address_resolvers`: Comma-separated list of email resolver components in the order they will be called. If an email address is resolved, the remaining resolvers will not be called. Default: `SessionEmailResolver`
- `email_sender`: Name of the component implementing IEmailSender. This component is used by the notification system to send emails. Trac currently provides SmtpEmailSender for connecting to an SMTP server, and SendmailEmailSender for running a sendmail-compatible executable. (since 0.12) Default: `SmtpEmailSender`
- `ignore_domains`: Comma-separated list of domains that should not be considered part of email addresses (for usernames with Kerberos domains). (no default)
- `message_id_hash`: Hash algorithm to create unique Message-ID header. (since 1.0.13) Default: `md5`
- `mime_encoding`: Specifies the MIME encoding scheme for emails. Supported values are: none, the default value, which uses 7-bit encoding if the text is plain ASCII or 8-bit otherwise; base64, which works with any kind of content but may cause some issues with touchy anti-spam/anti-virus engines; qp or quoted-printable, which works best for European languages (more compact than base64) if 8-bit encoding cannot be used. Default: `none`
- `sendmail_path`: Path to the sendmail executable. The sendmail program must accept the -i and -f options. (since 0.12) Default: `sendmail`
- `smtp_always_bcc`: Comma-separated list of email addresses to always send notifications to. Addresses are not public (Bcc:). (no default)
- `smtp_always_cc`: Comma-separated list of email addresses to always send notifications to. Addresses can be seen by all recipients (Cc:). (no default)
- `smtp_default_domain`: Default host/domain to append to addresses that do not specify one. Fully qualified addresses are not modified. The default domain is appended to all username/login for which an email address cannot be found in the user settings. (no default)
- `smtp_enabled`: Enable email notification. Default: `disabled`
- `smtp_from`: Sender address to use in notification emails. At least one of smtp_from and smtp_replyto must be set, otherwise Trac refuses to send notification mails. Default: `trac@localhost`
- `smtp_from_author`: Use the author of the change as the sender in notification emails (e.g. reporter of a new ticket, author of a comment). If the author hasn't set an email address, smtp_from and smtp_from_name are used instead. (since 1.0) Default: `disabled`
- `smtp_from_name`: Sender name to use in notification emails. (no default)
- `smtp_password`: Password for authenticating with the SMTP server. (no default)
- `smtp_port`: SMTP server port to use for email notification. Default: `25`
- `smtp_replyto`: Reply-To address to use in notification emails. At least one of smtp_from and smtp_replyto must be set, otherwise Trac refuses to send notification mails. Default: `trac@localhost`
- `smtp_server`: SMTP server hostname to use for email notifications. Default: `localhost`
- `smtp_subject_prefix`: Text to prepend to the subject line of notification emails. If the setting is not defined, then [$project_name] is used as the prefix. If no prefix is desired, then specifying an empty option will disable it. Default: `__default__`
- `smtp_user`: Username for authenticating with the SMTP server. (no default)
- `ticket_subject_template`: A Genshi text template snippet used to get the notification subject. The template variables are documented on the TracNotification page. Default: `${prefix} #${ticket.id}:${summary}`
- `use_public_cc`: Addresses in the To and Cc fields are visible to all recipients. If this option is disabled, recipients are put in the Bcc list. Default: `disabled`
- `use_short_addr`: Permit email addresses without a host/domain (i.e. username only). The SMTP server should accept those addresses, and either append an FQDN or use local delivery. See also smtp_default_domain. Do not use this option with a public SMTP server. Default: `disabled`
- `use_tls`: Use SSL/TLS to send notifications over SMTP. Default: `disabled`
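For orientation, a minimal working setup for a local SMTP relay might look like the following sketch (the addresses and hostname are placeholder assumptions, not defaults from this reference):

```ini
[notification]
smtp_enabled = enabled
smtp_server = localhost
smtp_port = 25
smtp_from = trac@example.com
smtp_replyto = trac@example.com
```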
### [notification-subscriber]
The notification subscriptions are controlled by plugins. All INotificationSubscriber components are in charge. These components can be configured via this section in the trac.ini file.
Available subscribers:
- `AlwaysEmailSubscriber`
- `CarbonCopySubscriber`: Ticket that I'm listed in the CC field is modified
- `NewTicketSubscriber`: Any ticket is created
- `TicketOwnerSubscriber`: Ticket that I own is created or modified
- `TicketPreviousUpdatersSubscriber`: Ticket that I previously updated is modified
- `TicketReporterSubscriber`: Ticket that I reported is modified
- `TicketUpdaterSubscriber`: I update a ticket
### [project]
- `admin`: E-Mail address of the project's administrator. (no default)
- `admin_trac_url`: Base URL of a Trac instance where errors in this Trac should be reported. This can be an absolute or relative URL, or '.' to reference this Trac instance. An empty value will disable the reporting buttons. Default: `.`
- `descr`: Short description of the project. Default: `My example project`
- `footer`: Page footer text (right-aligned). Default: `Visit the Trac open source project at http://trac.edgewall.org/`
- `icon`: URL of the icon of the project. Default: `common/trac.ico`
- `name`: Name of the project. Default: `My Project`
- `url`: URL of the main project web site, usually the website in which the base_url resides. This is used in notification e-mails. (no default)
### [pygments-lexer]
Configure Pygments lexer options.
For example, to set the PhpLexer options startinline and funcnamehighlighting:
[pygments-lexer]
php.startinline = True
php.funcnamehighlighting = True
The lexer name is derived from the class name, with Lexer stripped from the end. The lexer short names can also be used in place of the lexer name.
### [query]
- `default_anonymous_query`: The default query for anonymous users. The query is either in query language syntax, or a URL query string starting with ? as used in query: Trac links. Default: `status!=closed&cc~=$USER`
- `default_query`: The default query for authenticated users. The query is either in query language syntax, or a URL query string starting with ? as used in query: Trac links. Default: `status!=closed&owner=$USER`
- `items_per_page`: Number of tickets displayed per page in ticket queries, by default. Default: `100`
- `ticketlink_query`: The base query to be used when linkifying values of ticket fields. The query is a URL query string starting with ? as used in query: Trac links. (since 0.12) Default: `?status=!closed`
### [report]
- `items_per_page`: Number of tickets displayed per page in ticket reports, by default. Default: `100`
- `items_per_page_rss`: Number of tickets displayed in the RSS feeds for reports. Default: `0`
### [repositories]
One of the alternatives for registering new repositories is to populate the [repositories] section of the trac.ini.
This is especially suited for setting up convenience aliases, short-lived repositories, or during the initial phases of an installation.
See TracRepositoryAdmin for details about the format adopted for this section and the rest of that page for the other alternatives.
(since 0.12)
### [revisionlog]
- `default_log_limit`: Default value for the limit argument in the TracRevisionLog. Default: `100`
- `graph_colors`: Comma-separated list of colors to use for the TracRevisionLog graph display. (since 1.0) Default: `#cc0,#0c0,#0cc,#00c,#c0c,#c00`
### [roadmap]
- `stats_provider`: Name of the component implementing ITicketGroupStatsProvider, which is used to collect statistics on groups of tickets for display in the roadmap views. Default: `DefaultTicketGroupStatsProvider`
### [search]
- `default_disabled_filters`: Specifies which search filters should be disabled by default on the search page. This will also restrict the filters for the quick search function. The filter names defined by default components are: wiki, ticket, milestone and changeset. For plugins, look for their implementation of the ISearchSource interface, in the get_search_filters() method, the first member of the returned tuple. Once disabled, search filters can still be manually enabled by the user on the search page. (since 0.12) (no default)
- `min_query_length`: Minimum length of query string allowed when performing a search. Default: `3`
### [sqlite]
- `extensions`: Paths to sqlite extensions. The paths may be absolute or relative to the Trac environment. (since 0.12) (no default)
### [svn]
- `authz_file`: The path to the Subversion authorization (authz) file. To enable authz permission checking, the AuthzSourcePolicy permission policy must be added to [trac] permission_policies. Non-absolute paths are relative to the Environment conf directory. (no default)
- `authz_module_name`: The module prefix used in the authz_file for the default repository. If left empty, the global section is used. (no default)
- `branches`: Comma-separated list of paths categorized as branches. If a path ends with '*', then all the directory entries found below that path will be included. Example: /trunk, /branches/*, /projectAlpha/trunk, /sandbox/*. Default: `trunk,branches/*`
- `eol_style`: End-of-line character sequences when the svn:eol-style property is native. If native, substitute with the native EOL marker on the server. Otherwise, if LF, CRLF or CR, substitute with the specified EOL marker. (since 1.0.2) Default: `native`
- `tags`: Comma-separated list of paths categorized as tags. If a path ends with '*', then all the directory entries found below that path will be included. Example: /tags/*, /projectAlpha/tags/A-1.0, /projectAlpha/tags/A-v1.1. Default: `tags/*`
### [svn:externals]
The TracBrowser for Subversion can interpret the svn:externals property of folders. By default, it only turns the URLs into links as Trac can't browse remote repositories.
However, if you have another Trac instance (or another repository browser like ViewVC) configured to browse the target repository, then you can instruct Trac which other repository browser to use for which external URL. This mapping is done in the [svn:externals] section of the TracIni.
Example:
[svn:externals]
1 = svn://server/repos1 http://trac/proj1/browser/$path?rev=$rev
2 = svn://server/repos2 http://trac/proj2/browser/$path?rev=$rev
3 = http://theirserver.org/svn/eng-soft http://ourserver/viewvc/svn/$path/?pathrev=25914
4 = svn://anotherserver.com/tools_repository http://ourserver/tracs/tools/browser/$path?rev=$rev
With the above, the svn://anotherserver.com/tools_repository/tags/1.1/tools external will be mapped to http://ourserver/tracs/tools/browser/tags/1.1/tools?rev= (and rev will be set to the appropriate revision number if the external additionally specifies a revision, see the SVN Book on externals for more details).
Note that the number used as a key in the above section is purely a placeholder, as the URLs themselves can't be used as keys due to various limitations in the configuration file parser.
Finally, the relative URLs introduced in Subversion 1.5 are not yet supported.
### [ticket]
- `allowed_empty_fields`: Comma-separated list of select fields that can have an empty value. (since 1.1.2) Default: `milestone,version`
- `default_cc`: Default cc: list for newly created tickets. (no default)
- `default_component`: Default component for newly created tickets. (no default)
- `default_description`: Default description for newly created tickets. (no default)
- `default_keywords`: Default keywords for newly created tickets. (no default)
- `default_milestone`: Default milestone for newly created tickets. (no default)
- `default_owner`: Default owner for newly created tickets. Default: `< default >`
- `default_priority`: Default priority for newly created tickets. Default: `major`
- `default_resolution`: Default resolution for resolving (closing) tickets. Default: `fixed`
- `default_severity`: Default severity for newly created tickets. (no default)
- `default_summary`: Default summary (title) for newly created tickets. (no default)
- `default_type`: Default type for newly created tickets. Default: `defect`
- `default_version`: Default version for newly created tickets. (no default)
- `max_comment_size`: Maximum allowed comment size in characters. Default: `262144`
- `max_description_size`: Maximum allowed description size in characters. Default: `262144`
- `max_summary_size`: Maximum allowed summary size in characters. (since 1.0.2) Default: `262144`
- `preserve_newlines`: Whether the Wiki formatter should respect the new lines present in the Wiki text. If set to 'default', this is equivalent to 'yes' for new environments but keeps the old behavior for upgraded environments (i.e. 'no'). Default: `default`
- `restrict_owner`: Make the owner field of tickets use a drop-down menu. Be sure to understand the performance implications before activating this option. See Assign-to as Drop-Down List. Please note that e-mail addresses are not obfuscated in the resulting drop-down menu, so this option should not be used if e-mail addresses must remain protected. Default: `disabled`
- `workflow`: Ordered list of workflow controllers to use for ticket actions. Default: `ConfigurableTicketWorkflow`
### [ticket-custom]
In this section, you can define additional fields for tickets. See TracTicketsCustomFields for more details.
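As an illustrative sketch (the field names test_one and test_two are hypothetical, not part of this reference), a custom field declaration follows the same key/value conventions as the other examples on this page:

```ini
[ticket-custom]
test_one = text
test_one.label = Test one
test_two = select
test_two.label = Test two
test_two.options = uno|dos|tres
test_two.value = uno
```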
### [ticket-workflow]
The workflow for tickets is controlled by plugins. By default, there's only a ConfigurableTicketWorkflow component in charge. That component allows the workflow to be configured via this section in the trac.ini file. See TracWorkflow for more details.
### [timeline]
- `abbreviated_messages`: Whether wiki-formatted event messages should be truncated or not. This only affects the default rendering, and can be overridden by specific event providers; see their own documentation. Default: `enabled`
- `changeset_collapse_events`: Whether consecutive changesets from the same author having exactly the same message should be presented as one event. That event will link to the range of changesets in the log view. Default: `disabled`
- `changeset_long_messages`: Whether wiki-formatted changeset messages should be multiline or not. If this option is not specified or is false and wiki_format_messages is set to true, changeset messages will be single line only, losing some formatting (bullet points, etc). Default: `disabled`
- `changeset_show_files`: Number of files to show (-1 for unlimited, 0 to disable). This can also be location, for showing the common prefix for the changed files. Default: `0`
- `default_daysback`: Default number of days displayed in the Timeline. Default: `30`
- `max_daysback`: Maximum number of days (-1 for unlimited) displayable in the Timeline. Default: `90`
- `newticket_formatter`: Which formatter flavor (e.g. 'html' or 'oneliner') should be used when presenting the description for new tickets. If 'oneliner', the [timeline] abbreviated_messages option applies. Default: `oneliner`
- `ticket_show_component`: Enable the display of the component of tickets in the timeline. (since 1.1.1) Default: `disabled`
- `ticket_show_details`: Enable the display of all ticket changes in the timeline, not only open / close operations. Default: `disabled`
### [trac]
- `anonymous_session_lifetime`: Lifetime of the anonymous session, in days. Set the option to 0 to disable purging old anonymous sessions. (since 1.0.17) Default: `90`
- `auth_cookie_domain`: Auth cookie domain attribute. The auth cookie can be shared among multiple subdomains by setting the value to the domain. (since 1.2) (no default)
- `auth_cookie_lifetime`: Lifetime of the authentication cookie, in seconds. This value determines how long the browser will cache authentication information, and therefore, after how much inactivity a user will have to log in again. The value of 0 makes the cookie expire at the end of the browsing session. (since 0.12) Default: `0`
- `auth_cookie_path`: Path for the authentication cookie. Set this to the common base path of several Trac instances if you want them to share the cookie. (since 0.12) (no default)
- `auto_preview_timeout`: Inactivity timeout in seconds after which the automatic wiki preview triggers an update. This option can contain floating-point values. The lower the setting, the more requests will be made to the server. Set this to 0 to disable automatic preview. (since 0.12) Default: `2.0`
- `auto_reload`: Automatically reload template files after modification. Default: `disabled`
- `backup_dir`: Database backup location. Default: `db`
- `base_url`: Reference URL for the Trac deployment. This is the base URL that will be used when producing documents that will be used outside of the web browsing context, for example when inserting URLs pointing to Trac resources in notification e-mails. (no default)
- `check_auth_ip`: Whether the IP address of the user should be checked for authentication. Default: `disabled`
- `database`: Database connection string for this project. Default: `sqlite:db/trac.db`
- `debug_sql`: Show the SQL queries in the Trac log, at DEBUG level. Default: `disabled`
- `default_charset`: Charset to be used when in doubt. Default: `utf-8`
- `default_date_format`: The date format. Valid options are 'iso8601' for selecting ISO 8601 format, or leave it empty, which means the default date format will be inferred from the browser's default language. (since 1.0) (no default)
- `default_dateinfo_format`: The date information format. Valid options are 'relative' for displaying relative format and 'absolute' for displaying absolute format. (since 1.0) Default: `relative`
- `default_handler`: Name of the component that handles requests to the base URL. Options include TimelineModule, RoadmapModule, BrowserModule, QueryModule, ReportModule, TicketModule and WikiModule. Default: `WikiModule`
- `default_language`: The preferred language to use if no user preference has been set. (since 0.12.1) (no default)
- `default_timezone`: The default timezone to use. (no default)
- `genshi_cache_size`: The maximum number of templates that the template loader will cache in memory. You may want to choose a higher value if your site uses a larger number of templates, and you have enough memory to spare, or you can reduce it if you are short on memory. Default: `128`
- `htdocs_location`: Base URL for serving the core static resources below /chrome/common/. It can be left empty, and Trac will simply serve those resources itself. Advanced users can use this together with trac-admin ... deploy to allow serving the static resources for Trac directly from the web server. Note however that this only applies to the /htdocs/common directory; the other deployed resources (i.e. those from plugins) will not be made available this way and additional rewrite rules will be needed in the web server. (no default)
- `ignore_auth_case`: Whether login names should be converted to lower case. Default: `disabled`
- `jquery_location`: Location of the jQuery JavaScript library (version 1.11.3). An empty value loads jQuery from the copy bundled with Trac. Alternatively, jQuery could be loaded from a CDN, for example: http://code.jquery.com/jquery-1.11.3.min.js, http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.11.3.min.js or https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js. (since 1.0) (no default)
- `jquery_ui_location`: Location of the jQuery UI JavaScript library (version 1.11.4). An empty value loads jQuery UI from the copy bundled with Trac. Alternatively, jQuery UI could be loaded from a CDN, for example: https://ajax.googleapis.com/ajax/libs/jqueryui/1.11.4/jquery-ui.min.js or http://ajax.aspnetcdn.com/ajax/jquery.ui/1.11.4/jquery-ui.min.js. (since 1.0) (no default)
- `jquery_ui_theme_location`: Location of the theme to be used with the jQuery UI JavaScript library (version 1.11.4). An empty value loads the custom Trac jQuery UI theme from the copy bundled with Trac. Alternatively, a jQuery UI theme could be loaded from a CDN, for example: https://ajax.googleapis.com/ajax/libs/jqueryui/1.11.4/themes/start/jquery-ui.css or http://ajax.aspnetcdn.com/ajax/jquery.ui/1.11.4/themes/start/jquery-ui.css. (since 1.0) (no default)
- `never_obfuscate_mailto`: Never obfuscate mailto: links explicitly written in the wiki, even if show_email_addresses is false or the user doesn't have EMAIL_VIEW permission. Default: `disabled`
- `permission_policies`: List of components implementing IPermissionPolicy, in the order in which they will be applied. These components manage fine-grained access control to Trac resources. Default: `ReadonlyWikiPolicy,DefaultPermissionPolicy,LegacyAttachmentPolicy`
- `permission_store`: Name of the component implementing IPermissionStore, which is used for managing user and group permissions. Default: `DefaultPermissionStore`
- `request_filters`: Ordered list of filters to apply to all requests. (no default)
- `resizable_textareas`: Make
### [versioncontrol]
- `allowed_repository_dir_prefixes`: Comma-separated list of allowed prefixes for repository directories when adding and editing repositories in the repository admin panel. If the list is empty, all repository directories are allowed. (since 0.12.1) (no default)
- `default_repository_type`: Default repository connector type. This is used as the default repository type for repositories defined in the repositories section or using the "Repositories" admin panel. (since 0.12) Default: `svn`
### [wiki]
- `default_edit_area_height`: Default height of the textarea on the wiki edit page. (since 1.1.5) Default: `20`
- `ignore_missing_pages`: Enable/disable highlighting CamelCase links to missing pages. Default: `disabled`
- `max_size`: Maximum allowed wiki page size in characters. Default: `262144`
- `render_unsafe_content`: Enable/disable the use of unsafe HTML tags such as
http://cms.math.ca/cmb/msc/22D10?fromjnl=cmb&jnl=CMB
Search results
Search: MSC category 22D10 ( Unitary representations of locally compact groups )
Results 1 - 3 of 3
1. CMB 2013 (vol 57 pp. 357)
Lauret, Emilio A.
Representation Equivalent Bieberbach Groups and Strongly Isospectral Flat Manifolds
Let $\Gamma_1$ and $\Gamma_2$ be Bieberbach groups contained in the full isometry group $G$ of $\mathbb{R}^n$. We prove that if the compact flat manifolds $\Gamma_1\backslash\mathbb{R}^n$ and $\Gamma_2\backslash\mathbb{R}^n$ are strongly isospectral then the Bieberbach groups $\Gamma_1$ and $\Gamma_2$ are representation equivalent, that is, the right regular representations $L^2(\Gamma_1\backslash G)$ and $L^2(\Gamma_2\backslash G)$ are unitarily equivalent.
Keywords: representation equivalent, strong isospectrality, compact flat manifolds
Categories: 58J53, 22D10
2. CMB 2005 (vol 48 pp. 505)
Bouikhalene, Belaid
On the Generalized d'Alembert's and Wilson's Functional Equations on a Compact Group
Let $G$ be a compact group. Let $\sigma$ be a continuous involution of $G$. In this paper, we are concerned with the following functional equation $$\int_{G}f(xtyt^{-1})\,dt+\int_{G}f(xt\sigma(y)t^{-1})\,dt=2g(x)h(y), \quad x, y \in G,$$ where $f, g, h \colon G \to \mathbb{C}$, to be determined, are complex continuous functions on $G$ such that $f$ is central. This equation generalizes d'Alembert's and Wilson's functional equations. We show that the solutions are expressed by means of characters of irreducible, continuous and unitary representations of the group $G$.
Keywords: compact groups, functional equations, central functions, Lie groups, invariant differential operators
Categories: 39B32, 39B42, 22D10, 22D12, 22D15
3. CMB 2004 (vol 47 pp. 215)
Jaworski, Wojciech
Countable Amenable Identity Excluding Groups
A discrete group $G$ is called \emph{identity excluding\/} if the only irreducible unitary representation of $G$ which weakly contains the $1$-dimensional identity representation is the $1$-dimensional identity representation itself. Given a unitary representation $\pi$ of $G$ and a probability measure $\mu$ on $G$, let $P_\mu$ denote the $\mu$-average $\int\pi(g) \mu(dg)$. The goal of this article is twofold: (1)~to study the asymptotic behaviour of the powers $P_\mu^n$, and (2)~to provide a characterization of countable amenable identity excluding groups. We prove that for every adapted probability measure $\mu$ on an identity excluding group and every unitary representation $\pi$ there exists an orthogonal projection $E_\mu$ onto a $\pi$-invariant subspace such that $s$-$\lim_{n\to\infty}\bigl(P_\mu^n- \pi(a)^nE_\mu\bigr)=0$ for every $a\in\operatorname{supp}\mu$. This also remains true for suitably defined identity excluding locally compact groups. We show that the class of countable amenable identity excluding groups coincides with the class of $\FC$-hypercentral groups; in the finitely generated case this is precisely the class of groups of polynomial growth. We also establish that every adapted random walk on a countable amenable identity excluding group is ergodic.
Categories: 22D10, 22D40, 43A05, 47A35, 60B15, 60J50
https://physics.stackexchange.com/questions/217690/neutrino-oscillation-and-mass?noredirect=1
# Neutrino oscillation and mass [closed]
Neutrino oscillations indicate that neutrinos have a small but nonzero mass. Among the three neutrinos - electron, muon and tau neutrino - which is the heaviest? What is the mass range of these neutrinos?
https://socratic.org/questions/if-a-4-b-5-and-c-6-how-do-you-solve-the-triangle
# If a = 4, b = 5, and c = 6, how do you solve the triangle?
May 18, 2015
Use the law of cosines:
${c}^{2} = {a}^{2} + {b}^{2} - 2 a b \cos C$
$36 = 25 + 16 - 40 \cos C \Rightarrow \cos C = 5/40 = 0.125 \Rightarrow C = 82.82^\circ$
${\sin}^{2} C = 1 - {\cos}^{2} C = 0.98 \Rightarrow \sin C = 0.99$
Next, use the law of sines: $\sin \frac{A}{a} = \sin \frac{C}{c}$
$\sin A = 4 \left(\frac{0.99}{6}\right) = 0.66 \Rightarrow A = 41.30^\circ$
$B = 180 - 41.30 - 82.82 = 55.88^\circ$
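The same computation can be checked numerically. Here is a minimal sketch in Python (note that carrying full precision gives A closer to 41.41°, slightly different from the rounded hand value of 41.30° above):

```python
import math

# Solve a triangle with sides a=4, b=5, c=6 (SSS case).
a, b, c = 4.0, 5.0, 6.0

# Law of cosines: c^2 = a^2 + b^2 - 2ab*cos(C)
cos_C = (a**2 + b**2 - c**2) / (2 * a * b)   # = 0.125
C = math.degrees(math.acos(cos_C))           # ≈ 82.82°

# Law of sines: sin(A)/a = sin(C)/c
sin_A = a * math.sin(math.radians(C)) / c
A = math.degrees(math.asin(sin_A))           # ≈ 41.41°

# The angles of a triangle sum to 180°
B = 180.0 - A - C                            # ≈ 55.77°
```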
https://www.physicsforums.com/threads/gravitational-acceleration-in-gr.310397/
# Gravitational acceleration in GR
1. ### kvantti
I tried to search everywhere but couldn't find an answer, so here it goes.
In Newtonian mechanics, the gravitational acceleration g at a distance r from the gravitating object is given by $g = Gm/r^2$.
Does this equation apply in general relativity as well? If not, what is the equivalent in GR, i.e. how do you calculate the acceleration due to gravity in GR?
2. ### HallsofIvy
40,542
Staff Emeritus
No, not precisely. There is no simple version of that equation in General Relativity. In relativity, there is an equation involving the "Gravitation tensor" which depends upon the metric tensor of the space. From that, you can (theoretically!) find the metric tensor and use that to find the Geodesics in 4-space. All free falling objects move along geodesics with constant "4- speed" which, if you try to force it to a flat 3-space, appears as an acceleration.
3. ### DrGreg
1,941
There is an equation in GR
$$g = - \frac{Gm}{r^2 \sqrt{1 - 2Gm/rc^2}}$$
However, the r in that equation is not "radius" in the sense of something you could measure with a stationary ruler next to a black hole. In fact you can't measure such a radius, because any ruler that approached the hole would fall to pieces. r is the circumference of an orbiting circle divided by $2 \pi$, which in GR is not the same thing as a ruler-measured radius.
Actually proving the formula is no easy thing.
Reference: Woodhouse, N M J (2007), General Relativity, Springer, London, ISBN 978-1-84628-486-1, page 99
4. ### A.T.
5,778
And would this formula give the acceleration as measured by a local clock, or a distant clock in flat space-time?
5. ### George Jones
6,396
Staff Emeritus
This is the acceleration as measured by the local clocks and rulers of an observer hovering at position r. If such an observer dropped a stone, the observer would measure this (initial) acceleration for the stone.
Note that in the limit Gm/rc^2 is small compared to 1, this acceleration is the same as the Newtonian value.
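As a numerical sanity check on DrGreg's formula, here is a hedged sketch (Python; the constants and the solar-mass test case are my own choices, not from the thread):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

def g_newton(m, r):
    """Newtonian gravitational acceleration."""
    return G * m / r**2

def g_gr(m, r):
    """DrGreg's formula; r is the circumferential (Schwarzschild) coordinate."""
    return G * m / (r**2 * (1 - 2 * G * m / (r * c**2))**0.5)

m_sun = 1.989e30   # kg
r = 1.0e9          # m, far outside the Sun's Schwarzschild radius (~3 km)
print(g_newton(m_sun, r))                    # ~132.7 m/s^2
print(g_gr(m_sun, r) / g_newton(m_sun, r))   # ~1.0000015: GR correction is tiny here
```

For weak fields the ratio is very close to 1, matching George Jones's remark about the Newtonian limit; near the horizon the square root drives the GR value to infinity.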
6. ### A.T.
5,778
Thanks!
7. ### Passionflower
However the proper distance can be calculated in the Schwarzschild solution for any observer.
The graph below shows the proper distance to the singularity for both a free falling (from infinity) observer and a stationary observer (note that past the event horizon there cannot be any stationary observers, but we can still calculate the proper distance past the event horizon):
Last edited: Sep 27, 2011
8. ### pervect
7,951
Staff Emeritus
Note that Passionflower's notion of "the proper distance" appears to be something that he defined and calculated himself (at least I've never seen him quote a reference deriving or defining it), and it doesn't appear to agree with, for instance, the Fermi normal distance.
The apparent reason for the disagreement is that the notion of simultaneity for the Fermi normal distance is different from Passionflower's.
On the plus side, the numbers are a lot easier to crunch using Passionflower's definition. It's not a particularly bad approach to measuring distance, it's just not unique. The other big drawback is that it's not computed via a Born-rigid set of observers.
9. ### Passionflower
What integral do you use for Fermi normal distance?
My integrals are for 'stationary':
$$\int _{{\it ri}}^{{\it ro}}\!{\frac {1}{\sqrt { \left| 1-{\frac {{\it rs}}{r}} \right| }}}{dr}$$
And free falling (from infinity)
$$\int _{{\it ri}}^{{\it ro}}\!\sqrt { \left| 1-{\frac {{\it rs}}{r}} \right| }{\frac {1}{\sqrt { \left| 1-{\frac {{\it rs}}{{\it r}}} \right| }}}{dr}$$
which obviously becomes: ro-ri
Last edited: Sep 27, 2011
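Passionflower's two integrals are straightforward to evaluate numerically. A sketch (Python, midpoint rule; rs = 1 and the integration limits are arbitrary choices of mine) confirming that the stationary ruler distance exceeds the coordinate difference ro − ri, while the free-fall integrand collapses to 1:

```python
def midpoint_integral(f, lo, hi, n=100000):
    """Simple midpoint-rule numerical integration."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

rs = 1.0            # Schwarzschild radius (geometric units)
ri, ro = 2.0, 10.0  # both endpoints outside the horizon

# Stationary observer: integrand 1/sqrt(1 - rs/r)
d_stationary = midpoint_integral(lambda r: 1.0 / (1.0 - rs / r)**0.5, ri, ro)

# Free-falling (from infinity): the two square roots cancel, leaving just dr
d_freefall = midpoint_integral(lambda r: 1.0, ri, ro)

print(d_stationary > ro - ri)              # True: ruler distance exceeds ro - ri
print(abs(d_freefall - (ro - ri)) < 1e-9)  # True: the integral is exactly ro - ri
```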
|
2015-05-29 18:28:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8896896839141846, "perplexity": 782.9340948885126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930259.97/warc/CC-MAIN-20150521113210-00182-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-12th-ed/chapter-4-atomic-structure-4-1-defining-the-atom-4-1-lesson-check-page-104/7
|
# Chapter 4 - Atomic Structure - 4.1 Defining the Atom - 4.1 Lesson Check - Page 104: 7
1.05 x $10^{-22}$ g Cu
#### Work Step by Step
Use the information given to create a conversion factor to find the mass of the atom.
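The conversion factor here is simply (molar mass) ÷ (Avogadro's number) = mass of one atom. A quick sketch (Python; the molar mass of copper, 63.55 g/mol, is standard data rather than something quoted in this excerpt):

```python
molar_mass_cu = 63.55  # g/mol, standard atomic weight of copper (assumed)
avogadro = 6.022e23    # atoms/mol

mass_one_atom = molar_mass_cu / avogadro
print(f"{mass_one_atom:.3e} g")  # 1.055e-22 g, i.e. 1.05 x 10^-22 g to 3 sig figs
```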
|
2018-10-21 13:24:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5465943217277527, "perplexity": 1469.4579382159372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514005.65/warc/CC-MAIN-20181021115035-20181021140535-00273.warc.gz"}
|
http://math.stackexchange.com/questions/102259/probability-greatest-roll-of-z-dice-exceeding-greatest-roll-of-h-dice
|
# Probability greatest roll of z dice exceeding greatest roll of h dice?
Let's assume there are two players in a dice game; zombie and hero.
The Zombie rolls z fair 6-sided dice.
The Hero rolls h fair 6-sided dice.
If the hero's greatest dice roll is larger than the zombie's greatest dice roll, the hero wins. Otherwise, the zombie wins.
How can I calculate the probability of the hero winning as a function of z and h (without just enumerating the answers)?
Example 1: z = 1, h = 2
Zombie rolls (4), hero rolls (1,5). Hero has a higher dice roll and wins.
Example 2: z = 2, h = 2
Zombie rolls (4,4), hero rolls (1,4). Hero does not have a higher dice roll and loses.
-
Let $Z$ denote the greatest dice roll of Zombie and $H$ the greatest dice roll of Hero. Assume Zombie's rolls and Hero's rolls are independent.
Then $\mathrm P(Z\leqslant n)=(n/6)^z$ for every $1\leqslant n\leqslant 6$ hence $\mathrm P(Z= n)=(n/6)^z-((n-1)/6)^z$. Likewise, $\mathrm P(H\leqslant n)=(n/6)^h$ hence $\mathrm P(H\gt n)=1-(n/6)^h$ for every $1\leqslant n\leqslant 6$. This yields $$\mathrm P(H\gt Z)=\sum_{n=1}^6\mathrm P(Z=n)\mathrm P(H\gt n)=\sum_{n=1}^6\left((n/6)^z-((n-1)/6)^z\right)\left(1-(n/6)^h\right),$$ that is, $$\mathrm P(H\gt Z)=1-\frac1{6^{z+h}}\sum_{n=1}^6(n^z-(n-1)^z)n^h.$$
The RHS of your final equation should begin with $1$, not $6^{−h}$. – Byron Schmuland Jan 25 '12 at 16:16 @Byron: Indeed it should. Thanks. – Did Jan 25 '12 at 16:38
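The closed form is easy to cross-check against brute-force enumeration of all equally likely outcomes. A sketch (Python; the function names are mine):

```python
from itertools import product

def p_hero_wins(z, h):
    """Closed form: P(H > Z) = 1 - 6^-(z+h) * sum_n (n^z - (n-1)^z) * n^h."""
    return 1 - sum((n**z - (n - 1)**z) * n**h for n in range(1, 7)) / 6**(z + h)

def p_hero_wins_brute(z, h):
    """Enumerate all 6^(z+h) equally likely roll combinations."""
    wins = total = 0
    for rolls in product(range(1, 7), repeat=z + h):
        total += 1
        if max(rolls[z:]) > max(rolls[:z]):  # hero's dice are the last h entries
            wins += 1
    return wins / total

for z, h in [(1, 1), (1, 2), (2, 2)]:
    assert abs(p_hero_wins(z, h) - p_hero_wins_brute(z, h)) < 1e-12

print(p_hero_wins(1, 2))  # 125/216 ~ 0.5787: hero with 2 dice vs zombie with 1
```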
|
2013-05-18 23:53:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7382633090019226, "perplexity": 288.35466174383424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382989/warc/CC-MAIN-20130516092622-00090-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://math.libretexts.org/Bookshelves/Pre-Algebra/Book%3A_Prealgebra_(OpenStax)/7%3A_The_Properties_of_Real_Numbers/7.2%3A_Commutative_and_Associative_Properties_(Part_1)
|
# 7.2: Commutative and Associative Properties (Part 1)
Skills to Develop
• Use the commutative and associative properties
• Evaluate expressions using the commutative and associative properties
• Simplify expressions using the commutative and associative properties
be prepared!
Before you get started, take this readiness quiz.
1. Simplify: 7y + 2 + y + 13. If you missed this problem, review Example 2.22.
2. Multiply: $$\frac{2}{3} \cdot 18$$. If you missed this problem, review Example 4.28.
3. Find the opposite of 15. If you missed this problem, review Example 3.3.
In the next few sections, we will take a look at the properties of real numbers. Many of these properties will describe things you already know, but it will help to give names to the properties and define them formally. This way we’ll be able to refer to them and use them as we solve equations in the next chapter.
## Use the Commutative and Associative Properties
Think about adding 5 and 3 in either order.
$$\begin{split} 5 &+ 3 \qquad 3 + 5 \\ &\; 8 \qquad \qquad 8 \end{split}$$
The results are the same. 5 + 3 = 3 + 5
Notice, the order in which we add does not matter. The same is true when multiplying 5 and 3.
$$\begin{split} 5 &\cdot 3 \qquad \; 3 \cdot 5 \\ & 15 \qquad \quad 15 \end{split}$$
Again, the results are the same! 5 • 3 = 3 • 5. The order in which we multiply does not matter. These examples illustrate the commutative properties of addition and multiplication.
Definition: Commutative Properties
Commutative Property of Addition: if a and b are real numbers, then a + b = b + a
Commutative Property of Multiplication: if a and b are real numbers, then a • b = b • a
The commutative properties have to do with order. If you change the order of the numbers when adding or multiplying, the result is the same.
Example 7.5:
Use the commutative properties to rewrite the following expressions: (a) −1 + 3 = _____ (b) 4 • 9 = _____
Solution
(a) −1 + 3 = _____
Use the commutative property of addition to change the order. −1 + 3 = 3 + (−1)
(b) 4 • 9 = _____
Use the commutative property of multiplication to change the order. 4 • 9 = 9 • 4
Exercise 7.9:
Use the commutative properties to rewrite the following expressions: (a) −4 + 7 = _____ (b) 6 • 12 = _____
Exercise 7.10:
Use the commutative properties to rewrite the following expressions: (a) 14 + (-2) = _____ (b) 3(-5) = _____
What about subtraction? Does order matter when we subtract numbers? Does 7 − 3 give the same result as 3 − 7?
$$\begin{split} 7 &- 3 \qquad 3 - 7 \\ &\; 4 \qquad \quad -4 \\ & \quad 4 \neq -4 \end{split}$$
The results are not the same. 7 − 3 ≠ 3 − 7
Since changing the order of the subtraction did not give the same result, we can say that subtraction is not commutative. Let’s see what happens when we divide two numbers. Is division commutative?
$$\begin{split} 12 &\div 4 \qquad 4 \div 12 \\ & \frac{12}{4} \qquad \quad \frac{4}{12} \\ &\; 3 \qquad \qquad \frac{1}{3} \\ &\quad \; 3 \neq \frac{1}{3} \end{split}$$
The results are not the same. So 12 ÷ 4 ≠ 4 ÷ 12
Since changing the order of the division did not give the same result, division is not commutative.
Addition and multiplication are commutative. Subtraction and division are not commutative.
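These facts can be verified mechanically. A tiny sketch (Python, added here purely for illustration):

```python
# Addition and multiplication are commutative: order does not matter.
assert 5 + 3 == 3 + 5
assert 5 * 3 == 3 * 5

# Subtraction and division are not commutative.
assert 7 - 3 != 3 - 7    # 4 vs -4
assert 12 / 4 != 4 / 12  # 3 vs 1/3
print("commutativity checks passed")
```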
Suppose you were asked to simplify this expression.
$$7 + 8 + 2$$
Some people would think 7 + 8 is 15 and then 15 + 2 is 17. Others might start with 8 + 2 makes 10 and then 7 + 10 makes 17.
Both ways give the same result, as shown in Figure 7.3. (Remember that parentheses are grouping symbols that indicate which operations should be done first.)
Figure 7.3
When adding three numbers, changing the grouping of the numbers does not change the result. This is known as the Associative Property of Addition.
The same principle holds true for multiplication as well. Suppose we want to find the value of the following expression:
$$5 \cdot \frac{1}{3} \cdot 3$$
Changing the grouping of the numbers gives the same result, as shown in Figure 7.4.
Figure 7.4
When multiplying three numbers, changing the grouping of the numbers does not change the result. This is known as the Associative Property of Multiplication.
If we multiply three numbers, changing the grouping does not affect the product. You probably know this, but the terminology may be new to you. These examples illustrate the Associative Properties.
Definition: Associative Properties
Associative Property of Addition: if a, b, and c are real numbers, then (a + b) + c = a + (b + c)
Associative Property of Multiplication: if a, b, and c are real numbers, then (a • b) • c = a • (b • c)
Example 7.6:
Use the associative properties to rewrite the following: (a) (3 + 0.6) + 0.4 = __________ (b) $$\left(−4 \cdot \dfrac{2}{5}\right) \cdot 15$$ = __________
Solution
(a) (3 + 0.6) + 0.4 = __________
Change the grouping. (3 + 0.6) + 0.4 = 3 + (0.6 + 0.4)
Notice that 0.6 + 0.4 is 1, so the addition will be easier if we group as shown on the right.
(b) $$\left(−4 \cdot \dfrac{2}{5}\right) \cdot 15$$ = __________
Change the grouping. $$\left(-4 \cdot \dfrac{2}{5}\right) \cdot 15 = -4 \cdot \left(\dfrac{2}{5} \cdot 15\right)$$
Notice that $$\frac{2}{5} \cdot 15$$ is 6. The multiplication will be easier if we group as shown on the right.
Exercise 7.11:
Use the associative properties to rewrite the following: (a) (1 + 0.7) + 0.3 = __________ (b) (−9 • 8) • $$\frac{3}{4}$$ = __________
Exercise 7.12:
Use the associative properties to rewrite the following: (a) (4 + 0.6) + 0.4 = __________ (b) (−2 • 12) • $$\frac{5}{6}$$ = __________
Besides using the associative properties to make calculations easier, we will often use it to simplify expressions with variables.
Example 7.7:
Use the Associative Property of Multiplication to simplify: 6(3x).
Solution
Change the grouping. (6 • 3)x Multiply in the parentheses. 18x
Notice that we can multiply 6 • 3, but we could not multiply 3 • x without having a value for x.
Exercise 7.13:
Use the Associative Property of Multiplication to simplify the given expression: 8(4x).
Exercise 7.14:
Use the Associative Property of Multiplication to simplify the given expression: −9(7y).
## Evaluate Expressions using the Commutative and Associative Properties
The commutative and associative properties can make it easier to evaluate some algebraic expressions. Since order does not matter when adding or multiplying three or more terms, we can rearrange and re-group terms to make our work easier, as the next several examples illustrate.
Example 7.8:
Evaluate each expression when x = $$\frac{7}{8}$$. (a) x + 0.37 + (− x) (b) x + (− x) + 0.37
Solution
(a) x + 0.37 + (− x)
Substitute $$\frac{7}{8}$$ for x. $$\textcolor{red}{\frac{7}{8}} + 0.37 + \left(- \textcolor{red}{\dfrac{7}{8}}\right)$$ Convert fractions to decimals. 0.875 + 0.37 + (-0.875) Add left to right. 1.245 - 0.875 Subtract. 0.37
(b) x + (− x) + 0.37
Substitute $$\frac{7}{8}$$ for x. $$\textcolor{red}{\frac{7}{8}} + \left(- \textcolor{red}{\dfrac{7}{8}}\right) + 0.37$$ Add opposites first. 0.37
What was the difference between part (a) and part (b)? Only the order changed. By the Commutative Property of Addition, x + 0.37 + (− x) = x + (− x) + 0.37. But wasn’t part (b) much easier?
Exercise 7.15:
Evaluate each expression when y = $$\frac{3}{8}$$: (a) y + 0.84 + (− y) (b) y + (− y) + 0.84.
Exercise 7.16:
Evaluate each expression when f = $$\frac{17}{20}$$: (a) f + 0.975 + (− f) (b) f + (− f) + 0.975.
Let’s do one more, this time with multiplication.
Example 7.9:
Evaluate each expression when n = 17. (a) $$\frac{4}{3} \left(\dfrac{3}{4} n\right)$$ (b) $$\left(\dfrac{4}{3} \cdot \dfrac{3}{4}\right) n$$
Solution
(a) $$\frac{4}{3} \left(\dfrac{3}{4} n\right)$$
Substitute 17 for n. $$\frac{4}{3} \left(\dfrac{3}{4} \cdot \textcolor{red}{17} \right)$$ Multiply in the parentheses first. $$\frac{4}{3} \left(\dfrac{51}{4}\right)$$ Multiply again. $$17$$
(b) $$\left(\dfrac{4}{3} \cdot \dfrac{3}{4}\right) n$$
Substitute 17 for n. $$\left(\dfrac{4}{3} \cdot \dfrac{3}{4}\right) \textcolor{red}{\cdot 17}$$ Multiply. The product of reciprocals is 1. $$(1) \cdot 17$$ Multiply again. $$17$$
What was the difference between part (a) and part (b) here? Only the grouping changed. By the Associative Property of Multiplication, $$\frac{4}{3} \left(\dfrac{3}{4} n\right) = \left(\dfrac{4}{3} \cdot \dfrac{3}{4}\right) n$$. By carefully choosing how to group the factors, we can make the work easier.
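The shortcut in Example 7.9, grouping the reciprocals first, can be checked exactly with Python's `fractions` module (a sketch added here; it is not part of the original lesson):

```python
from fractions import Fraction

n = 17
four_thirds = Fraction(4, 3)
three_fourths = Fraction(3, 4)

part_a = four_thirds * (three_fourths * n)  # multiply inside the parentheses first
part_b = (four_thirds * three_fourths) * n  # group the reciprocals: their product is 1

assert part_a == part_b == 17
print(part_a, part_b)  # 17 17
```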
Exercise 7.17:
Evaluate each expression when p = 24. (a) $$\frac{5}{9} \left(\dfrac{9}{5} p\right)$$ (b) $$\left(\dfrac{5}{9} \cdot \dfrac{9}{5}\right) p$$
Exercise 7.18:
Evaluate each expression when q = 15. (a) $$\frac{7}{11} \left(\dfrac{11}{7} q\right)$$ (b) $$\left(\dfrac{7}{11} \cdot \dfrac{11}{7}\right) q$$
|
2019-08-20 10:51:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7406117916107178, "perplexity": 713.1954596312061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315321.52/warc/CC-MAIN-20190820092326-20190820114326-00528.warc.gz"}
|
https://www.nextgurukul.in/wiki/concept/cbse/class-9/science/structure-of-the-atom/atomic-models/3958031
|
Notes On Atomic Models - CBSE Class 9 Science
When two objects are rubbed together they get electrically charged. The origin of this charge can be explained if it is assumed that the constituent particles of matter, the "atoms", are divisible and consist of charged particles.

Sub-atomic particles: An atom is defined as the smallest particle of an element that can exist independently and retain all its chemical properties. According to modern atomic theory, atoms are composed of particles; the three main sub-atomic particles are the proton, the neutron and the electron.

Cathode rays - discovery of the electron: In 1878 William Crookes carried out discharge tube experiments and discovered new radiations, which he called cathode rays because they travel from the cathode towards the anode. Later J.J. Thomson studied the characteristics of cathode rays and concluded that they are negatively charged particles, now called electrons. The name "electron" was given by George Johnstone Stoney.

Canal rays or anode rays: In 1886, E. Goldstein carried out discharge tube experiments and discovered new radiations, which he called canal rays. These rays were made up of positively charged particles and led to the discovery of the proton.

Properties of the electron, proton and neutron:

| Parameter | Electron | Proton | Neutron |
|---|---|---|---|
| Position | Outside the nucleus, revolving in orbits | Inside the nucleus | Inside the nucleus |
| Mass | 9.108 × 10⁻²⁸ g | 1.67 × 10⁻²⁴ g | 1.67 × 10⁻²⁴ g |
| Charge | −1.602 × 10⁻¹⁹ coulombs | +1.602 × 10⁻¹⁹ coulombs | Zero |
| Representation | e⁻ | p⁺ | n |

Atomic models: Atomic models proposed by scientists show the arrangement of the various sub-atomic particles in an atom.

Thomson's atomic model: J.J. Thomson was the first to put forward a model to explain the structure of an atom. Thomson's atomic model is also called the watermelon model or Christmas (plum) pudding model: he compared the electrons to the raisins in a spherical Christmas pudding and to the seeds in a watermelon.

Postulates of Thomson's model:
• An atom consists of a positively charged sphere, with electrons embedded within the sphere.
• An atom is electrically neutral, as the positive and negative charges within it are equal.

Drawbacks of Thomson's model:
• It could not explain the stability of the atom, i.e. how the positive charge in the atom holds the negatively charged electrons.
• It could not explain the position of the nucleus in the atom.
• It could not explain the scattering of alpha particles.

Rutherford's experiment: Thomson's student, Ernest Rutherford, conducted an experiment using gold foil which disproved Thomson's model. To study the structure of the atom, Rutherford performed a thin gold foil scattering experiment: he made a narrow beam of alpha particles fall on the gold foil.

Observations from the alpha-ray scattering experiment:
• Most of the alpha particles passed straight through the gold foil without getting deflected.
• A small fraction of the alpha particles were deflected through small angles.
• A few alpha particles bounced back.

Based on these observations, Rutherford proposed the nuclear model of the atom.

Postulates of Rutherford's nuclear model:
• Positive charge is concentrated in the centre of the atom, called the nucleus.
• Electrons revolve around the nucleus in circular paths called orbits.
• The nucleus is much smaller in size than the atom.

Drawbacks of Rutherford's model:
• The orbital revolution of the electron is not expected to be stable. According to Rutherford's model, the electrons, while moving in their orbits, would give up energy; this would make them gradually slow down and spiral into the nucleus. Ultimately, the atom would collapse, but in reality the atom is stable.

Bohr's model: Keeping the shortcomings of Rutherford's model in mind, Niels Bohr framed his postulates about the structure of the atom as follows.

Postulates of Bohr's model:
• Electrons revolve in discrete orbits called shells.
• Electrons revolve in their orbits without radiating energy. Within a particular orbit, the energy of an electron is constant; this is why the orbits are called stationary orbits or stationary shells.
• Orbits or shells are also known as energy levels.
• These orbits or shells are represented by the letters K, L, M, N, … or the numbers n = 1, 2, 3, 4, ….

Drawbacks of Bohr's model:
• Bohr's model did not apply to elements such as helium and lithium, or to higher elements containing more than one electron.
• The model was also unable to explain the structure of chemical bonds.

The discovery of the neutron: Consider the element helium (⁴₂He). Helium has two protons and two electrons, yet its mass was found to be four times that of hydrogen. Similarly, the masses of some other elements were found to be double or more than double the number of protons. This problem was solved by the discovery of another particle, the neutron, by James Chadwick in 1932, by bombarding beryllium with alpha particles:

⁹₄Be + ⁴₂He → ¹²₆C + ¹₀n

The neutron is a neutral particle with a mass equal to that of a proton, present in the nucleus along with the protons.
|
2023-02-08 03:53:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42083194851875305, "perplexity": 2114.8634432591775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500671.13/warc/CC-MAIN-20230208024856-20230208054856-00571.warc.gz"}
|
https://mattermodeling.stackexchange.com/questions/9/in-molecular-mechanics-how-are-van-der-waals-forces-modelled
|
# In molecular mechanics, how are van der Waals forces modelled?
In terms of energy, how are van der Waals forces modelled (are there formulas/laws that govern these)?
• There are a plethora of ways. LJ and EXP6 are merely two of the most cliche ways. They are both very simple and easy to use, but neither is all that great. LJ is too steep at close distances and EXP6 is unphysical at small distances i.e., swings back around and plunges to negative $\infty$. – Charlie Crown May 6 at 17:41
First I will try to directly answer this question:
In terms of energy, how are van der Waals forces modelled (are there formulas that govern these)?
The most common way to model the potential energy between two ground state (S-state) atoms that are far apart, is by the London dispersion formula:
$$V(r) = -\frac{C_6}{r^6}$$
where $$C_6$$ depends on the dipole polarizabilities ($$\alpha_1$$ and $$\alpha_2$$) of the two atoms. One decent approximation is London's formula:
$$C_6 \approx \frac{3}{2}\frac{I_1I_2}{I_1+I_2}\alpha_1\alpha_2$$ where $$I_1$$ and $$I_2$$ are the first ionization potentials of the atoms (the closely related Slater-Kirkwood formula instead uses effective electron numbers in place of the ionization potentials).
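As a sanity check, plugging textbook hydrogen-atom values into the estimate above (α = 4.5 a.u., I = 0.5 hartree) lands within roughly 20% of the accurate H–H value of about 6.50 a.u. - a quick sketch (the function name is mine):

```python
def c6_estimate(alpha1, alpha2, i1, i2):
    """Estimate the C6 dispersion coefficient (atomic units) from
    dipole polarizabilities and first ionization potentials."""
    return 1.5 * (i1 * i2) / (i1 + i2) * alpha1 * alpha2

# Two ground-state hydrogen atoms: alpha = 4.5 a.u., I = 0.5 hartree
c6_hh = c6_estimate(4.5, 4.5, 0.5, 0.5)
print(c6_hh)  # ~7.59 a.u.; the accurate H-H value is about 6.50 a.u.
```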
However the London dispersion potential is not the only one:
• The Casimir-Polder potential is used in the relativistic regime, and it's often closer to $$C_7/r^7$$
• The resonance dipole-dipole potential: $$C_3/r^3$$ used between S-state and P-state atoms
• If one particle is charged you can get: $$C_4/r^4$$ as in Eq. 2 of this paper of mine.
In molecular mechanics, how are van der Waals forces modelled?
Most often the $$C_6/r^6$$ formula is used, which is reasonable unless dealing with ions, or excited states, or extremely long-range (relativistic) situations. However this formula is for two particles that are very far apart, and we need a force to go the other way when particles are too close to each other, and that is where the 1924 Lennard-Jones potential enters (it has already been written by AloneProgrammer, but in a different way):
$$V(r) = \frac{C_{12}}{r^{12}}-\frac{C_6}{r^6}$$
While the $$r^6$$ has rigorous theoretical foundations, the $$r^{12}$$ does not, but in molecular mechanics calculations, this function might need to be evaluated billions of times, so it is convenient that once you've calculated temp=r^6 in your computer program, you can just do temp2=temp*temp to get $$r^{12}$$. This might sound crazy now, but the earliest computers were so slow that being able to reuse the calculation $$r^6$$ in order to take a short cut to calculate $$r^{12}$$, actually made a big difference, and the most high-performance codes, even today, still use this short-cut.
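The reuse trick reads, in a minimal sketch (function and variable names are mine):

```python
def lj_energy(r, c6, c12):
    """Lennard-Jones energy, reusing r**-6 to get r**-12 cheaply."""
    inv_r6 = 1.0 / r**6        # computed once...
    inv_r12 = inv_r6 * inv_r6  # ...and squared, instead of a second power call
    return c12 * inv_r12 - c6 * inv_r6

# At the minimum r = (2*C12/C6)**(1/6), the well depth is -C6**2/(4*C12)
r_min = (2.0 * 1.0 / 1.0) ** (1.0 / 6.0)
print(lj_energy(r_min, 1.0, 1.0))  # -0.25
```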
However, now we have to address the comment of Charlie Crown:
LJ and EXP6 are merely two of the most cliche ways. They are both very simple and easy to use, but neither is all that great. LJ is too steep at close distances and EXP6 is unphysical at small distances
This is exactly what I told you: $$C_6/r^6$$ is only valid when the atoms are very far apart, and $$C_{12}/r^{12}$$ has no physical foundation at all (it is simply convenient since $$(r^6)^2=r^{12}$$).
AloneProgrammer gave the Morse potential (from 1929) which is actually really good when the atoms are closer together:
$$V(r) = D_e\left(1 - e^{-\beta(r-r_e)}\right)^2$$
where $$r_e$$ is the inter-atomic distance at equilibrium, $$D_e$$ is the "Depth at equilibrium" and $$\beta$$ controls the shape. While this is good at short-range, it is bad at long-range, because if you work out the asymptotic behaviour as $$r\rightarrow \infty$$ you will see that it decays exponentially, when in fact we know it should decay with an inverse-power (proportional to $$1/r^6$$), and exponentials behave very differently from inverse-power functions.
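The exponential-vs-inverse-power mismatch is easy to verify numerically - a sketch with arbitrary parameters ($$\beta = 1$$, $$r_e = 1$$, $$D_e = 1$$):

```python
import math

def morse(r, de=1.0, beta=1.0, re=1.0):
    """Morse potential; approaches de exponentially as r grows."""
    return de * (1.0 - math.exp(-beta * (r - re))) ** 2

# Distance of the tail from the asymptote at two separations
tail_near = 1.0 - morse(6.0)   # de - V at r = re + 5
tail_far = 1.0 - morse(11.0)   # de - V at r = re + 10
print(tail_far / tail_near)    # ~e**-5: exponential decay
print((6.0 / 11.0) ** 6)       # ~0.026: what a 1/r^6 tail would give instead
```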
The solution is the Morse/long-range function, or MLR, which was introduced by Bob LeRoy and myself in 2009.
It looks exactly like the Morse potential when $$r$$ is close to $$r_e$$ (when the system is close to equilibrium). But if you calculate the form of the function as $$\lim\limits_{r\rightarrow \infty}$$, you literally get $$V(r) \rightarrow -u(r)$$ where $$u(r)$$ can be anything you want: $$C_6/r^6$$, $$C_3/r^3$$, $$C_4/r^4$$, etc.
Therefore the MLR potential is Morse-like near equilibrium, and LJ-like when far from equilibrium, which is exactly what Charlie Crown said was problematic if you use pure Morse or pure Lennard-Jones.
The MLR potential isn't used in mainstream molecular mechanics calculations, because evaluating the function would be slower than simply using the $$(r^6)^2=r^{12}$$ trick (which makes calculations very fast when using the LJ potential). The MLR potential is more accurate though, and solves the problem of the LJ being wrong at equilibrium and the Morse being wrong far from equilibrium. Often there are so many approximations going on in molecular mechanics that it doesn't hurt to use the LJ potential, which both other answers mentioned already. The MLR tends to be used for high-precision spectroscopy more than for molecular mechanics, but it's an option if one wants more accurate results.
You are looking for Lennard-Jones potential. Basically, the interatomic interaction is modeled by this formula:
$$U(r) = 4 \epsilon \Bigg [ \Big ( \frac{\sigma}{r} \Big )^{12} - \Big ( \frac{\sigma}{r} \Big )^{6} \Bigg ]$$
Particularly the term $$r^{-6}$$ in the above formula describes long-range attraction force based on van der Waals theory.
Update:
I'll elaborate a bit more about my answer here. As Lucas said there is no universal model for capturing the behavior of van der Waals forces and you could generalize the Lennard-Jones potential as:
$$U(r) = 4 \epsilon \Bigg [ \Big ( \frac{\sigma}{r} \Big )^{m} - \Big ( \frac{\sigma}{r} \Big )^{n} \Bigg ]$$
As you can see, because molecular dynamics boxes are always finite in length, $$U(r) \neq 0$$ no matter how big your box is or how far apart your molecules are. This causes a problem when you apply periodic boundary conditions and the image of an atom interacts with the atom itself, which is obviously incorrect. You could modify this Lennard-Jones potential and define a cutoff value that reduces $$U(r)$$ to zero for $$r > r_{c}$$, where $$r_{c}$$ is the cutoff radius, but this still introduces trouble because the force (the first derivative of the potential) becomes discontinuous. Another common model used to capture van der Waals forces is the so-called soft potential, defined as:
$$U(r) = D_{e} \Big( 1 - \exp{(-a(r-r_{e}))} \Big)^{2}$$
This potential comes from the solution of a quantum harmonic oscillator. $$D_{e}$$ is the depth of the potential well, and $$a$$ controls its width.
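A minimal sketch of the cutoff fix discussed above - truncating and shifting the LJ potential so the energy is continuous at $$r_c$$ (names are mine; note the force still has a small jump at the cutoff, which force-shifted variants remove):

```python
def lj(r, eps, sigma):
    """Plain 12-6 Lennard-Jones potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def lj_shifted(r, eps, sigma, rc):
    """Truncate at rc and shift so U goes continuously to zero at the cutoff.
    The force still has a small jump at rc; force-shifted forms fix that."""
    if r >= rc:
        return 0.0
    return lj(r, eps, sigma) - lj(rc, eps, sigma)
```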
• I wonder which reference calls this "soft-potential" ? I only knew it by the Morse potential. I also wonder, why do you say it comes from the solution of a quantum harmonic oscillator? It is a very anharmonic potential. – Nike Dattani May 11 at 0:41
In addition to Alone Programmer's answer: the Lennard-Jones potential (LJ 12-6) is the standard, but it is not unique; in some cases the exponent of 6 is changed to 8 to better simulate hydrogen bonds. There is also the Buckingham potential, where the repulsive part (the $$r^{12}$$ term) is replaced by an exponential term, while the attractive long-range term ($$r^{6}$$) stays the same.
LJ 12-6 fits the potential energy surface of a noble gas dimer very well. The epsilon term is the maximum depth of the curve, and sigma is the (short) distance at which the potential energy is zero. When the LJ potential is used to simulate the interaction of different atomic species, there is no unique rule for determining the sigma and epsilon terms; geometric and arithmetic averages of the same-species values for each atom are commonly used.
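Those averages are usually the Lorentz-Berthelot combining rules: arithmetic mean for sigma, geometric mean for epsilon - a sketch (names mine):

```python
import math

def lorentz_berthelot(sigma_i, eps_i, sigma_j, eps_j):
    """Lorentz-Berthelot combining rules: arithmetic mean for sigma,
    geometric mean for epsilon, built from same-species parameters."""
    sigma_ij = 0.5 * (sigma_i + sigma_j)
    eps_ij = math.sqrt(eps_i * eps_j)
    return sigma_ij, eps_ij

# e.g. mixing two hypothetical species
print(lorentz_berthelot(3.0, 0.2, 3.4, 0.3))  # (3.2, ~0.245)
```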
https://mathhelpboards.com/threads/answered-latex-package-request.3569/
#### dwsmith
##### Well-known member
Can we have the package wasysym added?
#### Jameson
Staff member
Re: LaTeX package request
It appears we cannot. What symbols are you looking for within that package?
#### dwsmith
##### Well-known member
Re: LaTeX package request
It appears we cannot. What symbols are you looking for within that package?
\planet where planet is any planet name
#### dwsmith
##### Well-known member
Re: LaTeX package request
Testing
Code:
\MakeUppercase{\text{\romannumeral 1}}
$\MakeUppercase{\text{\romannumeral 1}}$
Why doesn't this work on any forum?
Can we have roman numerals added?
#### Jameson
Staff member
Re: LaTeX package request
Testing
Code:
\MakeUppercase{\text{\romannumeral 1}}
$\MakeUppercase{\text{\romannumeral 1}}$
Why doesn't this work on any forum?
Can we have roman numerals added?
I'll ask the Mathjax support group about it but you're wrapping it in the \text{} tag, so why would it render a roman numeral?
#### dwsmith
##### Well-known member
Re: LaTeX package request
I'll ask the Mathjax support group about it but you're wrapping it in the \text{} tag, so why would it render a roman numeral?
View attachment 336 Homework 6.pdf
I don't know why it would but it does. Look here.
#### Jameson
Staff member
Re: LaTeX package request
#### Klaas van Aarsen
##### MHB Seeker
Staff member
Re: LaTeX package request
What is the reason to want roman numerals?
It's easy enough to create them as text.
#### dwsmith
##### Well-known member
Re: LaTeX package request
What is the reason to want roman numerals?
It's easy enough to create them as text.
When I type up questions, if the convention is to use roman numerals for identifying items, I will use them.
If you see the pdf I uploaded, the eigenvalues are denoted sigma subscript roman numerals in Continuum Mechanics.
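For reference, uppercase roman numerals can be produced in plain LaTeX without any extra package - a sketch, where the \RN macro name is just an illustration:

```latex
% \romannumeral expands its number to lowercase roman digits;
% \MakeUppercase then uppercases the expanded result.
\newcommand{\RN}[1]{\MakeUppercase{\romannumeral #1}}

% usage in text: \RN{4} prints IV
% usage in math: $\sigma_{\text{\RN{2}}}$ prints sigma with subscript II
```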
https://www.josehu.com/technical/2021/01/01/ssd-to-optane.html
# Modern Storage Hierarchy: From NAND SSD to 3D XPoint (Optane) PM
01 Jan 2021 - Guanzhou Hu
As the miniaturization and cell density of traditional 2D NAND SSDs reach a manufacturing bottleneck, 3D NAND SSDs have come onto the market. They push block capacity forward a little, but they suffer from more severe write amplification and are more expensive, so they are not a perfect solution. Intel 3D XPoint (with the official brand name Optane), a hybrid design sitting in between DRAM and NAND flash storage, adds a new possibility to the storage hierarchy.
## Non-Volatile Memory (NVM) & The Modern Storage “Hierarchy”
The name non-volatile memory (NVM) may refer to different ranges of things under different contexts.
1. Broadly speaking, NVM refers to all kinds of persistent storage that maintains information “even after having been power cycled”1;
2. Narrowly speaking, NVM refers to semiconductor memory chips without mechanical structures, including flash memory (such as flash chips, SSD) and ROM;
3. Recently, NVM may refer to memory chips that are both persistent and byte-addressable. One example is Intel 3D XPoint. This category is often referred to as persistent memory (PM), NVRAM, or NVDIMM.
In the context of storage systems research, when people say NVM, they often mean the third definition. Designing storage policies and building file systems for novel NVM hardware is currently a hot topic. The storage “hierarchy” has now become an entangled pyramid where different types of devices have complex & overlapping performance characteristics. Strictly speaking, it is not a hierarchy any more.
## 2D NAND Flash Architecture
A traditional 2D NAND SSD consists of a bunch of NAND flash packages together with an on-device controller and an on-device DRAM cache.
Figure by Emmanuel.
A NAND flash package is a set of planar NAND blocks, each having the following architecture:
Figure from this post.
Every intersection of a word line and a bit line is a cell. A word line controls a page (for example, 4KB), and a bit line connects a string. A block can contain, for example, 128 pages. Initially, all cells are uncharged and represent a “1”. When a cell is broken down and charged, it represents a “0”.
The smallest unit of reading is a page. Reading a page follows the procedure:
1. Set the selected page’s control gates to 0V;
2. Pre-charge all the bit lines, then wait for the cells to naturally leak voltage;
3. Cells charged with negative charge will have a weaker leakage current:
• $$\rightarrow$$ will have higher voltage after a short period of time
• $$\rightarrow V_{ref} < V_{bit}$$ after a short period of time
• $$\rightarrow$$ reads logical “0”
4. Accordingly, uncharged cells will have $$V_{ref} > V_{bit}$$, thus reads logical “1”.
The smallest unit of writing is a block, unless writing to contiguous free pages at the end of a block. Writing a block follows the procedure:
1. Leak all the negative charges to erase the whole block to “1”s;
2. For each page in order, select its page line, giving high voltage at control gates:
• for cells that need to be written “0”, ground their bit lines
• so that the cell will be broken-down and charged (i.e., written “0”).
Figures from the book《大话存储》Chapter 3, by 张冬, 2015.
The fact that NAND SSDs must erase a whole block before updating it is a significant drawback. NAND SSD controllers must equip themselves with the following two functionalities in order to be useful and robust2:
• Wear leveling: on updates, we cannot simply read the whole block into the controller, erase the whole block, and then write back the updated block, because that would involve too many break-downs of cells. A cell has a very limited life of break-downs and cannot afford one re-charge per update to its residing block. Thus, SSDs do redirect-on-write (RoW) - append the updated content somewhere cold and redirect subsequent reads to the new location, so that all blocks wear evenly. However, what if the new block is not empty and contains data? That data needs to be collected and moved somewhere else, resulting in extra writes.
• Garbage collection: if some pages of a block are invalidated (say, the file occupying them is deleted, or redirected) while other pages are still valid, we cannot simply treat the invalidated pages as free space and do new writes to them. Thus, those pages are logically freed but cannot physically serve as free space. They become “garbage”. SSDs do periodic background garbage collection: they gather valid pages from several garbaged blocks, combine them and write them to a new block, and then erase (free) the garbaged blocks to reclaim free space. These are extra writes as well.
When a write arrives at the drive, wear leveling and garbage collection cause a lot more data (than the size of the original write) to be actually moved and written. This is the notorious effect called write amplification. Typical write-amplification factor (WAF) on NAND SSDs ranges from 5x - 20x. This makes NAND SSDs perform dramatically poor at workloads involving many random writes.
Well…, is it actually? Maybe we should avoid using terms like “random writes” as they are designed for HDDs! Check out this paper.
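Whatever the terminology, the write-amplification bookkeeping itself is simple arithmetic - a sketch with illustrative numbers:

```python
def write_amplification_factor(host_bytes, flash_bytes):
    """WAF = bytes physically programmed to flash per byte of host writes."""
    return flash_bytes / host_bytes

# Illustrative GC scenario: a single 4 KB host write lands in a block
# whose garbage collection must first relocate 11 still-valid 4 KB pages,
# so 12 pages get programmed in total.
page = 4096
waf = write_amplification_factor(1 * page, 12 * page)
print(waf)  # 12.0 - within the 5x-20x range quoted above
```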
Figure from this post.
NOR flash has independent bit lines and is byte-addressable, but is much larger in size and not very practical.
## 3D NAND Flash Design
As the manufacturing of 2D NAND SSDs reaches its limit, manufacturers have started to jump beyond the pure planar design and explore a third dimension.
• From SLC to MLC, TLC, & QLC: instead of single-level cells that can only represent “0”/”1”, we further divide the voltage level and thus every cell can represent 4 levels (2 bits, multi-level cell, MLC), 8 levels (3 bits, triple-level cell, TLC), or even 16 levels (4 bits, quad-level cell, QLC). The downside is that the cells become less robust and have significantly shorter life cycle.
• V-NAND flash: physically vertically stack the planar flash blocks.
These 3D designs give larger capacity to each flash block. However, that also makes write-amplification worse. Manufacturing cost also becomes a lot higher. It is believed that 3D NAND SSDs will not fully replace 2D NAND SSDs3.
## 3D XPoint (Optane) Technology
Intel proposed a new design of solid-state storage hardware called 3D XPoint in 2015 and released it to market under the brand name Optane in 20174. Through several years of development, this design yields the fastest SSDs available on the market, and it is often thought of as the next-generation state-of-the-art persistent storage hardware.
This technology is a successful example of phase-change memory (PCM) hardware - one of the most promising directions towards building non-volatile RAM. As the name 3D XPoint describes, memory cells are put at cross points of a 3D grid. It truly makes persistent storage “3-dimensional”.
The most appealing property of 3D XPoint is that it is persistent while also being byte-addressable. This means that it sits in between current NAND SSDs and DRAM volatile memory on the storage hierarchy (it has smaller capacity than NAND flash but larger capacity than DRAM; it is slower than DRAM but faster than NAND flash; and it is durable). It can be treated as either, depending on the workload.
The name sometimes refers to Optane DIMM / Optane SSD products specifically. Optane DIMMs connect to the memory bus and are directly controlled by the processor cache system. Details about their internals can be found in this recent paper 5. Optane SSD products use the same PCM media technology, but expose a traditional NVMe SSD interface.
## Optane DIMM Performance & Consistency
As this paper pointed out, the current state of Optane DIMM exposes some interesting performance characteristics that lie between those of SSDs and DRAM:
• Latency performance approaches DRAM, but has larger variation;
• Though the whole device appears to be byte-addressable, small random accesses matter - they will bring down performance due to the 256B actual media granularity;
• DRAM is serial, SSDs have high internal parallelism across packages, and Optane DIMM sits in between - it has limited degree of internal parallelism and hence degraded performance under high concurrency;
• Ordering of temporal accesses to the same memory address is important due to consistency issues.
Figure from the Yang, et al. paper, Figure 4.
Whenever there is caching across volatile/non-volatile media, there are consistency issues. Imagine two user requests: ① appending a new element to a data structure, followed by ② incrementing a counter in data structure header. It is possible that both requests hit in cache and, at some time later, the update ② gets evicted earlier than ①. If the system crashes at this point, the state is left inconsistent. After recovery, the user may check the header counter and may believe that the newly appended index contains valid data, while it is not - that data has not yet been persisted on storage media - so the user may read out some garbage.
For traditional disk-based FS, the volatile cache is the in-memory buffer cache, and the persistent storage is the disk drive. We do journaling with fsync()’s to maintain the ordering of requests. For NVDIMM, the volatile cache is the CPU cache, and the persistent storage is the NVDIMM chip on memory bus. NVM systems do journaling with mfence & clflush instructions to maintain such ordering. Some ad-hoc data structures (e.g., B-trees) running over NVM may directly deploy their own ordering constraints w/o the help of a system layer.
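The same ordering discipline can be sketched for the disk-based case with fsync() barriers - persist the data before the header that makes it visible (the file layout and names here are invented for illustration):

```python
import os
import struct
import tempfile

HEADER = struct.Struct("<Q")  # 8-byte little-endian element count

def append_ordered(path, payload):
    """Append payload, fsync, THEN bump the header count, fsync again.
    A crash between the two barriers leaves the old count, never a
    count that points at unpersisted data."""
    with open(path, "r+b") as f:
        (count,) = HEADER.unpack(f.read(HEADER.size))
        f.seek(0, os.SEEK_END)
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())          # barrier 1: data is durable
        f.seek(0)
        f.write(HEADER.pack(count + 1))
        f.flush()
        os.fsync(f.fileno())          # barrier 2: header is durable

# demo on a throwaway file: header starts at count = 0
path = os.path.join(tempfile.mkdtemp(), "log.bin")
with open(path, "wb") as f:
    f.write(HEADER.pack(0))
append_ordered(path, b"hello")
append_ordered(path, b"world")
with open(path, "rb") as f:
    raw = f.read()
final_count = HEADER.unpack(raw[:8])[0]
body = raw[8:]
print(final_count, body)
```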
Designing storage systems and building file systems for NVM is currently a very hot topic in storage systems research. This is a good example of how an evolution in hardware leads system software research. I believe this technology adds a new possibility in building storage systems and will make future storage systems design more flexible and more efficient.
## References
• My reading record of《大话存储》: HERE
• My blog post on I/O interfaces: HERE
https://emanote.srid.ca/demo/orgmode
# Org Mode
Note: Emanote documentation is a work-in-progress
Emanote provides first-class support for Markdown. But it also supports secondary formats (albeit not necessarily with the same level of support) beginning with Org Mode. See Pandoc's Org section for information on controlling the parsing.
WARNING: This is a 🧪 beta 🧪 feature.
Org Mode has no notion of wiki-links, but you can use file: hyperlinks to link to other files such as Markdown files. If you wish to link to a .org file from a Markdown file, however, regular wiki-links ought to work.
## Syntax
Here is a handpicked selection of syntactic features of Org Mode known to work particularly well in Emanote.
### Code blocks
See Syntax Highlighting for general information.
```haskell
fac 0 = 1
fac n = n * fac (n-1)
```
### LaTeX
See Math for general information.
The radius of the sun is R_sun = 6.96 × 10^8 m. On the other hand, the radius of Alpha Centauri is R_Alpha Centauri = 1.28 × R_sun.
$$x=\sqrt{b}$$
If $$a^2=b$$ and $$b=2$$, then the solution must be either $$a=+\sqrt{2}$$ or $$a=-\sqrt{2}$$
## Limitations
• While #+TITLE is recognized, other metadata are not recognized (yet). Org Mode has no notion of a "frontmatter", therefore you must store file-associated metadata in a separate YAML file.
https://code.tutsplus.com/tutorials/the-essentials-of-creating-laravel-bundles--net-25820
# The Essentials of Creating Laravel Bundles
The Laravel PHP framework offers its bundles system to allow developers to redistribute useful packages of code, or to organize applications into several "bundles" of smaller applications.
In this tutorial, we will learn the ins and outs of creating and distributing bundles from scratch.
A Laravel bundle has access to all of the features that the framework offers to its host application, including routing, migrations, tests, views and numerous other useful features.
Here's a little secret, between us: the application folder of the Laravel source package is also a bundle, which Laravel refers to as the DEFAULT_BUNDLE.
## When to Create a Bundle?
Before writing a new piece of code, I like to ask myself a few simple questions to determine whether it is appropriate for a bundle. Let me share this technique with you.
### Could this code be useful to others?
If the answer to this question is yes, then I would first ensure that someone has not already created a similar bundle or package. Other than for learning purposes, it's pointless to recreate the wheel. If the other package is of a high enough standard to be used in your project, then use that instead and save yourself the time.
Secondly, I think about the code and decide whether or not it might be useful to users of other frameworks, or people not using a framework at all. If the code is not related to the Laravel framework and does not need to make use of Laravel's core classes, then I would create a Composer package instead. Composer packages are widely becoming the standard for sharing code that is not restricted to a single framework or project.
If the code could be useful to others, and is dependent upon the Laravel framework, then you have a good reason to create a new bundle.
### Will I have to write this code again?
DRY is the name of the game.
If the code provides functionality that you write frequently, then it makes sense to create a bundle. DRY (Don't repeat yourself!) is the name of the game.
### Could this code be considered a stand-alone application?
For example, you may be building a simple site that, amongst other features, has a blog component. The blog could be considered a separate application to be contained in a bundle for much greater organization of your project.
Another example would be an administrative section, or 'back-end' for your website. This section could easily be considered a separate component from the main application, and could instead be organized into one or more bundles.
### Would this code fit into a single class?
If this is the case, you might consider writing a 'Library' instead. A library is a single class that contains reusable code. It can be added to a Laravel project easily by dropping the class into the application/libraries/ directory, which is auto loaded by default.
## Creating a Bundle
Let's create a simple plug-in that interacts with the Gravatar service to offer a simple method for generating avatars of various sizes within our main application. We will also add the necessary functionality to enter an email address and avatar size, and preview the associated gravatar on the page.
Let's get started by creating a new directory within the /bundles directory of our project. We will call the directory and our bundle gravvy. Not gravy... gravvy.
Let's add gravvy to the bundles array within application/bundles.php so that we can test it as we go along. We will add an 'auto' => true option to the array so that the bundle will be started automatically, and any autoloader mappings we create will be available to the whole of Laravel.
```php
return array(
    'docs' => array('handles' => 'docs'),
    'gravvy' => array(
        'auto' => true
    )
);
```
First, we will need to create a small library that will retrieve a user's avatar, using an email address. Create a new file within the root of the bundle, named gravvy.php. Let's create a class, called Gravvy with a static method, make(), to replicate the naming scheme used by Laravel's own libraries.
The make() method will accept two parameters: an email address and an integer to represent the size of the avatar to retrieve.
```php
<?php

/**
 * Gravvy - a tiny Gravatar helper bundle.
 */
class Gravvy
{
    /**
     * Create a new image element from an email address.
     *
     * @param  string  $email The email address.
     * @param  integer $size  The avatar size.
     * @return string  The source for an image element.
     */
    public static function make($email, $size = 32)
    {
        // convert our email into an md5 hash
        $email = md5($email);

        // return the image element
        return '<img src="http://www.gravatar.com/avatar/'.$email.'?s='.$size.'" />';
    }
}
```
Bundle root directories aren't auto-loaded, so let's write a mapping so that Laravel knows where to find the 'Gravvy' class when it needs it.
When starting a bundle, Laravel looks for a file, named start.php, and executes it. So let's create one within our new bundle's directory to hold our auto-load mappings.
```php
<?php

// map the Gravvy class to its source file so Laravel can auto-load it
Autoloader::map(array(
    'Gravvy' => path('bundles').'/gravvy/gravvy.php'
));
```
Now Laravel knows where to find the definition for our Gravvy class, and will load the source when it first needs it. Very efficient!
The path() method is a helper function, which returns the absolute path to useful folders used by Laravel. In this case, we are using it to retrieve the absolute path to the bundles directory.
Now that we have our working Gravvy class, we could attempt to use it from within a controller to see if we get the expected output, but I think it would be more appropriate to write a unit test.
Just like the host application, unit tests are available from within the bundle. Let's create a tests folder within the bundle, and add a new file, called general.test.php.
```php
<?php

class TestGeneral extends PHPUnit_Framework_TestCase
{
    /**
     * Test that an avatar's output appears as expected
     * with the default avatar size.
     *
     * @return void
     */
    public function testAvatarImageIsGenerated()
    {
        // start the gravvy bundle
        Bundle::start('gravvy');

        // check that the output matches the expected image element
        $this->assertEquals(Gravvy::make('thepunkfan@gmail.com'),
            '<img src="http://www.gravatar.com/avatar/…?s=32" />');
    }

    /**
     * Test that an avatar's output appears as expected when
     * specifying a custom avatar size.
     *
     * @return void
     */
    public function testAvatarImageIsGeneratedWithSize()
    {
        // start the gravvy bundle
        Bundle::start('gravvy');

        // check that the output matches the expected image element
        $this->assertEquals(Gravvy::make('thepunkfan@gmail.com', 64),
            '<img src="http://www.gravatar.com/avatar/…?s=64" />');
    }
}
```

Above, we've written two PHPUnit tests: one to test the output of generating an avatar using an email, and another that also specifies an avatar size in pixels. You will notice that we call Bundle::start('gravvy') to manually start the bundle. This is because Laravel does not auto load bundles through the command line interface at present. As a core team member, I'd like to point out that we intend to resolve this in a future version!

Let's use Artisan to run our PHPUnit tests by typing the test command and using the bundle name, gravvy, as a parameter.

```
php artisan test gravvy
```

Great! Our tests have run successfully on the first try, and our ego has grown - just a little! Now that our Gravvy class has been tested, people can use it in their own applications! Let's take the bundle a step further and create a couple of simple pages to generate and preview gravatars. We can use this example to learn how the routing system handles bundles.

To begin, let's create a new 'preview' controller for our bundle. We will need to create a controllers directory within the bundle, and, within it, we'll add a new file: preview.php.

```php
<?php

class Gravvy_Preview_Controller extends Controller
{
    /**
     * Show the avatar form.
     */
    public function action_form()
    {
        return View::make('gravvy::form');
    }

    /**
     * Show the resulting avatar.
     */
    public function action_preview()
    {
        //
    }
}
```

The controller name must be prefixed with the bundle name, and appended with _Controller - as with normal controllers. We could create some routes to map our controller actions to sensible URIs, but wouldn't it be better if we could let the user of our bundle decide on the base URI to use? It would? Let's do that then!
By adding a 'handles' => 'gravvy' key-value pair to the bundle's configuration array, we can allow the user to change it without altering the code of the bundle itself. Here's the resulting configuration in application/bundles.php.

```php
return array(

    'docs' => array('handles' => 'docs'),

    'gravvy' => array(
        'auto'    => true,
        'handles' => 'gravvy'
    )

);
```

Now we can use the (:bundle) place-holder in our routes, which will be replaced with the value of the handles option. Let's create a routes.php file within the root of our bundle and add some routes.

```php
Route::get('(:bundle)/form', 'gravvy::preview@form');
Route::post('(:bundle)/preview', 'gravvy::preview@preview');
```

We have the route GET gravvy/form, which is mapped to the form action of the Preview controller, and POST gravvy/preview, which is mapped to the preview action of the Preview controller.

Let's create the associated views for our controller actions; you can make them as complex and pretty as you like, but I am going to keep them simple. First, create a views folder within the bundle, just like with the application directory, and add a form view containing email and size fields.

Now that we have a form that will submit an email and size field to the preview@preview controller/action pair, let's create a preview page for the generated avatar; we'll use an attribute, named $element, to hold its source.
```php
{{ $element }}

{{ HTML::link_to_action('gravvy::preview@form', '< Go Back!') }}
```

Now we must alter the preview action to make use of the data submitted from the form.

```php
/**
 * Show the resulting avatar.
 */
public function action_preview()
{
    // get data from our form
    $email = Input::get('email');
    $size  = Input::get('size');

    // generate the avatar
    $avatar = Gravvy::make($email, $size);

    // load the preview view
    return View::make('gravvy::preview')
        ->with('element', $avatar);
}
```
We retrieve the POST data and use it to create our avatar. We must also add a with() method to the View::make() chain to allow for the element to be used within the view.
We can finally test our avatar previewing system! Take a look at the /gravvy/form URI and give it a go! Everything works as expected.
This may not be the best way to organize your bundle, but it does highlight some of the useful things that are possible. Have fun creating your own bundles, and be sure to consider publishing them on the bundles website.
## Publishing a Bundle
Once your bundle is in a functional state, you may want to consider listing it within the Laravel Bundles Directory. Let's run through the process of submitting a new bundle.
First, you will need to have a GitHub account, and have your bundle versioned within a public repository. GitHub offers free accounts with an unlimited number of public repositories; you can sign up on github.com.
If you are new to version control with Git, I suggest reading the great series of Git articles right here on Nettuts+.
Once you have your account and code in order, make sure that the latest version of your bundle can be found within the 'master' branch, and that the root of your bundle (where the start.php would be) is the root of the repository, rather than a subdirectory.
Now click the 'Submit a Bundle' button, select your bundle repository from the drop down menu and hit the 'Continue' button.
The sign-up form is quite straightforward, but here are some 'gotchas' that you may not spot.
Name
Name is the lowercase keyword that is used to install your bundle. It needs to be a short but accurate word that describes your bundle.
Summary / Description
These fields can contain Markdown-formatted content, so feel free to copy the content from your GitHub README.md file.
Dependencies / Tags
Use commas to separate tags and dependencies. The dependencies field should contain the short install keyword of each bundle that the bundle you are submitting depends on.
Active
The Active field simply determines whether or not the bundle will be displayed to other users. You are still able to install inactive bundles by their install keyword for testing purposes. Set this field to 'Yes' only when you are happy for other people to use your bundle.
Once you click the 'Save' button, your bundle has been submitted, and, if marked as 'Active', will appear in the bundle listings. You can always edit your bundle listing at a later date.
## Finding Bundles
Bundles that have been shared with the Laravel community are listed in the Bundles directory at http://bundles.laravel.com.
You can browse bundles by category, or use the search feature to find the bundle you're looking for. Once you have found a bundle that meets your requirements, take a look at the 'Installation' tab of the bundle's profile to find the install keyword.
## Installing a Bundle
Once you have the install keyword for a bundle, you can install it from the base of your project using the 'Artisan' command line interface and its bundle:install command. For example:
```
php artisan bundle:install bob
```
Artisan will consult the Bundles API to retrieve the path to the bundle's GitHub repository, and the repositories of all of its dependencies. It will then download the source packages directly from GitHub and extract them into the /bundles directory for you.
You will need to manually add the bundle name to the array within application/bundles.php for the bundle to become enabled.
```php
return array(

    'docs' => array('handles' => 'docs'),
    'bob'

);
```
In some situations, you might need to add extra information to this array entry to enable auto-starting, or to direct certain routes to the bundle. If that is the case, the author will have provided this extra information in the bundle's description.
https://homework.cpm.org/category/CCI_CT/textbook/cc4/chapter/8/lesson/8.1.2/problem/8-27
### Problem 8-27
Wade and Dwayne were working together to write an equation for the sequence $12, 36, 108, 324, …$. Wade wrote $t(n) = 4 \cdot 3^n$ and Dwayne wrote $t(n) = 12 \cdot 3^{n−1}$.
1. Make a table for the first four terms of each of their sequences. What do you notice?
Are there any similarities between the two sequences?
Are there any differences?
2. How do you think Dwayne explained his method of writing the equation to Wade?
The coefficient is the first term of the sequence and the exponent is $n −1$.
3. For the sequence $10.3, 11.5, 12.7, …$, Wade wrote $t(n) = 9.1 + 1.2n$ while Dwayne wrote $t(n) = 10.3 + 1.2(n − 1)$. Make a table for the first four terms of each of their sequences. Are both forms of the equation correct?
This is similar to part (a).
Are the two sequences similar or different?
What does that tell you about the two forms of the sequence?
4. Read the Math Notes box about standard form of a sequence in this lesson. Dwayne’s equations are based on the first term of each sequence, not on the zeroth term. Why does Dwayne subtract one in both situations?
Look at Dwayne's sequence.
Which term is the coefficient?
How would this affect the way he defines his sequence?
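Both pairs of equations can be checked by tabulating the first four terms, as part (a) asks. A quick Python sketch (not part of the original problem) does exactly that:

```python
# Wade's and Dwayne's forms of the geometric sequence 12, 36, 108, 324, ...
def wade_geometric(n):
    return 4 * 3 ** n            # anchored at the zeroth term, t(0) = 4

def dwayne_geometric(n):
    return 12 * 3 ** (n - 1)     # anchored at the first term, t(1) = 12

print([wade_geometric(n) for n in range(1, 5)])    # [12, 36, 108, 324]
print([dwayne_geometric(n) for n in range(1, 5)])  # [12, 36, 108, 324]

# Part (c): the arithmetic sequence 10.3, 11.5, 12.7, ...
def wade_arithmetic(n):
    return 9.1 + 1.2 * n

def dwayne_arithmetic(n):
    return 10.3 + 1.2 * (n - 1)

# Both forms agree on every term (tolerance for floating point).
for n in range(1, 5):
    assert abs(wade_arithmetic(n) - dwayne_arithmetic(n)) < 1e-9
```

Both forms generate identical terms: Wade's equations are based on the zeroth term, Dwayne's on the first term, which is why Dwayne subtracts one from $n$.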
https://www.vedantu.com/commerce/movement-along-the-demand-curve-and-shift-of-the-demand-curve
# Movement Along The Demand Curve and Shift of The Demand Curve
The economy is continually changing, and so is demand in the market. Businesses and entire industries keep records of how this demand changes, because the factors affecting it cause fluctuations in the market. For better analysis and understanding, a demand curve is usually drawn. But what is a demand curve, and how does it help? And what is the difference between a movement along the demand curve and a shift of the demand curve? Let us take a closer look.
## What is the Demand Curve?
It is the graphical representation of the relationship between the price of a commodity and the quantity demanded, showing how one changes as the other changes. The demand curve follows from the law of demand and the law of supply.
According to the law of demand, as the price increases, the quantity demanded decreases. In mathematical terms, demand varies inversely with price.
According to the law of supply, with an increase in prices, the quantity supply also increases.
Both these laws help in understanding the interaction of market prices with the demand for goods and their supply. It is not just the price and quantity that affect the demand curve but there are also several other impactful factors.
## Movement and Shift along the Demand Curve
The demand curve for the products a company supplies changes over time; but which factors cause this? You can expect changes in the demand curve based on the following two factors.
• A change in the demand for goods.
• A change in the number of goods.
This leads to the movement and shift along the demand curve. What is the difference between the two? How does it affect the curve?
## Movement in the Demand Curve
Are you wondering what causes a movement along the demand curve?
Movement along the demand curve happens because of a change in the price of the commodity, which in turn changes the quantity demanded; all other factors remain unchanged. Price is plotted on the Y-axis and quantity on the X-axis, so as the price changes, the price–quantity point slides along the curve while the curve itself stays in place.
So, in such a scenario, with an increase in price, the demand decreases, and with a decrease in price, the demand increases.
The movement happens in a contraction and expansion format. Consider the following example.
Contraction of the curve: For instance, if the price of a commodity increases from $10 to $12, the quantity demanded decreases from 100 units to 80 units. This is called a contraction of demand - an upward movement along the demand curve.
Expansion of the curve: For instance, if the price of a commodity decreases from $10 to $8, the quantity demanded increases from 100 units to 120 units. This is called an expansion of demand - a downward movement along the demand curve.
There is no shift in the position of the curve; the price–quantity combination simply moves up or down along it.
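The contraction and expansion figures above can be reproduced with a hypothetical linear demand schedule (the function and its coefficients are illustrative assumptions, not taken from the article):

```python
def quantity_demanded(price):
    """A hypothetical linear demand schedule: 10 fewer units are demanded
    for every $1 rise in price, all other factors held constant."""
    return 200 - 10 * price

print(quantity_demanded(10))  # 100 units at $10 (starting point)
print(quantity_demanded(12))  # 80 units at $12  -> contraction (movement up the curve)
print(quantity_demanded(8))   # 120 units at $8  -> expansion (movement down the curve)
```

A shift, by contrast, would change the whole schedule - for example, `220 - 10 * price` after a rise in consumer income - altering the quantity demanded at every price.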
Then, what causes a shift in the demand curve?
## Shift in the Demand Curve
This happens when there is a change in any factor other than price - for example, consumer income or preferences. At each given price, the quantity demanded changes, which shifts the whole demand curve to the left or to the right.
The factors leading to a shift in the curve are as follows.
• Increase in demand quantity of the products due to popularity
• Increase in the price of a competitive good
• A rise in the income of consumers
• Seasonal factors
It leads to a shift in the demand curve, depending on the factors.
A movement and shift can also occur in the same curve over a longer time period. Initially, an increase in price for a certain commodity could lead to a movement in the curve. However, with time, it could lead to a shift in the same curve, depending on other factors.
1. Give examples of factors other than price that lead to a shift in the Curve?
Answer: The various factors determining a shift in the curve are as follows.
• Buyer’s Income: If the price remains constant and the income increases, the consumer can buy more goods. This results in a shift in the demand curve to the right.
• Trends of Consumers: Market and buying trends change tremendously. For example, during the winter season, the demand for cold beverages decreases no matter what the price is, resulting in a leftward shift of the curve.
• Future Price Expectations: In some scenarios, the customers feel that there will be inflation. Due to this, they end up stocking the goods. This leads to a rightward shift in the curve.
• Potential Buyers: The demand increases invariably with an increase in customers. There are times when due to good promotion strategies, the demand shoots up for a particular product. This again leads to a rightward shift.
2. How does the graph change in the case of movement in the Demand Curve?
Answer: Movement along the demand curve happens due to a change in price while all other factors remain constant. The result is an upward or downward movement along the same curve, not a shift of the curve.
3. Which factors affect the supply and demand in the Economy?
Answer: The factors which affect the supply of goods are:
• Capacity of production
• Cost of production of goods, including labour and raw materials
• Number of competitors and their levels
• Some fluctuating factors include weather, availability of raw materials, etc.
The factors which affect the demand for goods are:
• Preferences of the consumers
• Changes in the pricing of the complementary commodities
• Substitutes that are available for the same product
https://quant.stackexchange.com/questions/18049/bond-yield-is-it-martingale-with-respect-to-risk-neutral-probability-measure-of
Bond yield: is it martingale with respect to risk-neutral probability measure of some numeraire?
Let $t$ mean current time, let $T_0, T_n$ mean two times such that $T_0\le T_n$, and let $y_t[T_0, T_n]$ mean the forward swap rate of a swap starting at $T_0$ and ending at $T_n$. (I am ignoring $T_0+2$ issues, and assume that the swap starts at $T_0$.)
Then under the annuity numeraire $N_t = P_t[T_0, T_n]$, the forward swap rate $y_t[T_0, T_n]$ is a martingale under the risk-neutral measure associated with $N_t$. This follows from the fact that $y_t$ is a ratio of a portfolio of assets by $P_t[T_0, T_n]$. Indeed, $$y_t[T_0, T_n] = (Z(t,T_0)-Z(t,T_n))/P_t[T_0,T_n],$$ where $Z(t, T_i)$ means the zero-coupon bond from $t$ to $T_i$.
Is there a corresponding numeraire for the yield of a bond?
I am guessing the answer is no under some mild assumptions because bonds are tradeable and their price has a non-zero second derivative with respect to yields, but cannot hack through the thicket of results at the moment.
Thanks in advance, any help appreciated!
• Note that, a yield can be defined in many ways, and some use the yield as the coupon rate. If you can define the yield specifically, people may be able to explore from there. – Gordon Aug 7 '15 at 18:22
https://socratic.org/questions/how-do-you-find-the-derivative-of-tanx-1
# How do you find the derivative of (tanx)^-1?
##### 1 Answer
May 29, 2016
I would rewrite the function before differentiating
#### Explanation:
(Here $(\tan x)^{-1}$ is read as the reciprocal $\frac{1}{\tan x}$, not the inverse function $\arctan x$.)

$(\tan x)^{-1} = \frac{1}{\tan x} = \cot x$

Now use the memorized $\frac{d}{dx}(\cot x) = -\csc^2 x$, or rewrite one more step:

$\cot x = \frac{\cos x}{\sin x}$ and use the quotient rule to get

$\frac{d}{dx}(\cot x) = \frac{-\sin^2 x - \cos^2 x}{\sin^2 x} = \frac{-1}{\sin^2 x} = -\csc^2 x$
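The result can be sanity-checked numerically with a central-difference approximation - a quick sketch, independent of the derivation above:

```python
import math

def f(x):
    return 1 / math.tan(x)   # (tan x)^(-1) read as the reciprocal, i.e. cot x

def central_difference(f, x, h=1e-6):
    # Symmetric difference quotient approximates f'(x) to O(h^2).
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7  # any point where tan x is nonzero
numeric = central_difference(f, x)
exact = -1 / math.sin(x) ** 2   # -csc^2(x)
assert abs(numeric - exact) < 1e-5
```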
https://www.semanticscholar.org/paper/Smoking-Gun-of-the-Dynamical-Processing-of-the-Liu/f59e5297e9aabde2cb4014ba935c96fc3605cb06
# Smoking Gun of the Dynamical Processing of the Solar-type Field Binary Stars
```
@article{Liu2019SmokingGO,
  title={Smoking Gun of the Dynamical Processing of the Solar-type Field Binary Stars},
  author={C. Liu},
  journal={Monthly Notices of the Royal Astronomical Society},
  year={2019},
  volume={490},
  pages={550-565}
}
```
- C. Liu
- Published 2019
- Physics
- Monthly Notices of the Royal Astronomical Society

We investigate the binarity properties of field stars using more than 50,000 main-sequence stars with stellar masses from 0.4 to 0.85 $M_\odot$ observed by LAMOST and *Gaia* in the solar neighborhood. By adopting a power-law shape for the mass-ratio distribution with power index $\gamma$, we construct a hierarchical Bayesian model to derive the binary fraction ($f_b$) and $\gamma$ for stellar populations with different metallicities and primary masses ($m_1$). We find that $f_b$ is…
https://math.stackexchange.com/questions/2059287/how-to-avoid-mixing-up-ratios-when-comparing-two-values
# How to avoid mixing up ratios when comparing two values?
In the following question for example:
$1$ Neptune day = $18$ hours.
$1$ Earth day = $24$ hours.
How many Neptune days = $10$ Earth days?
My initial reaction was to do the following: $$\frac{18 \text{ hours (Neptune)}}{24 \text{ hours (Earth)}} = \frac{x \text{ Neptune days}}{10 \text{ Earth days}}$$

Solving for $x$ gives $7.5$.
But the answer is actually given by $$\frac{24 \text{ hours (Earth)}}{18 \text{ hours (Neptune)}} = \frac{x \text{ Neptune days}}{10 \text{ Earth days}}$$

Solving for $x$ gives $13.33$.
The confusing part is that on the left it is Earth/Neptune, but on the right it is Neptune/Earth. This led me to write the equation wrong initially, as I would have thought it would be Neptune/Earth = Neptune/Earth.

$$\frac{24 \text{ hours (Earth)}}{18 \text{ hours (Neptune)}} = \frac{x \text{ Neptune days}}{10 \text{ Earth days}}$$
Or in more simple terms:
$1$ Neptune day = $18$ hours.
$1$ Earth day = $24$ hours.
In $1$ Earth day ($24$ hours), we have $24/18$ Neptune days. In $10$ Earth days, we have $10 \cdot 24/18 = 13.3$ Neptune days.
This way is quite clear and I wouldn't usually make a mistake in it. But I would like to be able to do it using the ratios to mechanise the process (as I need to do it in exam conditions where time is limited).
But I often end up getting the ratios the wrong way around.
One way to mechanise this is dimensional analysis: multiply by conversion factors equal to $1$ and let the units cancel.
$$\begin{eqnarray}10 \text{ Earth days} & = & 10 \text{ Earth days} \times 1 \times 1 \\ & = & 10 \text{ Earth days} \times \frac{24 \text{ hours}}{1 \text{ Earth day}} \times \frac{1 \text{ Neptune day}}{18 \text{ hours}}\\ & = & \frac{10 \text{ Earth days}}{1 \text{ Earth day}} \times \frac{24 \text{ hours}}{18 \text{ hours}} \times 1 \text{ Neptune day}\\ & = & 13.33 \text{ Neptune days}\end{eqnarray}$$
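The same discipline - convert to the common unit first, then to the target unit - is easy to follow in code, which makes the direction of each ratio unambiguous. A small sketch:

```python
HOURS_PER_EARTH_DAY = 24
HOURS_PER_NEPTUNE_DAY = 18  # the value given in the question

def earth_days_to_neptune_days(earth_days):
    hours = earth_days * HOURS_PER_EARTH_DAY   # Earth days -> hours
    return hours / HOURS_PER_NEPTUNE_DAY       # hours -> Neptune days

print(earth_days_to_neptune_days(10))  # 13.333... Neptune days
```

Naming the constants after the conversion they perform (hours per Earth day, hours per Neptune day) makes it much harder to put a ratio upside down.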
https://www.physicsforums.com/threads/angular-momentum-ball-problem.272003/
# Homework Help: Angular momentum ball problem
1. Nov 15, 2008
### boredaxel
1. The problem statement, all variables and given/known data
A 5.00-kg ball is dropped from a height of 12.0 m above
one end of a uniform bar that pivots at its center. The bar has mass
8.00 kg and is 4.00 m in length. At the other end of the bar sits
another 5.00-kg ball, unattached to the bar. The dropped ball sticks
to the bar after the collision. How high will the other ball go after
the collision?
2. Relevant equations
$L = r \times p$

$L = I\omega$

$mgh = \frac{1}{2}mv^2$
3. The attempt at a solution
The solution seems to require conservation of angular momentum, but I am not sure how angular momentum could be conserved. Doesn't the weight of the falling ball provide a net external torque on the system?
2. Nov 15, 2008
### tiny-tim
Welcome to PF!
Hi boredaxel! Welcome to PF!
Yes, it does provide a torque, but that doesn't matter, because it isn't an external torque …
at least, if you consider the whole system together, it isn't external!
Angular momentum, like momentum, is always conserved.
Hint: treat this exactly the same way as you would if the ball was hitting a block with another ball directly the other side …
you'd use conservation of (linear) momentum, and v1f = v2f, wouldn't you?
Well, do the same, except … "angularly"!
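Following tiny-tim's hint, here is one possible numerical sketch (assumptions: $g = 9.8\ \text{m/s}^2$, the collision is fast enough that gravity's torque during impact is negligible, and the loose ball leaves the bar moving at the tip's speed just after the collision; this is an illustration, not necessarily the official solution):

```python
import math

g = 9.8          # m/s^2
m_ball = 5.0     # kg, each ball
m_bar = 8.0      # kg
length = 4.0     # m, bar pivoted at its centre
r = length / 2   # 2 m lever arm for each ball
h_drop = 12.0    # m

# Speed of the dropped ball just before impact (energy conservation).
v_impact = math.sqrt(2 * g * h_drop)

# Angular momentum about the pivot just before the collision.
L_before = m_ball * v_impact * r

# Moment of inertia just after: uniform bar + stuck ball + (momentarily) the loose ball.
I_total = m_bar * length**2 / 12 + 2 * m_ball * r**2

omega = L_before / I_total       # angular speed just after the collision
v_launch = omega * r             # speed of the unattached ball as it leaves the bar
h_rise = v_launch**2 / (2 * g)   # it rises until its kinetic energy is spent

print(round(h_rise, 2))  # 1.87 (metres)
```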