https://ncatlab.org/nlab/show/equivariant+Whitehead+theorem

# Equivariant Whitehead theorem
## Idea
The equivariant Whitehead theorem is the generalization of the Whitehead theorem from (stable) homotopy to (stable) equivariant homotopy theory.
A map $f \colon X \longrightarrow Y$ between G-CW complexes is a $G$-homotopy equivalence precisely if it induces weak homotopy equivalences $f^H \colon X^H \longrightarrow Y^H$ on the fixed point spaces for all closed subgroups $H \hookrightarrow G$ (Matumoto 71; Waner 80, theorem 3.4; for review see Blumberg 17, corollary 1.2.14).
Similarly, a map $f \colon E \longrightarrow F$ between genuine G-spectra is a weak equivalence (an isomorphism in the equivariant stable homotopy category) precisely if it induces isomorphisms on all equivariant homotopy group Mackey functors $\pi_n(f)\colon \pi_n(E) \longrightarrow \pi_n(F)$ (e.g. Greenlees-May 95, theorem 2.4; Bohmann, theorem 3.2).
## References
The original proof seems to be due to
• T. Matumoto, Equivariant K-theory and Fredholm operators, J. Fac. Sci. Tokyo 18 (1971/72), 109-112 (jairo)
streamlined in
• Stefan Waner, Equivariant Homotopy Theory and Milnor’s Theorem, Transactions of the American Mathematical Society Vol. 258, No. 2 (Apr., 1980), pp. 351-368 (JSTOR)
and reviewed in

• Andrew Blumberg, Equivariant homotopy theory, lecture notes, 2017

For the stable case:

• John Greenlees, Peter May, Equivariant stable homotopy theory, in Handbook of Algebraic Topology, North-Holland, 1995

• Anna Marie Bohmann, Basic notions of equivariant stable homotopy theory
Last revised on April 13, 2018 at 09:09:48. See the history of this page for a list of all contributions to it.
https://www.mersenneforum.org/showpost.php?s=9ca1f17b6589ace9e9637566e9ddb89f&p=92856&postcount=1
2006-11-30, 07:11  #1
em99010pepe
Sep 2004
2×5×283 Posts

Archive 2 for Other results (>155)

Code:
n=999, kmin=625T, kmax=650T, version=6.0, here T=10^12
Starting the sieve...
Using the first 9 primes to reduce the size of the sieve array
The sieving is complete.
Number of Prp tests=577300
Time=3049 sec.

n=999, kmin=650T, kmax=675T, version=6.0, here T=10^12
Starting the sieve...
Using the first 9 primes to reduce the size of the sieve array
The sieving is complete.
Number of Prp tests=579438
Time=3058 sec.

n=999, kmin=675T, kmax=700T, version=6.0, here T=10^12
Starting the sieve...
Using the first 9 primes to reduce the size of the sieve array
The sieving is complete.
Number of Prp tests=577773
Time=3045 sec.

n=999, kmin=700T, kmax=725T, version=6.0, here T=10^12
Starting the sieve...
Using the first 9 primes to reduce the size of the sieve array
The sieving is complete.
Number of Prp tests=577990
Time=3045 sec.

n=999, kmin=725T, kmax=750T, version=6.0, here T=10^12
Starting the sieve...
Using the first 9 primes to reduce the size of the sieve array
The sieving is complete.
Number of Prp tests=578931
Time=3061 sec.

n=999, kmin=750T, kmax=775T, version=6.0, here T=10^12
Starting the sieve...
Using the first 9 primes to reduce the size of the sieve array
The sieving is complete.
Number of Prp tests=578431
Time=3047 sec.

n=999, kmin=775T, kmax=800T, version=6.0, here T=10^12
Starting the sieve...
Using the first 9 primes to reduce the size of the sieve array
The sieving is complete.
Number of Prp tests=578148
Time=3045 sec.

n=999, kmin=800T, kmax=825T, version=6.0, here T=10^12
Starting the sieve...
Using the first 9 primes to reduce the size of the sieve array
The sieving is complete.
Number of Prp tests=579241
Time=3051 sec.

n=999, kmin=825T, kmax=850T, version=6.0, here T=10^12
Starting the sieve...
Using the first 9 primes to reduce the size of the sieve array
The sieving is complete.
Number of Prp tests=578090
Time=3046 sec.

n=999, kmin=850T, kmax=875T, version=6.0, here T=10^12
Starting the sieve...
Using the first 9 primes to reduce the size of the sieve array
The sieving is complete.
Number of Prp tests=577544
Time=3076 sec.

Last fiddled with by ValerieVonck on 2007-03-25 at 00:32
https://eprint.iacr.org/2021/1528 | ## Cryptology ePrint Archive: Report 2021/1528
An Alternative Approach for Computing Discrete Logarithms in Compressed SIDH
Kaizhan Lin, Weize Wang, Lin Wang, and Chang-An Zhao
Abstract: Currently, public-key compression of supersingular isogeny Diffie-Hellman (SIDH) and its variant, supersingular isogeny key encapsulation (SIKE), involves pairing computation and discrete logarithm computation. In this paper, we propose novel methods to compute only 3 discrete logarithms instead of 4, in exchange for computing a lookup table efficiently. The algorithms also allow us to make a trade-off between memory and efficiency. Our implementation shows that the efficiency of our algorithms is close to that of the previous work, and our algorithms perform better in some special cases.
Category / Keywords: public-key cryptography / Isogeny-based Cryptography, SIDH, SIKE, Public-key Compression, Discrete Logarithms
Date: received 18 Nov 2021, last revised 22 Nov 2021
Contact author: zhaochan3 at mail sysu edu cn
Available format(s): PDF | BibTeX Citation
Short URL: ia.cr/2021/1528
[ Cryptology ePrint archive ]
https://chemistry.stackexchange.com/questions/101262/how-do-you-handle-manganese-nitrate-hexahydrate | # How do you handle manganese nitrate hexahydrate?
So I want to do an experiment using manganese nitrate and I bought manganese(II) nitrate hexahydrate $\ce{Mn(NO3)2.6H2O}$. But when I received it, it was already somewhat watery (most of it is still crystallized, but there was also liquid in the bottle). After inspecting the bottle, it says that I need to refrigerate it at 5 degrees Celsius. But after I refrigerate it, it becomes icy hard and I cannot take anything out. So I am now wondering how to handle this compound. I am afraid that if I use the liquid part it will have a varying concentration, and it is nearly impossible to use the solid part because it's so hard. Thank you.
• As long as you can, DON'T use nitrates of multicharged cations, period. Most of them are extremely hygroscopic and cannot be weighed easily. If you really need to use a nitrate solution, the best way is to prepare a solution and obtain the exact concentration by titration. Then said solution with known concentration can be stored in a closed bottle/flask and used as needed. – permeakra Sep 3 '18 at 15:05
Manufacturers of chemicals can make life interesting for first-time users. What a boon the CRC Handbook is! Without even listing $\ce{Mn(NO3)2.6H2O}$, the CRC Handbook can solve your problem. It gives the melting point of $\ce{Mn(NO3)2.4H2O}$ as $\pu{25.8^\circ C}$. Undoubtedly, the melting point of the hexahydrate is even lower, but maybe not by much. In hot water, the tetrahydrate is "infinitely" soluble.
Heat your jar of hexahydrate to $\pu{30 ^\circ C}$ in warm water until all the crystals are melted and withdraw what you need with a pipette. You might plan ahead and fill several small bottles with the hexahydrate for later use. I doubt that the material needs to be stored cold.
Manganese nitrate hydrate can be converted into anhydrous manganese nitrate by using a dehydrating agent like phosphorus pentoxide or dinitrogen pentoxide. First, gently heat the hexahydrate at around 100-110°C in vacuum to form the dihydrate, and then heat it at around 80-90°C with the dehydrating agent to get the anhydrous product.
\begin{align} \ce{Mn(NO3)2•6H2O &->[110°C,vacuo] &&Mn(NO3)2•2H2O + 4H2O}\\ \ce{Mn(NO3)2•2H2O + N2O5 &->[80-100°C,vacuo] &&Mn(NO3)2 + 2HNO3 + H2O} \end{align}
The mechanism of this reaction is found in this e-book:
Anhydrous nitrate is mostly prepared by dehydration of the solid hydrate in a vacuum desiccator at room temperature over phosphorus pentoxide or dinitrogen pentoxide.[...], a solvate is formed, $\ce{Mn(NO3)2.N2O4}$, and by heating at 90°C, the dinitrogen tetroxide can be removed to give the unsolvated compound. Thermal decomposition of the hydrated manganese(II) nitrate in vacuo also yields the anhydrous salts[...]
But you should be careful if you want to heat the hexahydrate directly. Overheating may cause the hexahydrate to decompose fully into manganese dioxide.
https://notes.reasoning.page/html/integrating-factors | # A calculus of the absurd
#### 14.2 Integrating Factors
$$e^x$$ shows up a lot in differential equations, because it has properties that are helpful when we differentiate it. One way in which it helps us is in solving first-order linear differential equations, which are equations of the form
$\frac {dy}{dx} + p(x)y = q(x)$
This can be solved using the product rule. If we define a function $$f(x)$$, we can write by the product rule that the derivative of $$y e^{f(x)}$$ is
$$\frac {dy}{dx} e^{f} + e^{f}\frac {df}{dx}y$$
This doesn’t immediately look like our equation, but if we multiply the differential equation through by $$e^{f(x)}$$, we get that
$$\frac {dy}{dx} e ^ {f(x)} + p(x) e^{f(x)} y = q(x) e ^{f(x)}$$
What we can do here is write that the left-hand side is equal to the derivative of $$ye^{f(x)}$$. This only works, however, if the derivative of $$f(x)$$ is equal to $$p(x)$$. This is because

$$\begin {aligned} \frac {d}{dx} \left [ ye^{f(x)} \right ] &= \frac {dy}{dx} e^{f(x)} + y \frac {d}{dx} \left [ e^{f(x)} \right ] \\ &= \frac {dy}{dx} e^{f(x)} + y \frac {d}{dx} \left [f(x)\right ] e^{f(x)}, \end {aligned}$$

and if

$$f(x) = \int p(x) \, dx,$$

then

$$\frac {d}{dx} \left [ f(x) \right ] = p(x),$$

and thus

$$\frac {d}{dx} \left [ ye^{f(x)} \right ] = \frac {dy}{dx} e ^ {f(x)} + p(x) e^{f(x)} y,$$

which is just the left-hand side of the equation. When this holds, we can write that
$$\frac {d}{dx} \left [ ye^{f(x)} \right ] = q(x) e ^{f(x)}$$
And thus we can solve the equation by integrating.
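As a worked example, take $$p(x) = \frac{2}{x}$$ and $$q(x) = x$$ on $$x > 0$$, so the equation is

$$\frac{dy}{dx} + \frac{2}{x} y = x.$$

Then $$f(x) = \int \frac{2}{x} \, dx = 2\ln x$$ and the integrating factor is $$e^{f(x)} = x^2$$. Multiplying through by $$x^2$$ gives

$$\frac{d}{dx}\left[ y x^2 \right] = x^3,$$

and integrating both sides yields $$yx^2 = \frac{1}{4}x^4 + C$$, that is,

$$y = \frac{1}{4}x^2 + \frac{C}{x^2}.$$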
https://electronics.stackexchange.com/questions/285580/analyzing-discrete-time-signals | # Analyzing discrete time signals
I am given a signal $x[n]$ that have the following properties:
1. Real and odd
2. Period of $N=8$ and Fourier coefficients $a_k$
3. $a_9 = 6j$
4. The sum of $|x[n]|^2$ from $n=0$ to $n=7$ is $576$.
I want to solve for $a_k$ and $x[n]$. What I have are the following: $$x[n] = \sum_{k=0}^{N-1}a_k e^{jkn(\frac{2\pi}{N})}$$ $$a_k = \frac{1}{N}\sum_{n=0}^{N-1}x[n]e^{-jkn(\frac{2\pi}{N})}$$ $$\sum_{n=0}^{7}|x[n]|^2 = 576$$ I expanded the series for $a_k$ and ended up with something that looks like this: $$a_k = \frac{1}{8}(x[0] + x[1]e^{-jk(\frac{2\pi}{8})} + x[2]e^{-jk(\frac{2\pi}{4})}+..)$$ However, before I proceed any further, I know there must be a technique I should be using to simplify this problem, especially the fact that this is an odd function. An odd function would normally help me cancel out terms on either side of the number line; in this case, however, since I'm only summing on the positive side, I'm not sure how to simplify this equation. How should I proceed, and how do I best use the extra information given?
Since the function is periodic, it is also defined for negative arguments, e.g. $x[-1]=x[7]$ or $x[-2]=x[6]$, etc. That said, the notion of being odd makes sense.
Note that the $a_k$ are also periodic (in particular $a_9=a_1$), and you get the following constraints:
• being odd implies $a_k$ are imaginary
• being real implies $a_k=a^*_{-k}=a^*_{N-k}$
Btw. for real DFTs, one normally takes advantage of the latter, see e.g. the FFTW docs: http://www.fftw.org/fftw3_doc/One_002dDimensional-DFTs-of-Real-Data.html#One_002dDimensional-DFTs-of-Real-Data
[Edited after comment below, thanks!]
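As a quick numerical sanity check (a sketch assuming NumPy, and assuming the candidate signal that these constraints single out: $a_1 = 6j$, $a_7 = a_{-1} = -6j$, and all other coefficients zero by Parseval, since $|a_1|^2 + |a_7|^2 = 576/8$):

```python
import numpy as np

N = 8
n = np.arange(N)

# Candidate implied by the constraints: x[n] = -12*sin(2*pi*n/8),
# which is real, odd, and 8-periodic.
x = -12 * np.sin(2 * np.pi * n / N)

# NumPy's forward FFT uses e^{-j 2 pi k n / N}, so a_k = X_k / N.
a = np.fft.fft(x) / N
print(np.round(a, 10))          # ~ [0, 6j, 0, 0, 0, 0, 0, -6j]
print(np.sum(np.abs(x) ** 2))   # 576.0, matching the given energy
```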
• But how would the negative co-efficients help me in the summation since I'm only summing over a range of positive values of $n$, not even $k$. A little confused on that part. Also, why is $a_k = 0$ for 0, 2, etc? Is it only provable that it is 0 for k = 0,8, etc? – Jonathan Feb 13 '17 at 6:59
• Sure, sorry, my mistake! Symmetry (odd or even) and being purely real or imaginary are Fourier inverses. (This 2k argument came from a different memory layout, sorry again; also fixed a capital N). But again: if you know some negative coefficient, you also know some positive, as there is periodicity. Hint: do you want to prove that your solution is unique, give the set of all solutions, or would one solution suffice? – magnustron Feb 13 '17 at 8:03
https://math.stackexchange.com/questions/3488267/towards-a-little-proof-of-fermats-last-theorem | # Towards a Little proof of Fermat's last theorem
Question: can you check if my reasoning below makes sense and has no major flaws?
Update: I fixed an issue in my definition of $$G$$: we must exclude $$u=w$$ and $$v=w$$. This has impacts on the charts too, with the new definition of $$G$$.
I don't claim to have a proof here, just a potential path to a proof, and it is by no means elementary if one wants to make my arguments mathematically rigorous. It might look like what Fermat could have written when saying "my proof is too long to fit in the margin of my letter". Certainly, Fermat did not get a proof either. At best, I think you can (maybe) derive from my discussion below that the number of solutions (if any) is bounded in certain ways -- a much weaker result than Andrew Wiles' final solution to this problem. But I don't think there are flaws in my reasoning, contrary to most would-be "simple proofs" regularly published and based on high-school arithmetic, such as here. Hopefully, my perspective here brings some new light on this 300-year-old problem, and the methodology could be applied to other Diophantine equations.
Anyway, here is how it goes. We are interested in solving $$u^n + v^n = w^n$$ where $$u, v, w > 0$$ are integers, and $$n>2$$ is an integer.
To investigate this, define
$$G_M(x) = \frac{1}{M^\alpha}\sum_{\substack{0 < u,v,w \leq M \\ u \neq w,\; v \neq w}} x^{(u^n + v^n - w^n)^2}.$$
It is still unclear to me if $$\alpha$$ should be $$0$$, I am still doing research on this. This function has a Taylor series expansion $$G_M(x) = \sum_{k=0}^\infty h_k x^{k^2},$$ where $$h_k$$ is the number of ways (combinations of $$u, v, w$$) that $$k$$ can be written as $$k=u^n + v^n - w^n$$. We all know that if $$n>2$$, then $$h_0 = 0$$ regardless of $$M$$ (that's Fermat's Last Theorem.) If $$n=3,\alpha=0$$ and $$M=100$$, then $$h_1=4$$, as we have
• $$(6^3 + 8^3 - 9^3)^2 = 1$$
• $$(8^3 + 6^3 - 9^3)^2 = 1$$
• $$(9^3 + 10^3 - 12^3)^2 = 1$$
• $$(10^3 + 9^3 - 12^3)^2 = 1$$
If $$n=3,\alpha=0$$ and $$M=200$$, then $$h_1=12$$: in addition to the four previous solutions, we also have
• $$(64^3 + 94^3 - 103^3)^2 = 1$$
• $$(94^3 + 64^3 - 103^3)^2 = 1$$
• $$(71^3 + 138^3 - 144^3)^2 = 1$$
• $$(138^3 + 71^3 - 144^3)^2 = 1$$
• $$(73^3 + 144^3 - 150^3)^2 = 1$$
• $$(144^3 + 73^3 - 150^3)^2 = 1$$
• $$(135^3 + 138^3 - 172^3)^2 = 1$$
• $$(138^3 + 135^3 - 172^3)^2 = 1$$
If $$h_1\rightarrow\infty$$ as $$M\rightarrow\infty$$ and the growth follows a power law ($$h_1 \sim M^\alpha$$), then we must have $$\alpha\neq 0$$. Note that $$h_2$$ could follow a power law with a different $$\alpha$$, this is a tricky problem. But at first glance, there seems to be enough smoothness in the way growth occurs among $$h_0, h_1, h_2$$ and so on, so that it is possible to find a suitable candidate for $$\alpha$$. Indeed a simple rule consists in choosing $$\alpha$$ such that $$G_M(\frac{1}{2}) = 1$$, always.
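Concretely (my reading of this normalization rule): if $$S$$ denotes the value of $$G_M(\frac{1}{2})$$ computed with $$\alpha = 0$$, then requiring $$S/M^\alpha = 1$$ forces $$\alpha = \log S / \log M$$. A sketch in Python, assuming `hist` is the hash of counts of $$z = (u^n+v^n-w^n)^2$$ built by the code further below:

```python
import math

def alpha_for(hist, M, x=0.5):
    # S = G_M(x) computed with alpha = 0; solve S / M**alpha == 1 for alpha.
    S = sum(count * x ** z for z, count in hist.items())
    return math.log(S) / math.log(M)
```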
Table for the coefficients $$h_k$$
Assuming $$n=3, \alpha=0$$.
The table reads as follows (example):
$$G_{800}(x) = 24 x + 10x^4 + x^9 + 7 x^{36} + 4 x^{49}+30 x^{64}+\cdots$$
Main fact: There is no solution to $$u^n+v^n=w^n$$ (with $$0 < u, v, w \leq M$$) if and only if $$G_M(0) = 0$$. This result is trivial.
Here $$n$$ is assumed to be fixed. Of course we are interested in $$G(x) = \lim_{M\rightarrow\infty} G_M(x), \mbox{ for } |x|<1.$$
First, note that the case $$n=2$$ leads to a singularity, and $$G$$ does not exist if $$n=2$$, at least not with $$\alpha=0$$ (but maybe with $$\alpha=1$$). Also $$n$$ can be a real number, but it must be larger than $$2$$. For instance, it seems that $$n=2.5$$ works, in the sense that it does not lead to a singularity for $$G$$. Also, we are interested in $$x$$ close to zero, say $$-0.5\leq x \leq 0.5$$. Finally, $$G(x)$$ is properly defined (to be proved, may not be easy!) if $$|x|<1$$ and $$n>2$$. If $$n$$ is not an integer, there is no Taylor approximation for $$G_M$$, as the successive powers in the Taylor expansion would be positive real numbers, but not integers (in that case it means $$G_M(x)$$ is defined only for $$0\leq x <1$$.)
Below is the plot for $$G_M(x)$$ with $$-0.5 \leq x \leq 0.5$$ and $$M=200$$.
Note that as $$M\rightarrow\infty$$, the function $$G_M$$ tends to a straight line around $$x=0$$, with $$G(0)=0$$. This suggests that if there are solutions to $$u^n + v^n = w^n$$, with $$n=3$$, then the number of solutions must be $$o(M)$$. The same is true if you plot the same chart for any $$n>2$$. Of course, this assumes that $$G$$ does not have a singularity at $$x=0$$. Also, if some $$(u,v,w)$$ is a solution, any multiple is also a solution: so the number of solutions, if nonzero, should grow at least linearly in $$M$$. This suggests that indeed, no solution exists.
By contrast, the plot below corresponds to $$n=2, \alpha=0, M = 200$$. Clearly, $$G_M(0) > 0$$, proving that $$u^2 + v^2 = w^2$$ has many, many solutions, even for $$0 < u, v, w \leq 200$$.
Below is the source code (Perl) used to compute $$G_M$$. It is easy to implement it in a distributed environment.
```perl
$M=200;
$n=2;
$alpha=0;

for ($u=1; $u<=$M; $u++) {
  for ($v=1; $v<=$M; $v++) {
    for ($w=1; $w<=$M; $w++) {
      if (($u != $w) && ($v != $w)) {
        $z = ($u**$n + $v**$n - $w**$n)**2;
        $hash{$z}++;
      }
    }
  }
}

open(OUT, ">fermat.txt");
for ($x=-0.5; $x<=0.5; $x+=0.01) {
  $G = 0;
  foreach $z (keys(%hash)) {
    if ($z < 20) {
      $G += $hash{$z} * ($x**$z);
    }
  }
  $G = $G / ($M**$alpha);
  print OUT "$x\t$G\n";
}
close(OUT);
```
This code runs very slowly because it generates a huge hash table. If we are only interested in the first few coefficients $$h_k$$, then the following change in the triple loop significantly improves the speed of the calculations:
```perl
for ($u=1; $u<=$M; $u++) {
  for ($v=1; $v<=$M; $v++) {
    for ($w=1; $w<=$M; $w++) {
      if (($u != $w) && ($v != $w)) {
        $z = ($u**$n + $v**$n - $w**$n)**2;
        if ($z < 2000) {
          $hash{$z}++;
        }
      }
    }
  }
}
```
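For readers who prefer Python, here is an equivalent sketch (my own translation of the loop above, not the original code), which counts only the small values of $$z$$:

```python
from collections import Counter

M, n = 200, 3
hist = Counter()

# Count z = (u^n + v^n - w^n)^2 for 0 < u, v, w <= M, excluding u = w and v = w.
for u in range(1, M + 1):
    un = u ** n
    for v in range(1, M + 1):
        s = un + v ** n
        for w in range(1, M + 1):
            if u != w and v != w:
                z = (s - w ** n) ** 2
                if z < 2000:        # keep only the first few coefficients
                    hist[z] += 1

# h_k is the coefficient of x^(k^2), i.e. hist[k*k]; for n = 3 and M = 200,
# hist[1] should come out to 12, matching the list above.
for z in sorted(hist):
    print(z, hist[z])
```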
Note: I did this work not because of my interest in Fermat's last theorem, but as I was exploring generating functions for sums of squares. The methodology is similar in both cases, though a little simpler for sums of squares.
• I'm voting to close this question as off-topic because this does not appear to be a question. Dec 26, 2019 at 18:10
• $G$ may fail to exist if, for some $k\ne0$, there are infinitely many solutions to $x^n+y^n-z^n=k$. Dec 26, 2019 at 18:36
• It is not true that you get a generating function for the solutions of the Fermat equation from the $\theta$ function, you need the version $\Theta_k (q) = \sum_n q^{n^k}$ whose Mellin transform is $\Gamma(s)\zeta(ks)$ which lacks a functional equation for $k\ne 2$, ie. $\Theta_k$ is not a modular form and it looses the nice theory that $M_1(\Gamma_1(4))$ (the space containing $\Theta_2^2$) is finite dimensional from which we obtain a (multiplicative) closed-form for the coefficients of $\Theta_2^2$. Dec 26, 2019 at 19:00
• The close and down votes seem to me to be based on some sort of religious prejudice against questions that even remotely touch on such famous conjectures or former-conjectures like FLT, instead of a problem with the question itself. Dec 26, 2019 at 23:42
• @YiFan: It's OK, I don't take it personally. It will be posted on my own blog in two weeks or so when a few things about this article get settled down. It will probably generate 5,000 views. I am really sad that my own platform is terrible for writing math equations, and frankly, this is the main reason I post here until I get our vendor to fix this issue. I guess people on this platform prefer rudimentary material, like homework questions. Even ridiculous proofs of FLT get 7,000 page views here and plenty of answers. On my platform, these posts would be rejected right away Dec 27, 2019 at 3:22
I recall that Rosser in the '40s had shown the smallest exponent without a resolved status was $$>100\,000\,000$$. I recall a number of results of the shape "if the exponent has $$r$$ distinct prime factors, (a subset of) $$x$$, $$y$$, and $$z$$ have more than $$r$$ prime factors". This suggests that $$M$$ must be stupendously very much a lot larger than $$200$$ before a graph of $$G_M$$ suggests anything significant, even using partial results from 70 years ago.
I don't see any attempt to bound $$|G_M - G_\infty|$$ here, so the graph of $$G_\infty$$ need not be anywhere near the graph of $$G_{200}$$ that is shown. This is a showstopper for me because we expect to have a very slowly growing function in $$M$$. All a $$G_M(0) = 0$$ can show is that we haven't gone far enough out along the $$M$$ axis (... and we have to go out to infeasibly large $$M$$ to reach what was terra incognita many decades ago).
• It may very well be that instead of dividing by $M^\alpha$, a slower growth, say dividing by $\log \log M$, might work. The growth rate might also depend on $n$. Dec 27, 2019 at 7:18
• I added a table that sheds some light on the growth rate for the $h_k$'s. Dec 27, 2019 at 18:31
• @VincentGranville : Until $k$ is vastly larger than $10^8$, you haven't shown anything that might charitably be called suggestive. Dec 27, 2019 at 20:36
• You could push the limit much higher indeed, maybe $10^{10^{10}}$; some math stuff shows a trend that only reverses for extremely high values of $n$. But you need to start with something. Dec 27, 2019 at 22:52
https://www.jicce.org/journal/view.html?uid=1189&vmd=Full | Journal of information and communication convergence engineering 2022; 20(3): 212-218
Published online September 30, 2022
https://doi.org/10.56977/jicce.2022.20.3.212
© Korea Institute of Information and Communication Engineering
## A 4K-Capable Hardware Accelerator of Haze Removal Algorithm using Haze-relevant Features
Seungmin Lee and Bongsoon Kang* , Member, KIICE
Department of Electronics Engineering, Dong-A University, Busan 49315, Korea
Correspondence to : *Bongsoon Kang (E-mail: bongsoon@dau.ac.kr, Tel: +82-51-200-7703)
Department of Electronics Engineering, Dong-A University, Busan 49315, Korea
Received: January 3, 2022; Revised: January 3, 2022; Accepted: August 17, 2022
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
The performance of vision-based intelligent systems, such as self-driving cars and unmanned aerial vehicles, is subject to weather conditions, notably the frequently encountered haze or fog. As a result, studies on haze removal have garnered increasing interest from academia and industry. This paper hereby presents a 4K-capable hardware implementation of an efficient haze removal algorithm with the following two improvements. First, the depth-dependent haze distribution is predicted using a linear model of four haze-relevant features, where the model parameters are obtained through maximum likelihood estimates. Second, the approximated quad-decomposition method is adopted to estimate the atmospheric light. Extensive experimental results then follow to verify the efficacy of the proposed algorithm against well-known benchmark methods. For real-time processing, this paper also presents a pipelined architecture comprised of customized macros, such as split multipliers, parallel dividers, and serial dividers. The implementation results demonstrated that the proposed hardware design can handle DCI 4K videos at 30.8 frames per second.
Keywords Field-programmable gate array, Hardware accelerator, Haze removal, Real-time processing
### I. INTRODUCTION

The industrial structure has been changing dramatically due to the Fourth Industrial Revolution (or Industry 4.0), which dominates the mass surveillance and autonomous driving industries. Vision-based intelligent systems, such as self-driving cars and unmanned aerial vehicles, are being rapidly developed. These life-critical systems adopt high-level object recognition algorithms to sense their environment and operate without human involvement. However, as the performance of these algorithms is subject to weather conditions, poor visibility resulting from adverse weather can trigger a cascading failure that may lead to unfortunate consequences. Therefore, studies on visibility restoration are essential for autonomous vehicles. In this research direction, haze removal (or, equivalently, image dehazing) has garnered growing interest from researchers because haze is seemingly the most frequently encountered weather in practice. In this context, haze refers to the suspended aerosols in the atmosphere. The particle-particle collision of these aerosols and light photons causes the atmospheric scattering phenomenon, reducing the visibility of captured scenes and rendering haze removal research relevant to visibility restoration.
Haze removal algorithms are generally based on the simplified Koschmieder model [1], which describes hazy image formation as follows:
$$I(x) = J(x)\,t(x) + A\,\big(1 - t(x)\big), \tag{1}$$
where $I$ represents the input image, $J$ the scene radiance, $t$ the transmission map, $A$ the atmospheric light, and $x$ the pixel coordinates. Assuming that $H$ and $W$ are the image height and width, respectively, $I$, $J$, and $A$ take on values in $\mathbb{R}^{H\times W\times 3}$, whereas $t \in \mathbb{R}^{H\times W}$. According to (1), recovering $J$ is an ill-posed problem because $I$ is the only observation. Thus, early attempts in haze removal solved this problem by using multiple input images. However, as it is burdensome to acquire such input data, researchers have shifted their interest to single-image haze removal.
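For orientation, once estimates of $t$ and $A$ are available, recovering $J$ is a pointwise inversion of (1). A minimal sketch in Python (our own illustration; the lower bound `t0` on the transmission is a common heuristic to avoid amplifying noise, not part of the model itself):

```python
import numpy as np

def recover_radiance(I, t, A, t0=0.1):
    """Invert I = J*t + A*(1 - t) pixelwise; I in [0, 1] with shape (H, W, 3)."""
    t = np.clip(t, t0, 1.0)[..., np.newaxis]  # broadcast t over color channels
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)
```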
According to a recent systematic review [2], this haze removal category can be further partitioned into three subcategories: image processing, machine learning, and deep learning. Concerning the first, the dark channel prior (DCP) proposed by He et al. [3] is typical. The DCP states that outdoor non-sky images exhibit an extremely dark channel, whose intensity approximates zero in local patches around all pixels. They then adopted computationally intensive soft matting to refine the transmission estimate. This method demonstrated good performance in general, but it substantially prolonged the execution time due to the inherent cost of soft matting. It is also subject to color distortion when the input image contains a broad sky or shady objects. These limitations left considerable room for improvement, and many follow-up studies have been proposed. For example, Kim et al. [4] reduced the computational complexity by using the modified hybrid median filter—equipped with excellent edge-preserving characteristics—to eliminate the refinement step. This elimination then favored a fast and efficient hardware implementation [4,5].
In the second subcategory, a typical work is the color attenuation prior (CAP) proposed by Zhu et al. [6]. The CAP was also discovered through extensive observations on outdoor images. It states that the scene depth is closely correlated with the difference between the saturation and the value. Zhu et al. [6] modeled this correlation using a linear model, whose parameters were estimated utilizing the maximum likelihood estimates (MLE). The CAP provides a fast and effective haze removal, albeit with color distortion and background noise. In a follow-up study, Ngo et al. [7] addressed these two problems using adaptive weighting and low-pass filtering.
Finally, deep-learning techniques, such as convolutional neural networks (CNNs) and generative adversarial networks (GANs), have also found their applications in haze removal. The pioneering work of Cai et al. [8] can be taken as a prime example. They proposed a well-performed three-layer CNN for estimating the transmission map from a single input image. In subsequent work, Li et al. [9] employed serial multiscale mapping to design a CNN that estimates and refines the transmission map from coarse to fine scales. Although deep-learning-based haze removal methods generally deliver satisfactory performance, they are subject to the domain-shift problem.
This paper presents a machine-learning-based method that improves the CAP by considering two new haze-relevant features in addition to the saturation and value. More precisely, we estimate the scene depth as a linear combination of local entropy, dark channel, saturation, and value. We then present a comparative evaluation with other state-of-the-art benchmark methods to verify the efficacy of the proposed haze removal algorithm. Furthermore, we demonstrate that the software implementation per se cannot satisfy real-time processing requirements. Consequently, we design a 4K-capable hardware accelerator that can handle 4K videos at 30.8 frames per second (fps).
The rest of this paper is structured as follows. Section 2 explores the haze-relevant features and describes the proposed algorithm in detail. Section 3 presents the comparative evaluation with benchmark algorithms, and Section 4 demonstrates the necessity of a hardware accelerator for real-time processing. After that, Section 5 provides a detailed description of the proposed hardware design and interprets the implementation results. Finally, Section 6 concludes the paper.
### A. Haze-relevant Features
Under the single image dehazing approach, most algorithms estimate the transmission map in two major steps: feature extraction and regression. On the one hand, these two are easily noticeable in image-processing and machine-learning-based methods. For example, He et al. [3] calculated the normalized dark channel (feature extraction) and subtracted it from unity (regression) to estimate the transmission map. On the other hand, deep-learning-based methods usually introduce multiscale mapping between these two steps to improve robustness against spatial variance in the input image. This observation demonstrates the fundamental importance of haze-relevant features in haze removal. Recently, Ngo et al. [10] explored and summarized the haze-relevant features hitherto reported in the literature. In addition, they also verified the correlation between those features and the haze distribution using representative hazy and haze-free image patches extracted from well-publicized datasets. Some of the verification results—corresponding to the saturation, value, dark channel, and local entropy—are illustrated in Fig. 1, where Figs. 1(c) and (d) are adopted from [10]. The normalized histograms demonstrate that feature values follow the normal distribution, where the means of the hazy and haze-free distributions are well separated. Also, based on the degree of overlap, it is observed that the dark channel exhibits the strongest correlation with haze distribution, followed by saturation, value, and local entropy.
Fig. 1. Normalized histograms of four haze-relevant features: (a) saturation, (b) value, (c) dark channel, and (d) local entropy.
Inspired by the work of Zhu et al. [6], we also utilize a linear model to estimate the transmission map from the saturation, value, dark channel, and local entropy. The reason for using two additional features comes from observing the normalized histograms in Fig. 1. It is conspicuous that each feature correlates with the haze distribution in a different way. In addition, there are currently no features with a perfect correlation: each of saturation, value, dark channel, and local entropy fails to represent the haze distribution in particular circumstances. The breakdown of the dark channel in sky regions or shady objects is a prime example. Therefore, using multiple features allows mutual compensation for their failures. The sky region is haze-free in the previous example, but its uniformly high intensities across all channels result in high dark channel values. Based on the dark channel, the sky region is misclassified as densely hazy instead of haze-free. However, as this region is also textureless, its haze condition can be recognized using the local entropy. So, this example demonstrates that the local entropy can compensate for the failure of the dark channel in the sky region.
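To make the four features concrete, a rough extraction sketch follows (our own illustration with assumed libraries and an assumed 5 × 5 patch size, not the paper's implementation; normalization of the entropy channel is omitted):

```python
import numpy as np
from scipy.ndimage import minimum_filter
from skimage.color import rgb2gray, rgb2hsv
from skimage.filters.rank import entropy
from skimage.morphology import square
from skimage.util import img_as_ubyte

def haze_features(rgb, patch=5):
    """rgb: float image in [0, 1] with shape (H, W, 3). Returns (f1, f2, f3, f4)."""
    hsv = rgb2hsv(rgb)
    f1 = hsv[..., 1]                                  # saturation
    f2 = hsv[..., 2]                                  # value
    f3 = minimum_filter(rgb.min(axis=2), size=patch)  # dark channel (patch minimum)
    f4 = entropy(img_as_ubyte(rgb2gray(rgb)), square(patch))  # local entropy, in bits
    return f1, f2, f3, f4
```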
### B. Scene Depth Estimation
As discussed earlier, we improved the work of Zhu et al. [6] to estimate the scene depth from the saturation, value, dark channel, and local entropy using a linear model. This model is illustrated in (2), where d denotes the scene depth, f1 saturation, f2 value, f3 dark channel, and f4 local entropy. The corresponding parameters are θ1, θ2, θ3, θ4, while θ0 represents the bias. The variable ε denotes the model error, and we assume that it follows the normal distribution with zero mean and σ2 variance. According to the characteristics of the normal distribution, the scene depth is also normally distributed with (θ0 + θ1f1 + θ2f2 + θ3f3 + θ4f4) mean and σ2 variance.
$$d(x) = \theta_0 + \theta_1 f_1(x) + \theta_2 f_2(x) + \theta_3 f_3(x) + \theta_4 f_4(x) + \varepsilon(x). \tag{2}$$
Subsequently, we leverage the MLE technique to determine the parameters that maximize the likelihood function [11], wherein the synthetic training dataset is prepared as follows. We utilize the 500IMG dataset [11] whose 500 constituent haze-free images are collected from free image-sharing services. Then, we employ the enhanced equidistribution [11] to create the random depth maps, which serve as the ground-truth references in the training dataset. We also draw the random atmospheric light—whose values range from 0.8 to 1—from the enhanced equidistribution. Given the scene depth, we use the following (3) to calculate the transmission map.
$$t(x) = \exp\big(-\beta_{sc}\, d(x)\big), \tag{3}$$
where $\beta_{sc}$ denotes the atmospheric scattering coefficient, normally set to one. Because the transmission map and atmospheric light are now available, we substitute these two into (1) to produce the hazy synthetic images, whose saturation, value, dark channel, and local entropy serve as the inputs in the training dataset.
We then apply the mini-batch gradient ascent algorithm [11] on the training dataset created above to estimate the parameters. The best estimates that we obtained are θ0 = −0.5570, θ1 = 1.5210, θ2 = 0.9042, θ3 = 0.7543, and θ4 = −0.3685. It is worth noting that this parameter estimation step is performed offline, so it does not affect the run-time of the proposed method.
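Given the features, the depth and transmission estimates follow directly from (2) and (3). A sketch using the reported parameter estimates (clipping the predicted depth to be non-negative is our own assumption, as is the default $\beta_{sc} = 1$ noted with (3)):

```python
import numpy as np

THETA = (-0.5570, 1.5210, 0.9042, 0.7543, -0.3685)  # (theta_0, ..., theta_4)

def estimate_transmission(f1, f2, f3, f4, beta_sc=1.0):
    t0, t1, t2, t3, t4 = THETA
    d = t0 + t1 * f1 + t2 * f2 + t3 * f3 + t4 * f4   # linear depth model, eq. (2)
    return np.exp(-beta_sc * np.clip(d, 0.0, None))  # eq. (3)
```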
### C. Atmospheric Light Estimation
Researchers usually adopted the atmospheric light estimation (ALE) method of He et al. [3], which locates the atmospheric light in the “most opaque” region. He et al. [3] defined that region as the pixels whose dark channel values are within the top 0.1%. Then, the pixel with the highest intensity in the red-green-blue color space was selected as the atmospheric light.
In a different approach, Tarel and Hautiere [12] assumed that the atmospheric light was pure white if the input image was correctly white-balanced. However, this ALE method, and even that of He et al. [3], are prone to incorrect estimation when the input image contains bright objects, such as white cars or light bulbs. The quad-decomposition algorithm proposed by Park et al. [13] is a good alternative. The input image is now recursively partitioned into quarters based on the average luminance. This partition procedure can eliminate bright objects effectively because of their high contrast to the background. Nevertheless, as the partition requires many frame buffers, the quad-decomposition algorithm is inefficient in memory usage. Therefore, Ngo et al. [11] developed an approximated version that is free of frame buffers. So, in this study, we utilize the approximated quad-decomposition method to estimate the atmospheric light.
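For intuition, a sketch of the basic (non-approximated) quad-decomposition idea follows; the stopping size and the use of the mean RGB value as the luminance proxy are our own assumptions, and the frame-buffer-free variant of [11] would restructure this recursion for hardware:

```python
import numpy as np

def quad_decompose_airlight(img, min_size=32):
    """Recursively keep the quadrant with the highest average luminance."""
    region = img
    while min(region.shape[:2]) > min_size:
        h, w = region.shape[0] // 2, region.shape[1] // 2
        quads = [region[:h, :w], region[:h, w:], region[h:, :w], region[h:, w:]]
        region = max(quads, key=lambda q: q.mean())
    # Atmospheric light: the brightest pixel of the final region.
    flat = region.reshape(-1, region.shape[-1])
    return flat[flat.mean(axis=1).argmax()]
```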
After that, we substitute the estimates of transmission map and atmospheric light into (1) to recover the scene radiance. Finally, we adopt the adaptive tone remapping method of Cho et al. [14] to post-process the recovered image.
This section compares the performance of the proposed method against four benchmark algorithms, including those proposed by Tarel and Hautiere [12], Zhu et al. [6], Kim et al. [4], and Ngo et al. [7]. Henceforth, we refer to these four as Tarel, Zhu, Kim, and Ngo, respectively. For comparison, we employ three full-reference metrics: structural similarity (SSIM) [15], feature similarity extended to color images (FSIMc) [16], and tone-mapped image quality index (TMQI) [17]. These metrics take on values ranging from zero to unity, wherein higher values signify better performance. Also, we use two real datasets (I-HAZE [18] and O-HAZE [19]) that comprise 30 and 45 pairs of hazy and haze-free images, respectively. Table 1 shows the average SSIM, FSIMc, and TMQI scores on the I-HAZE and O-HAZE datasets, and the best results are displayed in bold. It can be observed that the proposed algorithm is the best performing under SSIM and FSIMc, regardless of whether input images are indoor or outdoor. Additionally, the performance gap between the proposed method and Zhu is easily noticeable, attributed to the use of two new haze-relevant features. The saturation, value, dark channel, and local entropy can compensate for one another, boosting performance when saturation and value fail to represent the haze distribution. So, in general, the proposed algorithm can be considered superior to the four benchmark algorithms.

Fig. 2 shows hazy images and corresponding dehazing results obtained from the four benchmark methods and the proposed algorithm. The first row shows the dehazing results of a hazy image from the IVC dataset [20], which consists of 25 real hazy images. This dataset was excluded from the quantitative evaluation because it does not contain ground-truth references. In the second and third rows, haze removal was performed on images from the I-HAZE and O-HAZE datasets, respectively. It can be observed that Tarel exhibits excellent performance, but color distortion arises in the sky region. Meanwhile, the results of Zhu hinder object recognition due to excessive haze removal. In the results of Kim, the performance is average, and color distortion also arises in the upper part of the IVC and O-HAZE images. Conversely, the results of Ngo are satisfactory without visually unpleasant distortion. However, in the IVC and I-HAZE images, the dehazing power is too strong, leading to the occurrence of black pixels, as witnessed in the dog’s fur and the bottom of the sofa. Finally, the proposed method removes haze effectively and well preserves the dog’s fur color. In addition, in the I-HAZE and O-HAZE images, the dehazing results are more satisfactory than those of the benchmark methods.
Table 1. Average structural similarity (SSIM), feature similarity extended to color images (FSIMc), and tone-mapped image quality index (TMQI) scores on I-HAZE and O-HAZE. The best results are displayed in bold.

| Method | I-HAZE SSIM | I-HAZE FSIMc | I-HAZE TMQI | O-HAZE SSIM | O-HAZE FSIMc | O-HAZE TMQI |
|---|---|---|---|---|---|---|
| Tarel | 0.7200 | 0.8055 | 0.7740 | 0.7263 | 0.7733 | 0.8416 |
| Zhu | 0.6864 | 0.8252 | 0.7512 | 0.6647 | 0.7738 | 0.8118 |
| Kim | 0.6424 | 0.7879 | 0.7026 | 0.4702 | 0.6869 | 0.6509 |
| Ngo | 0.7600 | 0.8482 | **0.7892** | 0.7322 | 0.8219 | **0.8935** |
| Proposed | **0.7642** | **0.8658** | 0.7878 | **0.7329** | **0.8920** | 0.8351 |
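Scores of this kind can be reproduced with off-the-shelf implementations; for instance, a minimal SSIM check using scikit-image (our own sketch with hypothetical file names, not the authors' evaluation code):

```python
from skimage import io
from skimage.metrics import structural_similarity

dehazed = io.imread("dehazed.png")        # hypothetical file names
reference = io.imread("ground_truth.png")

# channel_axis=-1 treats the last axis as color (scikit-image >= 0.19);
# data_range=255 assumes 8-bit images.
score = structural_similarity(dehazed, reference, channel_axis=-1, data_range=255)
print(f"SSIM = {score:.4f}")
```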
Fig. 2. Qualitative comparison with other haze removal methods on the IVC, I-HAZE, and O-HAZE datasets.
### IV. IMPORTANCE OF HARDWARE IMPLEMENTATION
For an image processing algorithm to be deployed in real-world systems, it should handle image data at a minimum rate of 25 fps, depending on whether the color encoding standard is PAL or NTSC [21]. Therefore, we conducted a run-time comparison between several haze removal algorithms and tabulated the results in Table 2. The simulation environment is MATLAB R2019a, running on a host computer with an Intel Core i9-9900K CPU, an NVIDIA TITAN RTX GPU, and 64 GB RAM. It can be observed from Table 2 that none of the algorithms can handle images in real-time. This finding suggests that hardware implementation is essential for coping well with the real-time processing requirement.
Table 2. Run-time comparison of haze removal algorithms (in seconds) for three image sizes.

| Method | 640 × 480 | 1024 × 768 | 4096 × 2160 |
|---|---|---|---|
| He | 12.64 | 32.37 | 470.21 |
| Tarel | 0.28 | 0.76 | 9.02 |
| Zhu | 0.22 | 0.55 | 6.39 |
| Kim | 0.16 | 0.43 | 4.81 |
| Ngo | 0.17 | 0.44 | 5.22 |
| Proposed | 0.93 | 2.32 | 26.95 |
Table 3. Hardware implementation result of the proposed hardware design (device: Xc7z045-2ffg900).

| Slice Logic Utilization | Available | Used | Utilization |
|---|---|---|---|
| Slice Registers (#) | 437,200 | 64,918 | 14.85% |
| Slice LUTs (#) | 218,600 | 58,126 | 26.59% |
| RAM36E1s | 545 | 58 | 10.64% |

Minimum Period: 3.67 ns
Maximum Frequency: 272.48 MHz

* The EDA tool was supported by the IC Design Education Center (IDEC), Korea.
### V. HARDWARE IMPLEMENTATION FOR REAL-TIME PROCESSING
Fig. 3 presents the hardware architecture of the proposed method, which can be partitioned into memories, logic circuits, and arithmetic circuits. Two 1024 × 32-bit SPRAMs and three 256 × 8-bit SPRAMs are used for the atmospheric light estimation [11] and adaptive tone remapping [14]. The other memories are used as line memories for 5 × 5 filtering operations, so the latency from input to output is seven image lines. In addition, the logic circuits consist of 10 modules. The system controller in the logic circuits is responsible for input-output operations of the image/video data. Saturation, value, dark channel, and local entropy are calculated in parallel in the 4-feature module. Furthermore, to improve the maximum frequency, we utilized split multipliers for large multiplications where the operands' word length is greater than 16 bits.
Fig. 3. Hardware architecture of the proposed haze removal algorithm.
Table 3 summarizes the hardware implementation result in terms of slice registers, LUTs, RAM36E1s, and maximum frequency. Slice registers and LUTs represent the logic area, whereas RAM36E1s represents the memory area. The proposed design used 64,918 registers, 58,126 LUTs, and 58 RAM36E1s, respectively. The fastest attainable frequency was 272.48 MHz. This information can then be used to obtain the maximum processing speed (MPS):
$$\mathrm{MPS} = \frac{f_{max}}{(W + H_B)(H + V_B)},$$
where $f_{max}$ denotes the maximum frequency in Table 3; $W$ and $H$ denote the input image's width and height, respectively; and $H_B$ and $V_B$ denote the horizontal and vertical blank periods. The hardware was implemented with the blank periods minimized to one pixel horizontally and one image line vertically to increase the MPS. With these values, the proposed design can process DCI 4K video at 30.8 fps, satisfying the real-time processing requirement of 25 fps or greater.
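Plugging the implementation numbers into the formula above (with the one-pixel and one-line blank periods just mentioned) reproduces the reported throughput:

```python
f_max = 272.48e6         # Hz, maximum frequency from Table 3
W, H = 4096, 2160        # DCI 4K resolution
HB, VB = 1, 1            # one-pixel horizontal, one-line vertical blanking

mps = f_max / ((W + HB) * (H + VB))
print(f"{mps:.1f} fps")  # ~30.8 fps
```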
Fig. 4 depicts the C/C++ platform and verification board for the real-world execution. The top and middle thirds of Fig. 4 belong to the platform, whereas the bottom third depicts the system-on-a-chip (SoC) board. Moreover, the upper part of the platform shows side-by-side input-output data for ease of performance verification. The platform control panel is responsible for providing input data to the SoC board.
Fig. 4. Hardware verification using a system-on-a-chip evaluation board.
Meanwhile, the algorithm control provides a convenient graphical user interface for configuring the hardware design running on the board. This C/C++ platform is a convenient means for verifying the real-time processing of the proposed hardware design.
A high-performance haze removal algorithm and its corresponding 4K-capable hardware accelerator were presented in this paper. We proposed using two new haze-relevant features (dark channel and local entropy) to estimate the transmission map, based on the observation that they can effectively compensate for the failures of the CAP. In addition, we adopted a frame-buffer-free version of the quad-decomposition algorithm to estimate the atmospheric light and thereby reduce hardware resources. We then provided extensive experimental results to demonstrate the superiority of the proposed method over benchmark algorithms. We also conducted a run-time comparison to show that the software implementation per se was insufficient for real-time processing. Therefore, we presented a 4K-capable hardware design that can handle DCI 4K videos at 30.8 fps, rendering the proposed algorithm highly relevant for high-quality, high-speed real-time systems, such as autonomous cars and drones.
This research was funded by research funds from Dong-A University, Busan, Korea.
1. Z. Lee, and S. Shang, Visibility: How applicable is the century-old koschmieder model?, Journal of the Atmospheric Sciences, vol. 73, no. 11, pp. 4573-4581, Nov., 2016. DOI: 10.1175/JAS-D-16-0102.1.
2. D. Ngo, and S. Lee, and T. M. Ngo, and G. -D. Lee, and B. Kang, Visibility restoration: A systematic review and meta-analysis, Sensors, vol. 21, no. 8, p. 2625, Apr., 2021. DOI: 10.3390/s21082625.
3. K. He, J. Sun, and X. Tang, Single image haze removal using dark channel prior, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341-2353, Dec., 2011. DOI: 10.1109/TPAMI.2010.168.
4. G. -J. Kim and S. Lee and B. Kang, Single image haze removal using hazy particle maps, IEICE Transactions on Fundamentals of Electronics Communications and Computer Sciences, vol. E101-A, no. 11, pp. 1999-2002, Nov., 2018. DOI: 10.1587/transfun.E101.A.1999.
5. D. Ngo and G. -D. Lee and B. Kang, A 4K-capable FPGA implementation of single image haze removal using hazy particle maps, Applied Sciences, vol. 9, no. 17, p. 3443, Aug., 2019. DOI: 10.3390/app9173443.
6. Q. Zhu and J. Mai and L. Shao, A fast single image haze removal algorithm using color attenuation prior, IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3522-3533, Nov., 2015. DOI: 10.1109/TIP.2015.2446191.
7. D. Ngo and G. -D. Lee and B. Kang, Improved color attenuation prior for single-image haze removal, Applied Sciences, vol. 9, no. 19, p. 4011, Sep., 2019. DOI: 10.3390/app9194011.
8. B. Cai, and X. Xu, and K. Jia, and C. Qing, and D. Tao, DehazeNet: An end-toend system for single image haze removal, IEEE Transactions on Image Processing, vol. 25, no. 11, pp. 5187-5198, Nov., 2016. DOI: 10.1109/TIP.2016.2598681.
9. B. Li, and W. Ren, and D. Fu, and D. Tao, and D. Feng, and W. Zeng, and Z. Wang, Benchmarking single-image dehazing and beyond, IEEE Transactions on Image Processing, vol. 28, no. 1, pp. 492-505, Jan., 2019. DOI: 10.1109/TIP.2018.2867951.
10. D. Ngo and G. -D. Lee and B. Kang, Haziness degree evaluator: A knowledge-driven approach for haze density estimation, Sensors, vol. 21, no. 11, Jun., 2021. DOI: 10.3390/s21113896.
11. D. Ngo, and S. Lee, and G. -D. Lee, and B. Kang, Single-image visibility restoration: A machine learning approach and its 4K-capable hardware accelerator, Sensors, vol. 20, no. 20, p. 5795, Oct., 2020. DOI: 10.3390/s20205795.
12. J. -P. Tarel, and N. Hautière, Fast visibility restoration from a single color or gray level image, in 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, pp. 2201-2208, 2009. DOI: 10.1109/ICCV.2009.5459251.
13. D. Park, and H. Park, and D. K. Han, and H. Ko, Single Image dehazing with image entropy and information fidelity, in 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, pp. 4037-4041, 2014. DOI: 10.1109/ICIP.2014.7025820.
14. H. Cho, and G. -J. Kim, and K. Jang, and S. Lee, and B. Kang, Color image enhancement based on adaptive nonlinear curves of luminance features, Journal of Semiconductor Technology and Science, vol. 15, no. 1, pp. 60-67, Feb., 2015. DOI: 10.5573/JSTS.2015.15.1.060.
15. Z. Wang, and A. C. Bovik, and H. R. Sheikh, and E. P. Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE Transactions on. Image Processing, vol. 13, no. 4, pp. 600-612, Apr., 2004. DOI: 10.1109/TIP.2003.819861.
16. L. Zhang, and L. Zhang, and X. Mou, and D. Zhang, FSIM: A feature similarity index for image quality assessment, IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378-2386, Aug., 2011. DOI: 10.1109/TIP.2011.2109730.
17. H. Yeganeh, and W. Zhou, Objective quality assessment of tonemapped images, IEEE Transactions on Image Processing, vol. 22, no. 2, pp. 657-667, Feb., 2012. DOI: 10.1109/TIP.2012.2221725.
18. C. Ancuti, and C. O. Ancuti, and R. Timofte, and C. D. Vleeschouwer, I-HAZE: A dehazing benchmark with real hazy and haze-free indoor images, in Advanced Concepts for Intelligent Vision Systems, Poitiers, France, pp. 620-631, 2018. DOI: 10.1007/978-3-030-01449-0_52.
19. C. O. Ancuti, and C. Ancuti, and R. Timofte, and C. D. Vleeschouwer, O-HAZE: A dehazing benchmark with real hazy and haze-free outdoor images, in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City: UT, USA, pp. 867-875, 2018. DOI: 10.1109/CVPRW.2018.00119.
20. K. Ma and W. Liu and Z. Wang, Perceptual evaluation of single image dehazing algorithms, in 2015 IEEE International Conference on Image Processing (ICIP), Quebec City: QC, Canada, pp. 3600-3604, 2015. DOI: 10.1109/ICIP.2015.7351475.
21. K. Jack, Chapter 9: NTSC and PAL digital encoding and decoding, in Video Demystified, 4th ed, Elsevier India, pp. 394-471, 2004.
Seungmin Lee

received his B.S. and M.S. degrees in Electronics Engineering from Dong-A University, Busan, South Korea, in 2016 and 2018, respectively. He is currently pursuing a Ph.D. in Electronics Engineering at Dong-A University. His research interests include image processing and SoC architectures for real-time processing.
Bongsoon Kang
received his B.S. degree in Electronics Engineering from Yonsei University, Seoul, South Korea, in 1985, his M.S. degree in Electrical Engineering from the University of Pennsylvania, USA, in 1987, and his Ph.D. degree in Electrical and Computer Engineering from Drexel University, USA, in 1990. His research interests include image processing and SoC architectures for real-time processing.
### Article
Journal of information and communication convergence engineering 2022; 20(3): 212-218
Published online September 30, 2022 https://doi.org/10.56977/jicce.2022.20.3.212
## A 4K-Capable Hardware Accelerator of Haze Removal Algorithm using Haze-relevant Features
Seungmin Lee and Bongsoon Kang* , Member, KIICE
Department of Electronics Engineering, Dong-A University, Busan 49315, Korea
Correspondence to:*Bongsoon Kang (E-mail: bongsoon@dau.ac.kr, Tel: +82-51-200-7703)
Department of Electronics Engineering, Dong-A University, Busan 49315, Korea
Received: January 3, 2022; Revised: January 3, 2022; Accepted: August 17, 2022
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
### Abstract
The performance of vision-based intelligent systems, such as self-driving cars and unmanned aerial vehicles, is subject to weather conditions, notably the frequently encountered haze or fog. As a result, studies on haze removal have garnered increasing interest from academia and industry. This paper hereby presents a 4K-capable hardware implementation of an efficient haze removal algorithm with the following two improvements. First, the depth-dependent haze distribution is predicted using a linear model of four haze-relevant features, where the model parameters are obtained through maximum likelihood estimates. Second, the approximated quad-decomposition method is adopted to estimate the atmospheric light. Extensive experimental results then follow to verify the efficacy of the proposed algorithm against well-known benchmark methods. For real-time processing, this paper also presents a pipelined architecture comprised of customized macros, such as split multipliers, parallel dividers, and serial dividers. The implementation results demonstrated that the proposed hardware design can handle DCI 4K videos at 30.8 frames per second.
Keywords: Field-programmable gate array, Hardware accelerator, Haze removal, Real-time processing
### I. INTRODUCTION
The industrial structure has been changing dramatically due to the Fourth Industrial Revolution (or Industry 4.0), which dominates the mass surveillance and autonomous driving industries. Vision-based intelligent systems, such as self-driving cars and unmanned aerial vehicles, are being rapidly developed. These life-critical systems adopt high-level object recognition algorithms to sense their environment and operate without human involvement. However, as the performance of these algorithms is subject to weather conditions, poor visibility resulting from adverse weather can trigger a cascading failure that may lead to unfortunate consequences. Therefore, studies on visibility restoration are essential for autonomous vehicles. In this research direction, haze removal (or, equivalently, image dehazing) has garnered growing interest from researchers because haze is seemingly the most frequently encountered weather condition in practice. In this context, haze refers to the suspended aerosols in the atmosphere. The collision of these aerosol particles with light photons causes the atmospheric scattering phenomenon, reducing the visibility of captured scenes and rendering haze removal research relevant to visibility restoration.
Haze removal algorithms are generally based on the simplified Koschmieder model [1], which describes hazy image formation as follows:
$I(x) = J(x)t(x) + A(1 - t(x)),$
where I represents the input image, J the scene radiance, t the transmission map, A the atmospheric light, and x the pixel coordinates. Assuming that H and W are the image height and width, respectively, I, J, and A take on values in $\mathbb{R}^{H \times W \times 3}$, whereas $t \in \mathbb{R}^{H \times W}$. According to (1), recovering J is an ill-posed problem because I is the only observation. Thus, early attempts in haze removal solved this problem by using multiple input images. However, as it is burdensome to acquire such input data, researchers have shifted their interest to single-image haze removal.
According to a recent systematic review [2], this haze removal category can be further partitioned into three subcategories: image processing, machine learning, and deep learning. Concerning the first, the dark channel prior (DCP) proposed by He et al. [3] is typical. The DCP states that outdoor non-sky images exhibit an extremely dark channel, whose intensity approximates zero in local patches around all pixels. They then adopted computationally intensive soft matting to refine the transmission estimate. This method demonstrated good performance in general, but it substantially prolonged the execution time due to the inherent cost of soft matting. It is also subject to color distortion when the input image contains a broad sky or shady objects. These limitations left considerable room for improvement, and many follow-up studies have been proposed. For example, Kim et al. [4] reduced the computational complexity by using the modified hybrid median filter—equipped with excellent edge-preserving characteristics—to eliminate the refinement step. This elimination then favored a fast and efficient hardware implementation [4,5].
In the second subcategory, a typical work is the color attenuation prior (CAP) proposed by Zhu et al. [6]. The CAP was also discovered through extensive observations on outdoor images. It states that the scene depth is closely correlated with the difference between the saturation and the value. Zhu et al. [6] modeled this correlation using a linear model, whose parameters were estimated utilizing the maximum likelihood estimates (MLE). The CAP provides a fast and effective haze removal, albeit with color distortion and background noise. In a follow-up study, Ngo et al. [7] addressed these two problems using adaptive weighting and low-pass filtering.
Finally, deep-learning techniques, such as convolutional neural networks (CNNs) and generative adversarial networks (GANs), have also found their applications in haze removal. The pioneering work of Cai et al. [8] can be taken as a prime example. They proposed a well-performing three-layer CNN for estimating the transmission map from a single input image. In subsequent work, Li et al. [9] employed serial multiscale mapping to design a CNN that estimates and refines the transmission map from coarse to fine scales. Although deep-learning-based haze removal methods generally deliver satisfactory performance, they are subject to the domain-shift problem.
This paper presents a machine-learning-based method that improves the CAP by considering two new haze-relevant features in addition to the saturation and value. More precisely, we estimate the scene depth as a linear combination of local entropy, dark channel, saturation, and value. We then present a comparative evaluation with other state-of-the-art benchmark methods to verify the efficacy of the proposed haze removal algorithm. Furthermore, we demonstrate that the software implementation per se cannot satisfy real-time processing requirements. Consequently, we design a 4K-capable hardware accelerator that can handle 4K videos at 30.8 frames per second (fps).
The rest of this paper is structured as follows. Section 2 explores the haze-relevant features and describes the proposed algorithm in detail. Section 3 presents the comparative evaluation with benchmark algorithms, and Section 4 demonstrates the necessity of a hardware accelerator for real-time processing. After that, Section 5 provides a detailed description of the proposed hardware design and interprets the implementation results. Finally, Section 6 concludes the paper.
### II. PROPOSED ALGORITHM

### A. Haze-relevant Features
Under the single image dehazing approach, most algorithms estimate the transmission map in two major steps: feature extraction and regression. On the one hand, these two are easily noticeable in image-processing and machine-learning-based methods. For example, He et al. [3] calculated the normalized dark channel (feature extraction) and subtracted it from unity (regression) to estimate the transmission map. On the other hand, deep learning-based methods usually introduce multiscale mapping between these two steps to improve robustness against spatial variance in the input image. This observation demonstrates the fundamental importance of haze-relevant features in haze removal. Recently, Ngo et al. [10] explored and summarized the haze-relevant features hitherto reported in the literature. In addition, they also verified the correlation between those features and the haze distribution using representative hazy and haze-free image patches extracted from well-publicized datasets. Some of the verification results—corresponding to the saturation, value, dark channel, and local entropy—are illustrated in Fig. 1, where Figs. 1(c) and (d) are adopted from [10]. The normalized histograms demonstrate that feature values follow the normal distribution, where the means of the hazy and haze-free distributions are well separated. Also, based on the degree of overlap, it is observed that the dark channel exhibits the strongest correlation with haze distribution, followed by saturation, value, and local entropy.
Figure 1. Normalized histograms of four haze-relevant features: (a) saturation, (b) value, (c) dark channel, and (d) local entropy.
Inspired by the work of Zhu et al. [6], we also utilize a linear model to estimate the transmission map from the saturation, value, dark channel, and local entropy. The reason for using two additional features comes from observing the normalized histograms in Fig. 1. It is conspicuous that each feature correlates with the haze distribution in a different way, and no feature currently offers a perfect correlation: saturation, value, dark channel, and local entropy each fail to represent the haze distribution in particular circumstances. The breakdown of the dark channel in sky regions or on shady objects is a prime example. Therefore, using multiple features allows them to compensate for one another's failures. The sky region is haze-free in the previous example, but its all-channel high intensities result in high dark channel values. Based on the dark channel, the sky region is misclassified as densely hazy instead of haze-free. However, as this region is also textureless, its haze condition can be recognized using the local entropy. So, this example demonstrates that the local entropy can compensate for the failure of the dark channel in the sky region.
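As a rough illustration of how these four features can be extracted, the Python sketch below uses OpenCV-style operations; the 5 × 5 window (chosen to match the 5 × 5 filtering mentioned in Section V) and all helper names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
import cv2

def haze_features(img_bgr, patch=5):
    """Return (saturation, value, dark channel, local entropy) maps."""
    img = img_bgr.astype(np.float32) / 255.0
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    saturation, value = hsv[..., 1], hsv[..., 2]
    # Dark channel: per-pixel channel minimum, then a patch-wise minimum filter.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(img.min(axis=2), kernel)
    # Local entropy of the gray image over the same neighborhood
    # (a straightforward, unoptimized loop for clarity).
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    entropy = np.zeros(gray.shape, dtype=np.float32)
    pad = patch // 2
    g = np.pad(gray, pad, mode='edge')
    for y in range(gray.shape[0]):
        for x in range(gray.shape[1]):
            block = g[y:y + patch, x:x + patch]
            hist = np.bincount(block.ravel(), minlength=256) / block.size
            p = hist[hist > 0]
            entropy[y, x] = -(p * np.log2(p)).sum()
    return saturation, value, dark, entropy
```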
### B. Scene Depth Estimation
As discussed earlier, we improved the work of Zhu et al. [6] to estimate the scene depth from the saturation, value, dark channel, and local entropy using a linear model. This model is illustrated in (2), where d denotes the scene depth, f1 saturation, f2 value, f3 dark channel, and f4 local entropy. The corresponding parameters are θ1, θ2, θ3, θ4, while θ0 represents the bias. The variable ε denotes the model error, and we assume that it follows the normal distribution with zero mean and σ2 variance. According to the characteristics of the normal distribution, the scene depth is also normally distributed with (θ0 + θ1f1 + θ2f2 + θ3f3 + θ4f4) mean and σ2 variance.
$d(x) = \theta_0 + \theta_1 f_1 + \theta_2 f_2 + \theta_3 f_3 + \theta_4 f_4 + \varepsilon(x).$
Subsequently, we leverage the MLE technique to determine the parameters that maximize the likelihood function [11], wherein the synthetic training dataset is prepared as follows. We utilize the 500IMG dataset [11] whose 500 constituent haze-free images are collected from free image-sharing services. Then, we employ the enhanced equidistribution [11] to create the random depth maps, which serve as the ground-truth references in the training dataset. We also draw the random atmospheric light—whose values range from 0.8 to 1—from the enhanced equidistribution. Given the scene depth, we use the following (3) to calculate the transmission map.
$t(x) = \exp(-\beta_{sc} d(x)),$
where $\beta_{sc}$ is the atmospheric scattering coefficient, normally set to one. Because the transmission map and atmospheric light are now available, we substitute these two into (1) to produce the hazy synthetic images, whose saturation, value, dark channel, and local entropy serve as the inputs in the training dataset.
We then apply the mini-batch gradient ascent algorithm [11] on the training dataset created above to estimate the parameters. The best estimates that we obtained are θ0 = −0.5570, θ1 = 1.5210, θ2 = 0.9042, θ3 = 0.7543, and θ4 = −0.3685. It is worth noting that this parameter estimation step is performed offline, so it does not affect the run-time of the proposed method.
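Putting (2) and (3) together with these estimates, a minimal Python sketch might look as follows; treating the feature maps as normalized to [0, 1] and clipping negative depths are assumptions on our part, and `haze_features` refers to the illustrative helper above.

```python
import numpy as np

# MLE parameter estimates reported above (theta_0 ... theta_4).
THETA = (-0.5570, 1.5210, 0.9042, 0.7543, -0.3685)

def estimate_transmission(f1, f2, f3, f4, beta_sc=1.0):
    th0, th1, th2, th3, th4 = THETA
    d = th0 + th1 * f1 + th2 * f2 + th3 * f3 + th4 * f4  # linear model (2)
    d = np.clip(d, 0.0, None)  # assumption: keep depth non-negative so t <= 1
    return np.exp(-beta_sc * d)  # transmission map (3)
```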
### C. Atmospheric Light Estimation
Researchers have usually adopted the atmospheric light estimation (ALE) method of He et al. [3], which locates the atmospheric light in the "most opaque" region. He et al. [3] defined this region as the pixels whose dark channel values are within the top 0.1%. Then, the pixel with the highest intensity in the red-green-blue color space was selected as the atmospheric light.
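In code terms, one common reading of this selection rule is the following sketch; `dark` refers to the dark-channel map from the earlier helper, and using the channel sum as the "intensity" is an assumption.

```python
import numpy as np

def atmospheric_light_he(img, dark):
    # Candidates: the brightest 0.1% of dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.argsort(dark.ravel())[-n:]
    candidates = img.reshape(-1, 3)[idx]
    # Pick the candidate with the highest overall intensity.
    return candidates[np.argmax(candidates.sum(axis=1))]
```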
In a different approach, Tarel and Hautière [12] assumed that the atmospheric light was pure white if the input image was correctly white-balanced. However, this ALE method, and even that of He et al. [3], is prone to incorrect estimation when the input image contains bright objects, such as white cars or light bulbs. The quad-decomposition algorithm proposed by Park et al. [13] is a good alternative. The input image is recursively partitioned into quarters based on the average luminance. This partition procedure can eliminate bright objects effectively because of their high contrast to the background. Nevertheless, as the partition requires many frame buffers, the quad-decomposition algorithm is inefficient in memory usage. Therefore, Ngo et al. [11] developed an approximated version that is free of frame buffers. In this study, we utilize this approximated quad-decomposition method to estimate the atmospheric light.
After that, we substitute the estimates of transmission map and atmospheric light into (1) to recover the scene radiance. Finally, we adopt the adaptive tone remapping method of Cho et al. [14] to post-process the recovered image.
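Rearranging (1) gives the recovery step itself; the lower bound on the transmission used below is a common safeguard against noise amplification, assumed here rather than taken from the paper.

```python
import numpy as np

def recover_radiance(I, A, t, t_min=0.1):
    # J(x) = (I(x) - A) / t(x) + A, per the Koschmieder model (1).
    t = np.clip(t, t_min, 1.0)[..., None]  # broadcast over the 3 color channels
    return np.clip((I - A) / t + A, 0.0, 1.0)
```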
### III. EVALUATION
This section compares the performance of the proposed method against four benchmark algorithms, including those proposed by Tarel and Hautière [12], Zhu et al. [6], Kim et al. [4], and Ngo et al. [7]. Henceforth, we refer to these four as Tarel, Zhu, Kim, and Ngo, respectively. For comparison, we employ three full-reference metrics: structural similarity (SSIM) [15], feature similarity extended to color images (FSIMc) [16], and tone-mapped image quality index (TMQI) [17]. These metrics take on values ranging from zero to unity, wherein higher values signify better performance. Also, we use two real datasets (I-HAZE [18] and O-HAZE [19]) that comprise 30 and 45 pairs of hazy and haze-free images, respectively. Table 1 shows the average SSIM, FSIMc, and TMQI scores on the I-HAZE and O-HAZE datasets, and the best results are displayed in bold. It can be observed that the proposed algorithm is the best performer under SSIM and FSIMc, regardless of whether input images are indoor or outdoor. Additionally, the performance gap between the proposed method and Zhu is easily noticeable, attributed to the use of two new haze-relevant features. The saturation, value, dark channel, and local entropy can compensate for one another, boosting performance when saturation and value fail to represent the haze distribution. So, in general, the proposed algorithm can be considered superior to the four benchmark algorithms.

Fig. 2 shows hazy images and corresponding dehazing results obtained from the four benchmark methods and the proposed algorithm. The first row shows the dehazing results of a hazy image from the IVC dataset [20], which consists of 25 real hazy images. This dataset was excluded from the quantitative evaluation because it does not contain ground-truth references. In the second and third rows, haze removal was performed on images from the I-HAZE and O-HAZE datasets, respectively. It can be observed that Tarel exhibits excellent performance, but color distortion arises in the sky region. Meanwhile, the results of Zhu hinder object recognition due to excessive haze removal. In the results of Kim, the performance is average, and color distortion also arises in the upper part of the IVC and O-HAZE images. Conversely, the results of Ngo are satisfactory without visually unpleasant distortion. However, in the IVC and I-HAZE images, the dehazing power is too strong, leading to the occurrence of black pixels, as witnessed in the dog's fur and the bottom of the sofa. Finally, the proposed method removes haze effectively and preserves the color of the dog's fur well. In addition, in the I-HAZE and O-HAZE images, the dehazing results are more satisfactory than those of the benchmark methods.
Table 1. Average structural similarity (SSIM), feature similarity extended to color images (FSIMc), and tone-mapped image quality index (TMQI) scores on I-HAZE and O-HAZE. The best results are displayed in bold.

| Method | SSIM (I-HAZE) | FSIMc (I-HAZE) | TMQI (I-HAZE) | SSIM (O-HAZE) | FSIMc (O-HAZE) | TMQI (O-HAZE) |
|---|---|---|---|---|---|---|
| Tarel | 0.7200 | 0.8055 | 0.7740 | 0.7263 | 0.7733 | 0.8416 |
| Zhu | 0.6864 | 0.8252 | 0.7512 | 0.6647 | 0.7738 | 0.8118 |
| Kim | 0.6424 | 0.7879 | 0.7026 | 0.4702 | 0.6869 | 0.6509 |
| Ngo | 0.7600 | 0.8482 | **0.7892** | 0.7322 | 0.8219 | **0.8935** |
| Proposed | **0.7642** | **0.8658** | 0.7878 | **0.7329** | **0.8920** | 0.8351 |
Figure 2. Qualitative comparison with other haze removal methods on the IVC, I-HAZE, and O-HAZE datasets.
### IV. IMPORTANCE OF HARDWARE IMPLEMENTATION
For an image processing algorithm to be deployed in real-world systems, it should handle image data at a minimum rate of 25 fps or greater, depending on whether the color encoding standard is PAL or NTSC [21]. Therefore, we conducted a run-time comparison between several haze removal algorithms and tabulated the results in Table 2. The simulation environment is MATLAB R2019a, running on a host computer with an Intel Core i9-9900K CPU, an NVIDIA TITAN RTX GPU, and 64 GB of RAM. It can be observed from Table 2 that none of the algorithms can handle images in real time. This finding suggests that hardware implementation is essential for coping well with the real-time processing requirement.
Table 2. Run-time comparison of haze removal algorithms (in seconds) for three image sizes.

| Method | 640 × 480 | 1024 × 768 | 4096 × 2160 |
|---|---|---|---|
| He | 12.64 | 32.37 | 470.21 |
| Tarel | 0.28 | 0.76 | 9.02 |
| Zhu | 0.22 | 0.55 | 6.39 |
| Kim | 0.16 | 0.43 | 4.81 |
| Ngo | 0.17 | 0.44 | 5.22 |
| Proposed | 0.93 | 2.32 | 26.95 |
Table 3. Hardware implementation result of the proposed hardware design (device: Xc7z045-2ffg900).

| Slice Logic Utilization | Available | Used | Utilization |
|---|---|---|---|
| Slice Register (#) | 437,200 | 64,918 | 14.85% |
| Slice LUT (#) | 218,600 | 58,126 | 26.59% |
| RAM36E1s | 545 | 58 | 10.64% |

Minimum Period: 3.67 ns
Maximum Frequency: 272.48 MHz

* The EDA tool was supported by the IC Design Education Center (IDEC), Korea.
### V. HARDWARE IMPLEMENTATION FOR REAL-TIME PROCESSING
Fig. 3 presents the hardware architecture of the proposed method, which can be partitioned into memories, logic circuits, and arithmetic circuits. Two 1024 × 32-bit SPRAMs and three 256 × 8-bit SPRAMs are used for the atmospheric light estimation [11] and adaptive tone remapping [14]. Other memories are used as line memories for 5 × 5 filtering operations, so the latency from input to output is seven image lines. In addition, the logic circuits consist of 10 modules. The system controller in the logic circuits is responsible for input-output operations on the image/video data. Saturation, value, dark channel, and local entropy are calculated in parallel in the 4-feature module. Furthermore, to improve the maximum frequency, we utilized split multipliers for large multiplications whose operands' word-lengths are greater than 16 bits.
Figure 3. Hardware architecture of the proposed haze removal algorithm.
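As a software analogy of the split-multiplier idea (a hypothetical sketch; the paper does not detail the hardware decomposition), a 32 × 32-bit product can be assembled from 16 × 16-bit partial products:

```python
def split_mul32(a, b):
    # Split each 32-bit operand into 16-bit halves; each partial product
    # then fits a 16 x 16 multiplier, which shortens the critical path.
    a_hi, a_lo = a >> 16, a & 0xFFFF
    b_hi, b_lo = b >> 16, b & 0xFFFF
    return (a_hi * b_hi << 32) + ((a_hi * b_lo + a_lo * b_hi) << 16) + a_lo * b_lo
```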
Table 3 summarizes the hardware implementation results in terms of slice registers, LUTs, RAM36E1s, and maximum frequency. Slice registers and LUTs represent the logic area, whereas RAM36E1s represent the memory area. The proposed design used 64,918 registers, 58,126 LUTs, and 58 RAM36E1s. The fastest attainable frequency was 272.48 MHz. This information can then be used to obtain the maximum processing speed (MPS):
$MPS = \frac{f_{max}}{(W + H_B) \cdot (H + V_B)},$
where fmax denotes the maximum frequency in Table 3; W and H denote the input image's width and height, respectively; and HB and VB denote the horizontal and vertical blank periods. The hardware was implemented with the minimum blank periods, corresponding to one pixel and one image line, to increase the MPS. With these values, the proposed design can process DCI 4K video at 30.8 fps, satisfying the real-time processing requirement of 25 fps or greater.
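As a quick sanity check of the MPS formula, assuming the minimal blanking described above (HB = 1 pixel, VB = 1 line):

```python
f_max = 272.48e6             # Hz, from Table 3
W, H, HB, VB = 4096, 2160, 1, 1
print(round(f_max / ((W + HB) * (H + VB)), 1))  # -> 30.8 fps for DCI 4K
```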
Fig. 4 depicts the C/C++ platform and verification board for the real-world execution. The top and middle thirds of Fig. 4 belong to the platform, whereas the bottom third depicts the system-on-a-chip (SoC) board. Moreover, the upper part of the platform shows side-by-side input-output data for ease of performance verification. The platform control panel is responsible for providing input data to the SoC board.
Figure 4. Hardware verification using a system-on-a-chip evaluation board.
Meanwhile, the algorithm control provides a convenient graphical user interface for configuring the hardware design running on the board. This C/C++ platform is a convenient means for verifying the real-time processing of the proposed hardware design.
### VI. CONCLUSION
A high-performance haze removal algorithm and its corresponding 4K-capable hardware accelerator were presented in this paper. We proposed using two new haze-relevant features (dark channel and local entropy) to estimate the transmission map, based on the observation that they can effectively compensate for the failures of the CAP. In addition, we adopted a frame-buffer-free version of the quad-decomposition algorithm to estimate the atmospheric light while reducing hardware resources. We then provided extensive experimental results to demonstrate the superiority of the proposed method over benchmark algorithms. We also conducted a run-time comparison to show that the software implementation per se was insufficient for real-time processing. Therefore, we presented a 4K-capable hardware design that can handle DCI 4K videos at 30.8 fps, rendering the proposed algorithm highly relevant for high-quality, high-speed real-time systems, such as autonomous cars and drones.
### ACKNOWLEDGMENTS
This research was funded by research funds from Dong-A University, Busan, Korea.
### References
1. Z. Lee and S. Shang, Visibility: How applicable is the century-old Koschmieder model?, Journal of the Atmospheric Sciences, vol. 73, no. 11, pp. 4573-4581, Nov., 2016. DOI: 10.1175/JAS-D-16-0102.1.
2. D. Ngo, S. Lee, T. M. Ngo, G.-D. Lee, and B. Kang, Visibility restoration: A systematic review and meta-analysis, Sensors, vol. 21, no. 8, p. 2625, Apr., 2021. DOI: 10.3390/s21082625.
3. K. He, J. Sun, and X. Tang, Single image haze removal using dark channel prior, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341-2353, Dec., 2011. DOI: 10.1109/TPAMI.2010.168.
4. G.-J. Kim, S. Lee, and B. Kang, Single image haze removal using hazy particle maps, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E101-A, no. 11, pp. 1999-2002, Nov., 2018. DOI: 10.1587/transfun.E101.A.1999.
5. D. Ngo, G.-D. Lee, and B. Kang, A 4K-capable FPGA implementation of single image haze removal using hazy particle maps, Applied Sciences, vol. 9, no. 17, p. 3443, Aug., 2019. DOI: 10.3390/app9173443.
6. Q. Zhu, J. Mai, and L. Shao, A fast single image haze removal algorithm using color attenuation prior, IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3522-3533, Nov., 2015. DOI: 10.1109/TIP.2015.2446191.
7. D. Ngo, G.-D. Lee, and B. Kang, Improved color attenuation prior for single-image haze removal, Applied Sciences, vol. 9, no. 19, p. 4011, Sep., 2019. DOI: 10.3390/app9194011.
8. B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, DehazeNet: An end-to-end system for single image haze removal, IEEE Transactions on Image Processing, vol. 25, no. 11, pp. 5187-5198, Nov., 2016. DOI: 10.1109/TIP.2016.2598681.
9. B. Li, W. Ren, D. Fu, D. Tao, D. Feng, W. Zeng, and Z. Wang, Benchmarking single-image dehazing and beyond, IEEE Transactions on Image Processing, vol. 28, no. 1, pp. 492-505, Jan., 2019. DOI: 10.1109/TIP.2018.2867951.
10. D. Ngo, G.-D. Lee, and B. Kang, Haziness degree evaluator: A knowledge-driven approach for haze density estimation, Sensors, vol. 21, no. 11, Jun., 2021. DOI: 10.3390/s21113896.
11. D. Ngo, S. Lee, G.-D. Lee, and B. Kang, Single-image visibility restoration: A machine learning approach and its 4K-capable hardware accelerator, Sensors, vol. 20, no. 20, p. 5795, Oct., 2020. DOI: 10.3390/s20205795.
12. J.-P. Tarel and N. Hautière, Fast visibility restoration from a single color or gray level image, in 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, pp. 2201-2208, 2009. DOI: 10.1109/ICCV.2009.5459251.
13. D. Park, H. Park, D. K. Han, and H. Ko, Single image dehazing with image entropy and information fidelity, in 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, pp. 4037-4041, 2014. DOI: 10.1109/ICIP.2014.7025820.
14. H. Cho, G.-J. Kim, K. Jang, S. Lee, and B. Kang, Color image enhancement based on adaptive nonlinear curves of luminance features, Journal of Semiconductor Technology and Science, vol. 15, no. 1, pp. 60-67, Feb., 2015. DOI: 10.5573/JSTS.2015.15.1.060.
15. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, Apr., 2004. DOI: 10.1109/TIP.2003.819861.
16. L. Zhang, L. Zhang, X. Mou, and D. Zhang, FSIM: A feature similarity index for image quality assessment, IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378-2386, Aug., 2011. DOI: 10.1109/TIP.2011.2109730.
17. H. Yeganeh and W. Zhou, Objective quality assessment of tone-mapped images, IEEE Transactions on Image Processing, vol. 22, no. 2, pp. 657-667, Feb., 2012. DOI: 10.1109/TIP.2012.2221725.
18. C. Ancuti, C. O. Ancuti, R. Timofte, and C. D. Vleeschouwer, I-HAZE: A dehazing benchmark with real hazy and haze-free indoor images, in Advanced Concepts for Intelligent Vision Systems, Poitiers, France, pp. 620-631, 2018. DOI: 10.1007/978-3-030-01449-0_52.
19. C. O. Ancuti, C. Ancuti, R. Timofte, and C. D. Vleeschouwer, O-HAZE: A dehazing benchmark with real hazy and haze-free outdoor images, in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, pp. 867-875, 2018. DOI: 10.1109/CVPRW.2018.00119.
20. K. Ma, W. Liu, and Z. Wang, Perceptual evaluation of single image dehazing algorithms, in 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, pp. 3600-3604, 2015. DOI: 10.1109/ICIP.2015.7351475.
21. K. Jack, Chapter 9: NTSC and PAL digital encoding and decoding, in Video Demystified, 4th ed., Elsevier India, pp. 394-471, 2004.
Sep 30, 2022 Vol.20 No.3, pp. 143~233 | 2022-12-03 15:44:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40828409790992737, "perplexity": 2573.0764595182563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710933.89/warc/CC-MAIN-20221203143925-20221203173925-00090.warc.gz"} |
https://tex.stackexchange.com/questions/416830/how-can-i-use-lettrine-with-parallel-or-should-i-use-something-else | # How can I use lettrine with parallel? Or should I use something else?
I'm trying to set two versions of a text side-by-side and have worked out how to do it using parallel.
I'd also like to use drop caps on the first line.
When I do, however, parallel seems to stop working for that paragraph. Here is the code I am trying, with it setting first correctly and then not:
\documentclass[11pt]{book}
\usepackage{parallel,lettrine}
\begin{document}
\begin{Parallel}{}{}
\ParallelLText{\noindent\emph{Wycliffe Bible}, 1382}
\ParallelRText{\noindent\emph{Green's Literal Translation}, 1993}
\ParallelPar
\ParallelLText{In the bigynnyng God made of nouyt heuene and erthe.}
\ParallelRText{In the beginning God created the heavens and the earth;}
\ParallelPar
\ParallelLText{$^{2}$Forsothe the erthe was idel and voide, and derknessis weren on the face of depthe; and the Spiryt of the Lord was borun on the watris.}
\ParallelRText{$^{2}$and the earth being without form and empty, and darkness on the face of the deep, and the Spirit of God moving gently on the face of the waters,}
\ParallelPar
\end{Parallel}
\vspace{2 cm}
\begin{Parallel}{}{}
\ParallelLText{\noindent\emph{Wycliffe Bible}, 1382}
\ParallelRText{\noindent\emph{Green's Literal Translation}, 1993}
\ParallelPar
\ParallelLText{\lettrine{I}{n} the bigynnyng God made of nouyt heuene and erthe.}
\ParallelRText{\lettrine{I}{n} the beginning God created the heavens and the earth;}
\ParallelPar
\ParallelLText{$^{2}$Forsothe the erthe was idel and voide, and derknessis weren on the face of depthe; and the Spiryt of the Lord was borun on the watris.}
\ParallelRText{$^{2}$and the earth being without form and empty, and darkness on the face of the deep, and the Spirit of God moving gently on the face of the waters,}
\ParallelPar
\end{Parallel}
\end{document}
And the result of this example:
Can anybody show me where I am going wrong?
\linewidth (used by \lettrine) does not have the correct value inside the Parallel environment.
\documentclass[11pt]{book}
\usepackage{parallel}
\usepackage{lettrine}
\begin{document}
\begin{Parallel}{}{}
\ParallelLText{\noindent\emph{Wycliffe Bible}, 1382}
\ParallelRText{\noindent\emph{Green's Literal Translation}, 1993}
\ParallelPar
\ParallelLText{In the bigynnyng God made of nouyt heuene and erthe.}
\ParallelRText{In the beginning God created the heavens and the earth;}
\ParallelPar
\ParallelLText{$^{2}$Forsothe the erthe was idel and voide, and derknessis weren on the face of depthe; and the Spiryt of the Lord was borun on the watris.}
\ParallelRText{$^{2}$and the earth being without form and empty, and darkness on the face of the deep, and the Spirit of God moving gently on the face of the waters,}
\ParallelPar
\end{Parallel}
\vspace{2 cm}
\begin{Parallel}{}{}
\ParallelLText{\noindent\emph{Wycliffe Bible}, 1382}
\ParallelRText{\noindent\emph{Green's Literal Translation}, 1993}
\ParallelPar
\ParallelLText{\setlength{\linewidth}{\hsize}\lettrine{I}{n} the bigynnyng God made of nouyt heuene and erthe. }
\ParallelRText{\setlength{\linewidth}{\hsize}\lettrine{I}{n} the beginning God created the heavens and the earth;}
\ParallelPar
\ParallelLText{$^{2}$Forsothe the erthe was idel and voide, and derknessis weren on the face of depthe; and the Spiryt of the Lord was borun on the watris.}
\ParallelRText{$^{2}$and the earth being without form and empty, and darkness on the face of the deep, and the Spirit of God moving gently on the face of the waters,}
\ParallelPar
\end{Parallel}
\end{document}
• Rather simpler than my fixing of the symptom. +1 – Chris H Feb 23 '18 at 14:09
• Definitely the easiest to implement, thank you - and I think it helps me best understand how I might overcome other problems if I encounter them later on. – Aidan Sproat-Clements Feb 23 '18 at 17:34
Something of a manual workaround, using minipages.
\documentclass[11pt]{book}
\usepackage{parallel,lettrine}
\begin{document}
\begin{Parallel}{}{}
\ParallelLText{\noindent\emph{Wycliffe Bible}, 1382}
\ParallelRText{\noindent\emph{Green's Literal Translation}, 1993}
\ParallelPar
\ParallelLText{In the bigynnyng God made of nouyt heuene and erthe.}
\ParallelRText{In the beginning God created the heavens and the earth;}
\ParallelPar
\ParallelLText{$^{2}$Forsothe the erthe was idel and voide, and derknessis weren on the face of depthe; and the Spiryt of the Lord was borun on the watris.}
\ParallelRText{$^{2}$and the earth being without form and empty, and darkness on the face of the deep, and the Spirit of God moving gently on the face of the waters,}
\ParallelPar
\end{Parallel}
\vspace{2 cm}
\begin{Parallel}{}{}
\ParallelLText{\noindent\emph{Wycliffe Bible}, 1382}
\ParallelRText{\noindent\emph{Green's Literal Translation}, 1993}
\ParallelPar
\ParallelLText{\noindent\begin{minipage}[t]{2.4in}\lettrine{I}{n}
the bigynnyng God made of nouyt heuene and erthe.\end{minipage}}
\ParallelRText{\noindent\begin{minipage}[t]{2.4in}\lettrine{I}{n}
the beginning God created the heavens and the earth;\end{minipage}}
\ParallelPar
\ParallelLText{$^{2}$Forsothe the erthe was idel and voide, and derknessis weren on the face of depthe; and the Spiryt of the Lord was borun on the watris.}
\ParallelRText{$^{2}$and the earth being without form and empty, and darkness on the face of the deep, and the Spirit of God moving gently on the face of the waters,}
\ParallelPar
\end{Parallel}
\end{document}
Edit: I figured it out
The lettrine manual has a hint of a workaround (p.4)
If a list has to be included in a paragraph starting with a ‘lettrine’, it is necessary to add the command \parshape=0 just after the end of the list.
Simply writing
\ParallelLText{\lettrine{I}{n} the bigynnyng God made of nouyt heuene and erthe.\parshape=0}
doesn't quite work, as the drop cap ends up in the margin/gutter:
This looks rather like a bug in lettrine, and nothing to do with parallel, in that adding parshape=0 puts the drop cap in the margin in trivial usage (and setting lhang is ignored with parshape=0):
\lettrine{T}{est} this is some long text that should wrap onto a second line. It will end with \texttt{parshape=0}. \parshape=0
\lettrine{T}{est} this is some long text that should wrap onto a second line. It \emph{does not} end with \texttt{parshape=0}.
Instead we create two new commands: \ParallelLTextL and \ParallelRTextL, which accept 3 arguments: the 2 for \lettrine and the rest of the parallel text. There's also a new length \LWidth used to hold the width of the drop cap.
\newlength{\LWidth}
\newcommand{\ParallelLTextL}[3]{\settowidth{\LWidth}{\LettrineFont{#1}}%
\ParallelLText{\lettrine{#1}{#2}#3\parshape=1 \LWidth \dimexpr\ParallelLWidth - \ParallelMainMidSkip\relax}%
}
\newcommand{\ParallelRTextL}[3]{\settowidth{\LWidth}{\LettrineFont{#1}}%
\ParallelRText{\lettrine{#1}{#2}#3\parshape=1 0pt \dimexpr\ParallelRWidth - \LWidth\relax}%
}
Which we use as follows:
\ParallelLTextL{I}{n}{ the bigynnyng God made of nouyt heuene and erthe.}
\ParallelRTextL{I}{n}{ the beginning God created the heavens and the earth;}
• BTW the case of \parshape=0 causing problems is fixed (for paragraph text) by instead issuing \parshape=1 0pt \linewidth. I haven't tested it for lists. – Chris H Feb 23 '18 at 14:08 | 2019-10-16 09:21:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24536220729351044, "perplexity": 6664.1224193643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986666959.47/warc/CC-MAIN-20191016090425-20191016113925-00430.warc.gz"} |
http://physics.stackexchange.com/tags/bose-einstein-condensate/hot?filter=year | # Tag Info
7
Short answer: Bosons all collapse to the ground state, since there is no restriction on the number of particles that can occupy a given state. You can assign the ground state an energy of $0$, or any other number really, since you can't really measure its energy, but only differences between energy levels. But it's customary to choose $0$ (the only time you need ...
6
The amount of heat added to the system is the integral of the specific heat wrt temperature: $$Q = \int C(T)dT$$ So in the link you give it's just the area under this graph: Although it's true that the specific heat tends to infinity at the lambda point it does so sufficiently suddenly that the area under the graph remains finite. That means the ...
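(For instance, a near-logarithmic divergence $C \sim -\ln|T - T_\lambda|$, close to what is measured at the lambda point, is integrable, so the area under the curve stays finite.)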
4
When the chemical potential is 0, the extra free energy needed to add or remove a particle from the system is 0 (i.e., $\mu=\frac{dA}{dN}=0$). So particles can leave and enter the system without changing the (free) energy. In a BEC all particles have condensed to the ground state of the system. Particles entering or leaving the system will be added to the ground ...
4
The scattering length is basically a crude measure of how much interaction there is, so if you have a cold atomic gas in a trap, and it starts to interact more, then naturally atoms get kicked out of the trap by these interactions. This is then detected by enhanced loss rates. Depending on the setup you can get very weakly bound states (for example Efimov ...
3
TL;DR: The Gross-Pitaevskii equation is only applicable for very weakly-interacting bosons. At $a=\infty$ the gas displays universal physics. Strictly speaking, the Gross-Pitaevskii equation (GPE) is only valid for $$na^3 \ll 1,$$ where $n$ is the density of particles and $a$ is the $s$-wave scattering length. As it is a mean-field theory, one has to look ...
3
At constant pressure the volume of an ideal gas is given by Charles' law: $$V \propto T$$ and this law tells us that when the temperature $T$ falls to zero the volume $V$ also becomes zero. But no gas is ideal and real gases show all sorts of non-ideal behaviour. For example real gases liquefy then solidify as the temperature falls. Real gases deviate ...
3
It seems to me that you just cannot tell the difference between a Bose condensate and nothing in this case. What will change if you add some photons or phonons with zero energy to the system? No characteristics of the system will change. So it seems to me we have no criterion to decide if there is a Bose condensate in this case, and what's more important, it ...
2
Since the non-interacting condensate is a pathological situation (it is not a superfluid), I will assume that by "traditional" you mean a (perhaps extremely) weakly interacting condensate. I will denote the repulsive interaction strength (the T-matrix) by $g>0$. For simplicity, I will describe the situation at very low temperatures. The elementary ...
2
In the context of ultracold Fermi gases, a BEC-BCS crossover means that by tuning the interaction strength (the s-wave scattering length), one goes from a BEC state to a BCS state without encountering a phase transition (thus the word "crossover"). It is also useful to know that the BEC state is a Bose-Einstein condensate of two-atom molecules, while the ...
2
Let us suppose the gas is confined by a harmonic potential. The bosons have, in three dimensions, energy levels $\hbar\omega(n+3/2)$ with degeneracy $(n+1)(n+2)/2$. The grand-canonical partition function of level $n$ is (without degeneracy) $$\xi_n=\sum_{p=0}^\infty \left(\mathrm e^{-\beta \hbar\omega(n+3/2)+\beta\mu(T)}\right)^p$$ where $p$ is the number ...
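(Summing this geometric series, which converges for $\mu < \tfrac{3}{2}\hbar\omega$, gives $\xi_n = \left(1-\mathrm e^{-\beta\hbar\omega(n+3/2)+\beta\mu}\right)^{-1}$.)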
2
When cooled to around 2.18K - the lambda point - liquid helium enters a superfluid phase. This is similar to a BEC, but remember that strictly speaking, BEC deals with bosons in the gas phase. In this case, since the helium is in the liquid phase, there are significant interactions between He atoms not present in the theory of a non-interacting gaseous BEC. ...
2
Some may exclude superfluid 3He from being a Bose-Einstein condensate because it obeys Fermi-Dirac statistics. However, this viewpoint is also not quite clear cut as the 3He form Cooper pairs which then condense. However, even those pairs do not obey Bose-Einstein statistics but nonetheless condense. Therefore this question is a little murky and Wikipedia ...
2
There are several ways to create Bose-Einstein condensates or systems that behave that way, there are ultracold atomic gases, solid state quasiparticles, and even photon condensates. Since you are obviously interested in ultracold atomic gases, I am going to cite Experimental methods of ultracold atomic physics by Kurn and Thywissen: The material must ...
1
"Classical particle", or atoms do condensate. At high temperature $T$, the atoms are far away in the so called gas phase. When the temperature decrease, they will undergo a phase transition and condense to liquid. At even lower temperature, it becomes solid and the atoms are closer together. All of these three phase has clear phase transition temperature. ...
1
There are several ways to destroy a Bose-Einstein condensate. The most common is temperature, which is why BECs are all low-temperature phenomena. For instance, helium becomes superfluid when a large fraction of the atoms enter the same quantum state, which happens around $\mathrm{2\,K = \frac16\,meV}/k$, so apparently the first excited state in fluid helium ...
1
I think the answer should be "no", as they are phenomena happening in two different sectors. That is, Bose-Einstein condensation involves the center-of-mass degrees of freedom of each atom. On the other hand, radioactive decay pertains to the internal interactions among constituent subatomic particles.
1
I think that classical particles won't occupy the same states, as if they are rigid like billiard balls, they can't occupy the same position. But if you drop this constraint and put them in a potential well, they probably would do something similar to a BEC (all occupy the lowest energy level as you drop temperature). However, the difference in a BEC ...
1
This is a very good question. It turns out that the phase transition occurs precisely when the chemical potential becomes equal to zero (assuming that the ground state energy is at zero). The order parameter in the BEC is the "macroscopic wave function" or rather the square root of the single-particle reduced density matrix. The broken symmetry is usually ...
1
The photons in the experiment are confined in a cavity, which gives effective mass to photons. With that, you could calculate the critical temperature as usual. Although it is non-trivial to distinguish between lasing and Bose-Einstein condensation, they claim that they see "thermalization" via a lot of absorption and emission events with dye molecules, ...
1
All it means is that the mathematics governing both the BEC and BH are similar. So if the BH math predicts Hawking radiation it should come as no surprise that an analog is seen in BECs. It says nothing about what might really happen in a BH.
1
In the field of multi-component condensates, the single mode approximation (SMA) means that different dipole states are assumed to share the same spatial wave function. Thus, there are no dipolar textures. SMA is well justified when the inter-component (e.g., spin-dependent or dipole) interactions are much weaker than the interactions independent of the ...
1
Let me try to formulate the question more precisely and then give my answer. The "phase" $\theta$ of a BEC is introduced as $\langle \psi_N|\hat{a}|\psi_{N+1}\rangle = |\psi|e^{i\theta}$, where $|\psi_N\rangle$ is the ground state being occupied by $N$ bosons. Because the occupation number of the condensed state $|\psi_N\rangle$ is of the order $N$, the ...
1
It means that all atoms are in the ground state; since potential energy is defined up to a constant, you can say the ground state has zero energy.
Only top voted, non community-wiki answers of a minimum length are eligible | 2015-08-02 12:24:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8283922076225281, "perplexity": 361.3859695688925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989043.35/warc/CC-MAIN-20150728002309-00203-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://gottwurfelt.com/2013/12/ | # 100 years of crosswords
It’s the hundredth anniversary of the publication of the first crossword – check out today’s Google Doodle.
On a related note, crosswords are possible in English (or other natural languages) because a large enough proportion of the possible strings of letters are actual words. I learned this from chapter 18 of Information Theory, Inference, and Learning Algorithms by David Mackay (which you can read online). (Chapter 19 is about why to have sex, from an information-theoretic point of view.) And Dr. Fill is a crossword-solving program by Matthew Ginsberg which did not win the 2012 American Crossword Puzzle Tournament.
This made the rounds last week: Substantiating Fears of Grade Inflation, Dean Says Median Grade at Harvard College Is A-, Most Common Grade Is A, from the Harvard Crimson.
Now, I agree that an A-minus is probably too high here. (Although Jordan Ellenberg says we shouldn’t worry about grade inflation.)
But does it really matter that the most common grade is an A? Consider, say, a situation where there is a "triangular" distribution of grades: 5 A, 4 B, 3 C, 2 D, and 1 F. The most common grade is an A, but the median is a B (and the mean is 2.67 on a 4.0 scale, a B-minus). If there are more grade categories the same thing happens – if we have a triangular distribution of grades such as this, the median grade is $1/\sqrt{2} \approx 0.71$ of the way up — about midway between a B-minus and a B on the 4.0 scale usual in the US. The mean grade would be $2/3 \approx 0.67$ of the way up the scale. More generally, say grades are in the interval [0, 1]. If grades are beta-distributed with parameters $\beta > 1$ and 1 (my triangular idea is just the Beta(2, 1) distribution) then the modal grade will be 1 but the mean and median will be a good bit lower, $\beta/(\beta+1)$ and $2^{-1/\beta}$ respectively.
(I’m not claiming that grades are beta-distributed, but that’s not a bad model for something that’s often thought of as being roughly normally distributed but has to be contained within an interval.)
Basically, modes don’t tell you much.
# This week’s best statistics joke
This week’s best statistics joke: median rent.
# State-to-state migration in the US
Here’s an interactive visualization showing state-by-state migrations within the US, by Chris Walker.
It’s not possible to reconstruct all migrations between states from this chart. The data are available in a spreadsheet that the American Community Survey (part of the Census Bureau) puts out.
In case you’re wondering, the (ordered) pair of states with the most movement is California to Texas. Tyler Cowen would have forecasted that, but it’s worth pointing out that this is hardly surprising as California and Texas are the states with the largest population. Relative to the population of the target state, Californians are most likely to move to Nevada, Washington, Arizona, and Oregon; Texans are most likely to move to Oklahoma, New Mexico, Louisiana, and Arkansas. For non-American readers, I just said “people are most likely to move to nearby states”, which is the sort of thing that it’s easy to lose track of in my position living in San Francisco and generally surrounded by transplants from far away.
If I could spare the time I’d try to visualize this – which pairs of states have greater flows between them than would be expected from their populations and the distance between them? The prototype here would probably be the flow from the northeastern states to Florida. | 2021-05-17 22:21:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40885141491889954, "perplexity": 1302.16468313289}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991870.70/warc/CC-MAIN-20210517211550-20210518001550-00056.warc.gz"} |
https://bincrafters.github.io/2017/11/10/Updated-Conan-Package-Flow/ | # Updated Conan Package Flow
With the latest version of Conan, Bincrafters had to re-think our common workflows for developing packages. We were a bit confused at first, and had to ask the Conan team for advice to get things streamlined. We wanted to share the current workflow with the community in case other packagers are also struggling to figure out the best flow with the updated command-line options.
## Update 2/27/2018
Much has changed since this post was made; for an updated perspective, please see this Updated Post.
Some of the syntax in this post is definitely outdated. However, it’s worth noting that the overall workflow described in this post is still largely applicable for packagers who are working with libraries/projects where they are maintaining the conanfile.py “in-source” (in the library project). The updated post is mostly focused on the workflow where the conanfile.py is maintained in a different repository (“out-of-source”) such as those packages maintained by Bincrafters.
## Biggest Change - Save conan create for last
Previously, when we reached the point where we thought a new recipe was ready to "try" creating, we would go straight to conan create and run that command repeatedly until we had things working. This is no longer the recommended approach, and there are some benefits to the new one. At a high level, the conan create command was doing all its work inside your local cache directories, which were a bit non-trivial to find and browse to. Also, a package that is still in the initial trial-and-error phase is really not fit to be stored in the local cache anyway.
## Testing a Recipe - Step by Step
So, the new workflow encourages users to do trial-and-error in a local sub-directory relative to their recipe, much like how developers typically test building their projects with other build tools. Also, the new strategy is to test the conanfile.py methods individually during this phase, which is something that was harder than it should have been in the past. Below are the commands listed in the order we use them now:
### conan source
Now, you will generally want to start off with the conan source command, for example:
$ conan source . --source-folder=tmp/source

The strategy here is that you're testing your source method in isolation, and downloading the files to a temporary sub-folder relative to conanfile.py. This just makes it easier to get to the sources and validate them. Once you've got your source method right, and it contains the files you expect, you can move on to testing the various attributes and methods relating to the downloading of dependencies.

### conan install

Conan has multiple methods and attributes which relate to dependencies (all the ones with the word require in the name). The command conan install activates all of them:

$ conan install . --install-folder=tmp/build [--profile XXXX]
This also generates conaninfo.txt and conanbuildinfo.xyz (extension depends on generator you’ve used) in the temp folder, which will be needed for the next step. Once you’ve got this command working with no errors, you can move on to testing the build() method.
### conan build
The build method takes a path to a folder that has sources (basically an “input” folder), and a path to a folder where it will perform the build (basically an “output” folder).
$ conan build . --source-folder=tmp/source --build-folder=tmp/build

This is pretty straightforward, but it does add a very helpful new shortcut for people who are packaging their own library. Now, developers can make changes in their normal source directory and just pass that path as the --source-folder.

### conan package

Just as it sounds, this CLI command now simply runs the package() method of a recipe. Like the conan build command, it basically takes "input" and "output" folders. In this case --build_folder and --package_folder:

$ conan package . --build_folder=tmp/build --package_folder=tmp/package
### conan create
Now we know we have all the steps of a recipe working. Thus, now is an appropriate time to try to run the recipe all the way through, and put it in the local cache.
$ conan create user/channel

### conan test

A final followup step in many workflows, after the package is created successfully, is to work on the test_package. There is often a need to repeatedly re-run the test, and so the conan test command exists. An example is shown below:

$ conan test test_package package/version@user/channel
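Putting the steps together, the whole local iteration can be driven by one small script. This is a sketch, not from the original post: the tmp/ layout, the package reference mypkg/0.1.0@demo/testing, and the user/channel values are placeholders, and the flag spellings simply mirror the commands above (which varied across early Conan releases):

```sh
#!/bin/sh
# Hypothetical end-to-end local test of a recipe, mirroring the steps above.
set -e
conan source . --source-folder=tmp/source
conan install . --install-folder=tmp/build
conan build . --source-folder=tmp/source --build-folder=tmp/build
conan package . --build_folder=tmp/build --package_folder=tmp/package
conan create demo/testing
conan test test_package mypkg/0.1.0@demo/testing
```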
## Summary
There were other Conan command changes which affect other workflows, but we only wanted to focus on what we felt to be the most common OSS workflow for authoring and testing packages for third-party libraries. We hope this helps, and if you have any shortcuts or tips (or we’ve documented something incorrectly here), please feel free to reach out to us on twitter, email, slack, or github.
2021-04-17 09:49:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26178932189941406, "perplexity": 2064.7682064908945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038118762.49/warc/CC-MAIN-20210417071833-20210417101833-00589.warc.gz"}
https://crad.ict.ac.cn/EN/Y2009/V46/I8/1241 | ISSN 1000-1239 CN 11-1777/TP
• Paper •
### A Survey of Network Information Content Audit
Sun Qindong1,2, Guan Xiaohong2, and Zhou Yadong2
1. (School of Computer Science and Engineering, Xi'an University of Technology, Xi'an 710048) 2. (Ministry-of-Education Key Laboratory for Intelligent and Network Security, Xi'an Jiaotong University, Xi'an 710049)
• Online:2009-08-15
Abstract: Nowadays, the large-scale spreading of pornographic and rumour content on the Internet has become a serious network security problem. The authors give a survey of the main network information content audit techniques, which typically sniff network packets at key points of the network and filter them to find content that violates security policy, and which can effectively prevent the spreading of harmful content. Working from global to local and from bottom to top, the key issues in content auditing are discussed. Firstly, current work on auditing models is introduced and the main deficiencies of existing models are presented. Secondly, the techniques for audit data capturing and load balancing are analyzed. Thirdly, the development of content analysis technologies is introduced, such as pattern matching, text semantics analysis, hot topic extraction, and malicious image recognition. Fourthly, methods for evaluating and predicting the content security situation, and technologies for online data processing and blocking, are discussed. Through analysis of open problems and issues, including streaming video content auditing, dynamic keyword updating, analysis of dynamic information flow characteristics, and the dynamic propagation process of network information, directions and suggestions for further research are proposed as concluding remarks. | 2021-08-04 00:55:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2354421317577362, "perplexity": 3138.6494061250296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154486.47/warc/CC-MAIN-20210803222541-20210804012541-00314.warc.gz"}
https://jeeneetqna.in/1720/sphere-rolling-without-slipping-maximum-travelled-inclined | # A solid sphere as shown is rolling without slipping. Find maximum length travelled on an inclined plane?
A solid sphere as shown is rolling without slipping. Find maximum length travelled on an inclined plane?
(1) $7v^2\over10g\sin\theta$
(2) $10v^2\over7g\sin\theta$
(3) $5v^2\over7g\sin\theta$
(4) $7v^2\over5g\sin\theta$
Rotational motion, rolling motion
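A worked check (added here; the original page preserves no answer): for rolling without slipping, $\omega = v/r$ and $I = {2\over5}mr^2$ for a solid sphere, so energy conservation along the incline gives

$${1\over2}mv^2 + {1\over2}I\omega^2 = mgL\sin\theta \;\Rightarrow\; {7\over10}mv^2 = mgL\sin\theta \;\Rightarrow\; L = {7v^2\over10g\sin\theta},$$

which is option (1).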
2021-04-20 15:42:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22968406975269318, "perplexity": 5206.7118389839825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039476006.77/warc/CC-MAIN-20210420152755-20210420182755-00628.warc.gz"}
https://jordanbell.info/blog/2023/03/01/bash-and-make.html | The Unix Workbench | Johns Hopkins University
The guessinggame.sh program should have the following behavior:
• When the program starts the user should be asked how many files are in the current directory, and then the user should be prompted for a guess.
• If the user’s answer is incorrect the user should be advised that their guess was either too low or too high and then they should be prompted to try to guess again.
• If the user’s guess is correct then they should be congratulated and the program should end.
• The program should not end until the user has entered the correct number of files in the current directory.
• The program should be able to be run by entering bash guessinggame.sh into the console.
• The program should contain at least one function, one loop, and one if statement.
• The program should be more than 20 lines of code but less than 50 lines of code.
The makefile should produce the README.md which should contain the following information:
• The title of the project.
• The date and time at which make was run.
• The number of lines of code contained in guessinggame.sh.
• The README.md should be produced entirely from the makefile and should not be edited by hand. (A possible makefile sketch is shown right after this list.)
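The makefile shown further below is truncated after its first rule in this copy of the post; one hypothetical way to meet the stated requirements (recipe lines must be tab-indented; the title text and target names are illustrative) is:

```make
.PHONY: all readme
all: readme

# Rebuild README.md on every run so the recorded date/time stays current.
readme: guessinggame.sh
	echo "# Guessing Game" > README.md
	echo "Generated on: $$(date)" >> README.md
	echo "Lines of code in guessinggame.sh: $$(wc -l < guessinggame.sh)" >> README.md
```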
https://github.com/jordanbell2357/bash-make-git-and-github
guessinggame.sh
#!/usr/bin/bash
#Filename: guessinggame.sh
numfiles=$(ls -1 | wc -l)

function user_guess {
    echo "Guess how many files are in the current directory:"
    read response
}

user_guess
while [[ $response -ne $numfiles ]]
do
    if [[ $response -gt $numfiles ]]
    then
        echo "Guess is too high"
    else
        echo "Guess is too low"
    fi
    user_guess
done
echo "Guess is correct. Congratulations!"
makefile
all: README.md | 2023-04-01 10:10:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4395809471607208, "perplexity": 1391.5115319498032}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949958.54/warc/CC-MAIN-20230401094611-20230401124611-00331.warc.gz"} |
http://mathoverflow.net/questions/77036/system-with-invariant-measure-but-no-ergodic-measure?sort=votes | # System with invariant measure, but no ergodic measure.
## Question
1. Examples of continuous transformations $T: X \to X$ such that the family of invariant probability measures $M(T)$ is NOT empty but there is no ergodic measure ($E(T) = \emptyset$).
Notice that the measures considered are defined over the Borel sets of $X$.
2. Example of a dynamical system where the following inequality is strict: $$\sup_{m \in E(T)} h_m(T) < \sup_{\mu \in M(T)} h_\mu(T)$$.
## Background
Consider $T(x) = x + 1$ over the set of integers $\mathbb{Z}$. In this case, $E(T) = M(T) = \emptyset$. The first question asks for a $\emptyset = E(T) \subsetneq M(T)$ example.
In the locally compact metrizable case, the set of positive invariant measures $\mu$ with $0 \leq \mu(X) \leq 1$ is compact in the weak* topology, and its extreme points have total measure equal to $0$ or $1$. That is, according to the Krein-Milman Theorem, if $M(T) \neq \emptyset$, then $E(T) \neq \emptyset$. So, an example answering Question 1 cannot be locally compact and metrizable.
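For context, the standard link between extreme points of $M(T)$ and ergodicity used here can be sketched as follows (this elaboration is an addition, not part of the original post): if $\mu \in M(T)$ is not ergodic, choose a $T$-invariant set $A$ with $0 < \mu(A) < 1$ and condition on $A$ and its complement,

$$\mu_A(B) = \frac{\mu(A \cap B)}{\mu(A)}, \qquad \mu_{A^c}(B) = \frac{\mu(A^c \cap B)}{\mu(A^c)};$$

both are again $T$-invariant, and

$$\mu = \mu(A)\,\mu_A + \mu(A^c)\,\mu_{A^c},$$

so $\mu$ is not an extreme point of the set of invariant probability measures. Hence extreme points, when they exist, are ergodic.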
[Edit: The question only makes sense if the $\sigma$-algebra is fixed. So the post was edited, making $X$ a topological space, $T$ continuous and the $\sigma$-algebra is the family of Borel sets.]
This post is related to mathoverflow.net/questions/76908/… – André Caldas Oct 3 '11 at 11:44
Are you interested in finite measures or infinite measures? What is the notion of entropy you are referring to in the infinite case? Anyways, usually at any question in ergodic theory (especially in entropy theory), one usually deals with "standard" probability spaces (and maybe even Lebesgue spaces). – Asaf Oct 3 '11 at 15:13
@Asaf, I am interested on probability measures. The entropy is the Kolmogorov-Sinai entropy. – André Caldas Oct 3 '11 at 15:46
What's wrong with using the Krein-Milman theorem in the general situation? – Jesse Peterson Oct 3 '11 at 15:49
I don't understand the problem here. Metrizability of $X$ doesn't enter in any of your arguments as long as you have local compactness. By its definition the set of positive measures is a weak$^∗$-closed cone in $M(X)$, and thus it cuts out a compact set out of the unit ball, so as soon as you have invariant measures you have invariant ergodic measures by Krein-Milman. – Theo Buehler Oct 4 '11 at 2:26
First, I'd like to point out that asymptotic density is an ergodic and $T$-invariant probability measure on the set of integers $\mathbb Z$ with $T(x) = x+1$.
@Daniel: I will correct the post to emphasize that the measure is over the Borel sets and the transformation is continuous. If you are free to choose the $\sigma$-algebra, then you can just take $\{\emptyset, X\}$. – André Caldas Oct 4 '11 at 1:54
Am I right in saying that no-one has actually answered either Q1 or Q2 yet? I'm particularly interested in the answer to Q1. (In fact, even ignoring a topology, I haven't managed to find anywhere the answer to the following basic question: Let $(X,\Sigma,\mu)$ be a probability space that is not a Lebesgue space, and let $T:X \to X$ be a $\mu$-preserving measurable map; does there necessarily exist a probability measure $\mu'$ on $(X,\Sigma)$ which is $T$-ergodic?) – Julian Newman Mar 28 at 0:49
@JulianNewman: It does not seem to me that Daniel defined his measure over the whole sigma algebra in his first attempt. In his second attempt, there is nothing that ensures the measure $\mu'$ is in fact ergodic. – André Caldas Jul 16 at 12:15
@DanielMansfield: Asymptotic density is not $\sigma$-additive: every singleton has density 0, and yet the union of all singletons (i.e. the whole space) has density 1. You are right that asymptotic density is an invariant finitely additive measure of the map $n \mapsto n+1$, but it is easy to show that this map has no invariant countably additive probability measures. (Indeed, this is an immediate consequence of the Poincaré recurrence theorem.) – Julian Newman Sep 22 at 18:49 | 2015-10-13 17:06:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9392417073249817, "perplexity": 231.5632011391881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738008122.86/warc/CC-MAIN-20151001222008-00233-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://www.transtutors.com/questions/calculate-selling-price-of-new-product-what-if-questions-breakeven-d-amp-r-corp-has--1361408.htm | # Calculate selling price of new product; what-if questions; breakeven D&R Corp. has annual...
D&R Corp. has annual revenues of $275,000, an average contribution margin ratio of 34%, and fixed expenses of $100,000.
Required:
a. Management is considering adding a new product to the company’s product line. The new item will have $8.25 of variable costs per unit. Calculate the selling price that will be required if this product is not to affect the average contribution margin ratio.
b. If the new product adds an additional $30,600 to D&R’s fixed expenses, how many units of the new product must be sold at the price calculated in part a to break even on the new product?
c. If 20,000 units of the new product could be sold at a price of $13.75 per unit, and the company’s other business did not change, calculate D&R’s total operating income and average contribution margin ratio.
d. Describe how the analysis of adding the new product would be complicated if it were to “steal” some volume from existing products. | 2019-02-22 16:09:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3410113453865051, "perplexity": 3169.2833752674255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247518497.90/warc/CC-MAIN-20190222155556-20190222181556-00602.warc.gz"} |
https://www.hamradiolicenseexam.com/feedback/page-5.htm | # What people are saying
Here's what people are saying:
• “This program is like having a personal tutor, who's always willing and waiting to work with you when you have the time. I can't see how anyone could be dissatisfied with this purchase. I just passed my General test today. I had been reading the book and finding some of it difficult to retain. I discovered that the study portion of HamTestOnLine helped me retain more information in a shorter period of time.” — Mike, KB1PVC
• “I just wanted to let you know that I passed the Element 4 exam this morning with a perfect score. I spent just 24 hours on your web site studying; that's just $2/hr amortized over the cost of the subscription. I could have used other on-line tests, but your program allowed, or should I say forced, me to learn the material that I didn't know rather than spending time on the things that I already knew. My time is valuable, more so than the very reasonable subscription rate, so overall I saved money by saving time. Again, thanks for your most efficient learning tool. As an engineer, I am always looking for ways to make things better, but was delighted to find that your program needs no improvements at all. My compliments, gentlemen.” — Eugene, K6ELC
• “Passed the Extra test with no problems. An excellent study system. The program is the only way to go.” — John, WB6VWG
• “Passed my General Test 32 out of 35 on 11/06 first time out! So MANY thanks to your wonderful Ops for the support and structure that were instrumental in my exam success. I will take a break for a few weeks, make MORE HF QSO's, and on to EXTRA!! This is the BEST $50 I have EVER spent on Ham Radio!!” — 73 de Norm, KE4GAH/AG
• “I subscribed to your online testing on Dec 16. After review and taking many practice tests, I felt confident and went for the element 3 General class test on Jan 11, 06. To say I passed would be an understatement. To say I blew it out of the water would be more like it. 33 correct out of 35. I believe I have never scored better on ANY test I have ever taken !!!!! I will most certainly come back to study for the extra class exam by using HamTestOnline.com !!!!! If only you could make learning the code this easy... Thanks a million, it was well worth the .” — Doug
• “I used HamTestOnline for 2 weeks and passed element 4 4/28/06. Great product that is well worth the money. The Extra portion of 20 just booms with DX, and its great that I can work those stations now. I wish I had found out about this website earlier. Thanks again for such a great system!” — Sean, N4SHM
• “Passed the Tech. test on 12/15/07. I'm studying for the Gen. test I hope to take soon.”
***UPDATE*** “I took the General test Monday night, 1/21/08, and passed. I could not have passed without HamTestOnline.” — Sid, KJ4BER
• “I took the Extra exam last Saturday, the 15th of January and passed!! The use of your product was indeed beneficial. After 35 years as an Advanced class, finally the Extra. See you on the low ends of the bands!! 73,” — Roger, WA7BOC
• “I am delighted to tell you that I took the Technician class test on Jan. 19. I passed it with a perfect score! I studied, using your website, only a few times. Your ‘site’ is put together with a lot of thought and in-sight. Studying was fun and easy!” — Elmer, KD0CTA
• “I passed my Extra class upgrade last night. I missed 3 questions, but got it done! I found your site really helpful. I'd studied the ARRL manual, but the quick rapid fire presentation you have and short information texts helped quite a bit. It helped narrow in on stuff I was weak on or didn't really understand. I'd rate your site 5/5! Keep up the good work. Amateur radio is fun stuff to learn.” — John, KC0ZDC/AE
• “Passed the Gen. Sat before last. You were of great help THANKS!!!” — Glen, KG6PRW
• “Thanks for maintaining this valuable resource for the Ham community. I passed the Amateur Extra exam on 07/16/06 in Radford, Va.” — All the best, Ted - KF4VCP
• “Thanks for having such a great practice test online! It made my Technician exam a breeze....91%.” — Jeff, KC0YSZ
• “Just wanted you to know I passed elem. 1 on Sat, 7/17. Thanks to your excellent program that allowed me to pass all 3 written elements when I first tested, I could concentrate more on code and am now an Extra!” — Norm K7NCR/AE
2019-03-24 05:44:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27314773201942444, "perplexity": 2771.0140225748937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203326.34/warc/CC-MAIN-20190324043400-20190324065400-00010.warc.gz"}
http://hal.in2p3.fr/in2p3-01514396 | # Observation of the B+→D∗−K+π+ decay
Abstract : The B+→D∗−K+π+ decay potentially provides an excellent way to investigate charm meson spectroscopy. The decay is searched for in a sample of proton-proton collision data collected with the LHCb detector at centre-of-mass energies of 7 and 8 TeV, corresponding to an integrated luminosity of 3 fb⁻¹. A clear signal is observed, and the ratio of its branching fraction to that of the B+→D∗−π+π+ normalisation channel is measured to be B(B+→D∗−K+π+)/B(B+→D∗−π+π+) = (6.39 ± 0.27 ± 0.48) × 10⁻², where the first uncertainty is statistical and the second is systematic. This is the first observation of the B+→D∗−K+π+ decay.
http://hal.in2p3.fr/in2p3-01514396
Contributor : Claudine Bombar
Submitted on : Wednesday, April 26, 2017 - 10:56:03 AM
Last modification on : Monday, June 3, 2019 - 1:38:03 PM
### Citation
R. Aaij, L. Beaucourt, M. Chefdeville, D. Decamp, N. Déléage, et al.. Observation of the B+→D∗−K+π+ decay. Physical Review D, American Physical Society, 2017, 96, pp.011101(R). ⟨10.1103/PhysRevD.96.011101⟩. ⟨in2p3-01514396⟩
2020-05-26 17:40:45 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8185457587242126, "perplexity": 5077.749499446155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391277.13/warc/CC-MAIN-20200526160400-20200526190400-00119.warc.gz"}
https://stacks.math.columbia.edu/tag/002H | Remark 4.14.4. We often write $\mathop{\mathrm{lim}}\nolimits _ i M_ i$, $\mathop{\mathrm{colim}}\nolimits _ i M_ i$, $\mathop{\mathrm{lim}}\nolimits _{i\in \mathcal{I}} M_ i$, or $\mathop{\mathrm{colim}}\nolimits _{i\in \mathcal{I}} M_ i$ instead of the versions indexed by $\mathcal{I}$. Using this notation, and using the description of limits and colimits of sets in Section 4.15 below, we can say the following. Let $M : \mathcal{I} \to \mathcal{C}$ be a diagram.
1. The object $\mathop{\mathrm{lim}}\nolimits _ i M_ i$ if it exists satisfies the following property
$\mathop{Mor}\nolimits _\mathcal {C}(W, \mathop{\mathrm{lim}}\nolimits _ i M_ i) = \mathop{\mathrm{lim}}\nolimits _ i \mathop{Mor}\nolimits _\mathcal {C}(W, M_ i)$
where the limit on the right takes place in the category of sets.
2. The object $\mathop{\mathrm{colim}}\nolimits _ i M_ i$ if it exists satisfies the following property
$\mathop{Mor}\nolimits _\mathcal {C}(\mathop{\mathrm{colim}}\nolimits _ i M_ i, W) = \mathop{\mathrm{lim}}\nolimits _{i\in \mathcal{I}^\text {opp}} \mathop{Mor}\nolimits _\mathcal {C}(M_ i, W)$
where on the right we have the limit over the opposite category with value in the category of sets.
By the Yoneda lemma (and its dual) this formula completely determines the limit, respectively the colimit.
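As a concrete illustration (an addition, not part of the original tag), take $\mathcal{I}$ to be the discrete category with two objects, so a diagram is just a pair of objects $(X, Y)$. Then the two formulas specialize to the universal properties of the product and the coproduct:

$\mathop{Mor}\nolimits _\mathcal {C}(W, X \times Y) = \mathop{Mor}\nolimits _\mathcal {C}(W, X) \times \mathop{Mor}\nolimits _\mathcal {C}(W, Y)$

and

$\mathop{Mor}\nolimits _\mathcal {C}(X \amalg Y, W) = \mathop{Mor}\nolimits _\mathcal {C}(X, W) \times \mathop{Mor}\nolimits _\mathcal {C}(Y, W).$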
2021-03-05 23:42:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9878273606300354, "perplexity": 421.3976096693316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178373761.80/warc/CC-MAIN-20210305214044-20210306004044-00413.warc.gz"}
http://www.hawaiilibrary.net/articles/eng/Fermi_energy |
# Fermi energy
The Fermi energy is a concept in quantum mechanics usually referring to the energy difference between the highest and lowest occupied single-particle states in a quantum system of non-interacting fermions at absolute zero temperature. In a Fermi gas the lowest occupied state is taken to have zero kinetic energy, whereas in a metal the lowest occupied state is typically taken to mean the bottom of the conduction band.
Confusingly, the term "Fermi energy" is often used to refer to a different but closely related concept, the Fermi level (also called electrochemical potential).[1] There are a few key differences between the Fermi level and Fermi energy, at least as they are used in this article:
• The Fermi energy is only defined at absolute zero, while the Fermi level is defined for any temperature.
• The Fermi energy is an energy difference (usually corresponding to a kinetic energy), whereas the Fermi level is a total energy level including kinetic energy and potential energy.
• The Fermi energy can only be defined for non-interacting fermions (where the potential energy or band edge is a static, well defined quantity), whereas the Fermi level (the electrochemical potential of an electron) remains well defined even in complex interacting systems, at thermodynamic equilibrium.
Since the Fermi level in a metal at absolute zero is the energy of the highest occupied single particle state, then the Fermi energy in a metal is the energy difference between the Fermi level and lowest occupied single-particle state, at zero-temperature.
## Contents
• Introduction 1
• Context 1.1
• Illustration of the concept for a one-dimensional square well 2
• Three-dimensional case 3
• Related quantities 4
• Arbitrary-dimensional case 5
• Typical Fermi energies 6
• Metals 6.1
• White dwarfs 6.2
• Nucleus 6.3
• References 8
## Introduction
### Context
In quantum mechanics, a group of particles known as fermions (for example, electrons, protons and neutrons) obey the Pauli exclusion principle. This states that two fermions cannot occupy the same quantum state. Since an idealized non-interacting Fermi gas can be analyzed in terms of single-particle stationary states, we can thus say that two fermions cannot occupy the same stationary state. These stationary states will typically be distinct in energy. To find the ground state of the whole system, we start with an empty system, and add particles one at a time, consecutively filling up the unoccupied stationary states with the lowest energy. When all the particles have been put in, the Fermi energy is the kinetic energy of the highest occupied state.
What this means is that even if we have extracted all possible energy from a Fermi gas by cooling it to near absolute zero temperature, the fermions are still moving around at a high speed. The fastest ones are moving at a velocity corresponding to a kinetic energy equal to the Fermi energy. This is the Fermi velocity. Only when the temperature exceeds the Fermi temperature do the electrons begin to move significantly faster than at absolute zero.
The Fermi energy is one of the important concepts in the solid state physics of metals and superconductors. It is also a very important quantity in the physics of quantum liquids like low temperature helium (both normal and superfluid 3He), and it is quite important to nuclear physics and to understand the stability of white dwarf stars against gravitational collapse.
The Fermi energy (EF) of a system of non-interacting fermions is the increase in the ground state energy when exactly one particle is added to the system, minus the potential energy of that particle. It can also be interpreted as the maximum kinetic energy of an individual fermion in this ground state. The internal chemical potential at zero temperature is equal to the Fermi energy.
## Illustration of the concept for a one-dimensional square well
The one-dimensional infinite square well of length L is a model for a one-dimensional box. It is a standard model-system in quantum mechanics for which the solution for a single particle is well known. The levels are labeled by a single quantum number n and the energies are given by
$$E_n = E_0 + \frac{\hbar^2 \pi^2}{2 m L^2} n^2,$$
where $E_0$ is the potential energy level inside the box.
Suppose now that instead of one particle in this box we have N particles in the box and that these particles are fermions with spin 1/2. Then not more than two particles can have the same energy, i.e., two particles can have the energy of $E_1$, two other particles can have energy $E_2$ and so forth. The reason that two particles can have the same energy is that a particle can have a spin of 1/2 (spin up) or a spin of −1/2 (spin down), leading to two states for each energy level. In the configuration for which the total energy is lowest (the ground state), all the energy levels up to $n = N/2$ are occupied and all the higher levels are empty.
Defining the reference for the Fermi energy to be $E_0$, the Fermi energy is therefore given by
$$E_F = E_{N/2} - E_0 = \frac{\hbar^2 \pi^2}{2 m L^2} \left(\frac{N}{2}\right)^2$$
for an even number of electrons $N$; for an odd number of electrons, the highest occupied level is $E_{(N+1)/2}$.
## Three-dimensional case
The three-dimensional isotropic case is known as the Fermi sphere.
Let us now consider a three-dimensional cubical box that has a side length L (see infinite square well). This turns out to be a very good approximation for describing electrons in a metal. The states are now labeled by three quantum numbers nx, ny, and nz. The single particle energies are
$$E_{n_x,n_y,n_z} = E_0 + \frac{\hbar^2 \pi^2}{2m L^2} \left( n_x^2 + n_y^2 + n_z^2\right),$$
where $n_x$, $n_y$, $n_z$ are positive integers. There are multiple states with the same energy, for example $E_{211}=E_{121}=E_{112}$. Now let's put $N$ non-interacting fermions of spin 1/2 into this box. To calculate the Fermi energy, we look at the case where $N$ is large.
If we introduce a vector $\vec{n}=(n_x,n_y,n_z)$, then each quantum state corresponds to a point in 'n-space' with energy
$$E_{\vec{n}} = E_0 + \frac{\hbar^2 \pi^2}{2m L^2} |\vec{n}|^2,$$
with $|\vec{n}|^2$ denoting the square of the usual Euclidean length, $n_x^2+n_y^2+n_z^2$. The number of states with energy less than $E_F + E_0$ is equal to the number of states that lie within a sphere of radius $|\vec{n}_F|$ in the region of n-space where $n_x$, $n_y$, $n_z$ are positive. In the ground state this number equals the number of fermions in the system.
$$N = 2\times\frac{1}{8}\times\frac{4}{3} \pi n_F^3$$
[Figure: The free fermions that occupy the lowest energy states form a sphere in momentum space. The surface of this sphere is the Fermi surface.]
The factor of two is once again because there are two spin states; the factor of 1/8 is because only 1/8 of the sphere lies in the region where all $n$ are positive. We find
$$n_F=\left(\frac{3 N}{\pi}\right)^{1/3}$$
so the Fermi energy is given by
$$E_F = \frac{\hbar^2 \pi^2}{2m L^2} n_F^2 = \frac{\hbar^2 \pi^2}{2m L^2} \left( \frac{3 N}{\pi} \right)^{2/3}$$
which results in a relationship between the Fermi energy and the number of particles per volume (when we replace $L^2$ with $V^{2/3}$):
$$E_F = \frac{\hbar^2}{2m} \left( \frac{3 \pi^2 N}{V} \right)^{2/3}$$
The total energy of a Fermi sphere of N fermions is given by
$$E_t = N E_0 + \int_0^N E_F \, dN^\prime = \left(\frac{3}{5} E_F + E_0\right)N$$
Therefore, the average energy of an electron is given by:
$$E_\mathrm{av} = E_0 + \frac{3}{5} E_F$$
## Related quantities
Using this definition of Fermi energy, various related quantities can be useful. The Fermi temperature is defined as
$$T_F = \frac{E_F}{k_B},$$
where $k_B$ is the Boltzmann constant and $E_F$ the Fermi energy. The Fermi temperature can be thought of as the temperature at which thermal effects are comparable to quantum effects associated with Fermi statistics.[2] The Fermi temperature for a metal is a couple of orders of magnitude above room temperature.
Other quantities defined in this context are Fermi momentum and Fermi velocity:
$$p_F = \sqrt{2 m_e E_F}, \qquad v_F = \frac{p_F}{m_e},$$
where $m_e$ is the mass of the electron. These quantities are the momentum and group velocity, respectively, of a fermion at the Fermi surface. The Fermi momentum can also be described as $p_F = \hbar k_F$, where $k_F$ is the radius of the Fermi sphere and is called the Fermi wave vector.[3]
These quantities are not well-defined in cases where the Fermi surface is non-spherical. In the case of the quadratic dispersion relations given above, they are given by[4]
$$p_F = \hbar \left(3\pi^2 \frac{N}{V}\right)^{1/3}, \qquad v_F = \frac{\hbar}{m_e} \left(3\pi^2 \frac{N}{V}\right)^{1/3}.$$
## Arbitrary-dimensional case
Using a volume integral in $d$ dimensions, we can find the density of states:
$$g(E)=2\int\frac{d^d\vec{k}}{(2\pi)^d/V}\,\delta\left(E-E_0-\frac{\hbar^2\vec{k}^2}{2m}\right)=V\,\frac{d\, m^{d/2}(E-E_0)^{d/2-1}}{(2\pi)^{d/2}\ \Gamma(d/2+1)\,\hbar^d}$$
By then counting the number of particles, $n=\int_{E_0}^{E_0+E_F}g(E) \, dE$, we can extract the Fermi energy:
$$E_F=\frac{2\pi\hbar^2}{m}\left(\tfrac{1}{2}\Gamma\left(\tfrac{d}{2}+1\right)n\right)^{2/d}$$
## Typical Fermi energies
### Metals
The number density $N/V$ of conduction electrons in metals ranges between approximately $10^{28}$ and $10^{29}$ electrons/m$^3$, which is also the typical density of atoms in ordinary solid matter. This number density produces a Fermi energy of the order:
$$E_F = \frac{\hbar^2}{2m_e} \left( 3 \pi^2 \times 10^{28 \sim 29} \ \mathrm{m}^{-3} \right)^{2/3} \approx 2 \sim 10 \ \mathrm{eV}$$
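As a quick numerical sanity check of this formula (an illustrative addition; the density value is an assumption, roughly that of copper), a short Python sketch:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # joules per electronvolt

n = 8.5e28               # assumed free-electron density, m^-3 (about copper)
E_F = (hbar**2 / (2.0 * m_e)) * (3.0 * math.pi**2 * n) ** (2.0 / 3.0)
print(f"E_F = {E_F / eV:.2f} eV")  # about 7 eV, inside the 2-10 eV range quoted above
```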
### White dwarfs
Stars known as white dwarfs have mass comparable to our Sun, but have about a hundredth of its radius. The high densities mean that the electrons are no longer bound to single nuclei and instead form a degenerate electron gas. The number density of electrons in a white dwarf is of the order of $10^{36}$ electrons/m$^3$. This means their Fermi energy is:
$$E_F = \frac{\hbar^2}{2m_e} \left( \frac{3 \pi^2 \times 10^{36}}{1 \ \mathrm{m}^3} \right)^{2/3} \approx 3 \times 10^5 \ \mathrm{eV} = 0.3 \ \mathrm{MeV}$$
### Nucleus
Another typical example is that of the particles in a nucleus of an atom. The radius of the nucleus is roughly:
$$R = \left(1.25 \times 10^{-15}\ \mathrm{m} \right) \times A^{1/3}$$
where A is the number of nucleons.
The number density of nucleons in a nucleus is therefore:
$$n = \frac{A}{\frac{4}{3} \pi R^3 } \approx 1.2 \times 10^{44} \ \mathrm{m}^{-3}$$
Now since the Fermi energy only applies to fermions of the same type, one must divide this density in two. This is because the presence of neutrons does not affect the Fermi energy of the protons in the nucleus, and vice versa.
So the Fermi energy of a nucleus is about:
$$E_F = \frac{\hbar^2}{2m_p} \left( \frac{3 \pi^2 \times (6 \times 10^{43})}{1 \ \mathrm{m}^3} \right)^{2/3} \approx 3 \times 10^7 \ \mathrm{eV} = 30 \ \mathrm{MeV}$$
The radius of the nucleus admits deviations around the value mentioned above, so a typical value for the Fermi energy is usually given as 38 MeV.
## See also
• Fermi–Dirac statistics: the distribution of electrons over stationary states for non-interacting fermions at non-zero temperature.
## References
1. ^ The use of the term "Fermi energy" as synonymous with Fermi level (a.k.a. electrochemical potential) is widespread in semiconductor physics. For example: Electronics (fundamentals And Applications) by D. Chattopadhyay, Semiconductor Physics and Applications by Balkanski and Wallis.
2. ^ "Introduction to Quantum Statistical Thermodyamics" (PDF). Utah State University Physics. Retrieved 23 April 2014.
3. ^ Ashcroft, Neil W.; Mermin, N. David (1976). Solid State Physics.
4. ^ Fermi level and Fermi function, from HyperPhysics
• Kroemer, Herbert; Kittel, Charles (1980). Thermal Physics (2nd ed.). W. H. Freeman Company.
• Table of Fermi energies, velocities, and temperatures for various elements.
2020-02-19 11:30:54 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8913547992706299, "perplexity": 775.1789051488548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144111.17/warc/CC-MAIN-20200219092153-20200219122153-00507.warc.gz"}
https://wiki.seg.org/wiki/Instantaneous_AGC | # Instantaneous AGC
Series: Investigations in Geophysics. Author: Öz Yilmaz. DOI: http://dx.doi.org/10.1190/1.9781560801580. ISBN 978-1-56080-094-1. SEG Online Store.
Instantaneous AGC is one of the most common gain types used. This gain function is computed as follows. First, the mean absolute value of trace amplitudes is computed within a specified time gate. Second, the ratio of the desired rms level to this mean value is assigned as the value of the gain function. Unlike the rms amplitude AGC, this value is assigned to any desired time sample of the gain function within the time gate, say the nth sample of the trace, rather than to the sample at the center of the gate. The next step is to move the time gate one sample down the trace and compute the value of the gain function for the (n + 1)th time sample, and so on. No interpolation is therefore required to define this gain function. Hence, the scaling function g(t) at the nth time sample is given by
${\displaystyle g(t)={\frac {\text{desired rms}}{{\frac {1}{N}}\sum \nolimits _{i=1}^{N}{\left|{x_{i}}\right|}}},}$ (11)
where xi is the trace amplitude and N is the number of samples within the gate.
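To make the sliding-gate computation concrete, here is a minimal Python sketch of equation (11); the function name, the edge handling (shrinking the gate near the trace ends), and the desired_rms default are illustrative choices, not from the original text:

```python
import numpy as np

def instantaneous_agc(trace, gate_samples, desired_rms=1.0, eps=1e-12):
    """Scale each sample by desired_rms / mean(|x|) over a sliding gate (eq. 11)."""
    half = gate_samples // 2
    out = np.empty(len(trace), dtype=float)
    for n in range(len(trace)):
        lo = max(0, n - half)              # gate shrinks at the trace edges
        hi = min(len(trace), n + half + 1)
        mean_abs = np.mean(np.abs(trace[lo:hi]))
        out[n] = trace[n] * desired_rms / (mean_abs + eps)  # eps avoids divide-by-zero
    return out
```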
Figure 1.4-11 A portion of a CMP stack before and after application of five different instantaneous AGC functions. The numbers on top indicate gain window sizes in milliseconds used in computing the AGC gain function described by equation (11).
Figure 1.4-11 shows the ungained data and four instantaneous AGC-gained sections. Gate lengths are indicated on top of each panel. Very small time gates cause a significant loss of signal character by boosting zones that contain small amplitudes. This occurs with the 64-ms AGC output. In processing, this is called a fast AGC. At the other extreme, if a large time gate is selected, then the effectiveness of the AGC process is lessened. In practice, AGC time gates commonly are specified between 200 and 500 ms.
https://mathoverflow.net/questions/336373/proj-construction-in-derived-algebraic-geometry | # Proj construction in derived algebraic geometry
The question
My question is easy to state:
Is there a Proj construction in derived geometry, that produces a derived stack from a “graded derived algebra”?
Given the vagueness of the question, you’re free to interpret derived geometry in your favourite model for affines: (E-infinity/simplicial/dg)-algebras etc.
Perhaps there’s some well-known answer. If not, below I’ll write about why I’m confused about existence of such a notion.
The struggle
A first obstacle (maybe only to me, as I am dumb) is to formulate a notion of derived graded algebras. This seems possible to do directly. It’s also maybe plausible that we can characterise such graded affine schemes as derived affine schemes that receive an action by the multiplicative group scheme, viewed as a 0-truncated derived scheme. I have not thought either of these through though, but for now let’s assume some notion of graded derived rings exist.
At this point we can use a functor-of-points approach to define a notion of Proj: either in terms of maps into invertible projective modules, or as the quotient prestack by the $$G_m$$ action. It's not immediate (at least to me) that in the derived setting these approaches produce equivalent results.
A more major obstacle is that in the classical case, the way we show the functor of points for Proj is representable by a scheme is by constructing an explicit model for it by gluing affines along open subschemes. If we try to reproduce this argument in the infinity-categorical setting, these gluing diagrams require an infinite amount of coherence data. This is much like the example outlined in the introduction to DAG-XIV. Can we circumvent this by appealing to the Lurie-Artin theorem?
• If you regard a graded algebra as a $\mathbb{G}_m$-equivariant algebra, then $\mathbf{Proj} A=[((\mathrm{Spec} A)\setminus \{0\})/\mathbb{G}_m]$ (quotient stack). The same formula works when $A$ is derived, and stack quotients by $\mathbb{G}_m$ are defined in terms of $\mathbb{G}_m$-torsors, which are equivalent to line bundles. Jul 18, 2019 at 8:00
• Building on Jon Pridham's comment above, non-negatively graded algebras are exactly the same thing as affine schemes $\mathrm{Spec}\,A$ with an action of the multiplicative monoid scheme $\mathbb{A}^1$. Then $\mathrm{Proj}\,A=\left(\mathrm{Spec}\,A\smallsetminus 0\cdot \mathrm{Spec}\,A\right)/\mathbb{G}_m$ (where $0:\mathrm{Spec}\,A\to\mathrm{Spec}\,A$ is just the multiplication by 0, which has closed image since $\{0\}\subseteq\mathbb{A}^1$ is closed). Jul 18, 2019 at 9:15
It is instructive to look at the simplest case of Proj: that of a free module, i.e. the projective space. Lurie works these out for us quite carefully in his Spectral Algebraic Geometry tome.
Projective spaces in SAG
1. Projective space by gluing: In Section SAG.5.4, more specifically Construction SAG.5.4.1.3, he constructs a spectral algebraic scheme $$\mathbf P^n_S$$, following the classical gluing construction of homogeneous coordinates. It looks a little bit more funky due to $$\infty$$-categorical descent requiring specification of what happends on arbitrary intersections, as opposed to the classically story where double intersections, with a compatibility on triple ones, suffice (this is the infinite coherence data for gluing alluded to in the question).
This projective space $$\mathbf P^n_S$$ is flat, base-changes along $$S\to R$$ to usual projective spaces we know and love over an ordinary ring $$R$$, and possess a "good theory" of Serre twisting sheaves $$\mathscr O(n)$$, e.g. Serre's calculation in FAC of their cohomology still holds.
Its drawbacks: $$\mathbf P^n_S$$ is not smooth over $$\operatorname{Spec} S$$ (more precisely, it is fiber-smooth and is not differentially smooth), and it does not satisfy the expected universal property in terms of line bundles (for other than classical schemes).
2. Projective space by universal property: On the other hand, Subsection SAG.19.2.6 sees Lurie apply the Artin Representability Theorem to obtain the smooth projective space $$\mathbf P^n_{\mathrm{sm}}$$, that satisfies the expected universal property: a map of spectral schemes $$X\to \mathbf P^n_{\mathrm{sm}}$$ corresponds to a line bundle $$\mathscr L$$ on $$X$$ together with a map of quasi-coherent sheaves $$\mathscr L\to\mathscr O_X^{n+1}$$, which exhibits the splitting $$\mathscr O^{n+1}_X\simeq \mathscr L\oplus\mathscr Q$$.
To check the requirements of Artin representability, Lurie uses the already-constructed $$\mathbf P^n_S$$, however the two spectral schemes do not coincide. The smooth projective space is smooth over $$\operatorname{Spec} S$$ (i.e. differentially smooth), but it is not flat. And while $$\mathscr O(-1)$$ is the universal bundle on $$\mathbf P^n_{\mathrm{sm}}$$, the cohomology of it and its twists is not controlled by Serre's computation anymore.
3. Summary: In the world of SAG there are two notions of the projective space, each satisfying some of the nice properties of projective spaces in classical AG.
All that said, that really has nothing to do with projective spaces, but instead with affine ones: it is known that SAG admits two inequivalent notions of the affine space $$\mathbf A^n$$, one of which $$\mathbf A^n_{\mathrm{sm}}$$ is (differentially) smooth, and the other of which $$\mathbf A^n_S$$ is flat. The two projective spaces just correspond to using each of the two variants of affine space to build a projective one.
It is explained in Lurie's thesis how requiring the two affine lines to coincide produces DAG from SAG, hence the projective space in derived algebraic geometry (e.g. built out of simplicial commutative rings, as opposed to $$\mathbb E_\infty$$-rings) will be as nice as you expect.
The question asks for a good notion of a graded derived $$R$$-algebra, and the suggestion in the comments was to just define them as affine derived $$R$$-schemes with a $$\mathbf G_m$$-action. That works, but it is also possible to imitate the usual classical definition of graded rings:
1. Classical definition of graded rings: A graded derived $$R$$-algebra is a lax symmetric monoidal functor $$A:\mathbf Z\to \mathrm{Mod}_R$$, where the LHS is the discrete category indexed by the integers (no non-identity morphisms) with the symmetric monoidal operation given by addition, and the RHS carries the symmetric monoidal structure of the derived relative tensor product $$\otimes_R$$. If we denote by $$A_n$$ the value of the functor $$A$$ on the object $$n\in \mathbf Z$$, then the colimit $$\varinjlim A\simeq \bigoplus_{n\in \mathbf Z} A_n$$ is the underlying commutative $$R$$-algebra. The lax symmetric monoidality translates to the usual definition of a commutative graded ring: the map $$R\to A$$, picking out the unit $$1\in \pi_0A$$, factors through the inclusion $$A_0\to A$$, and the multiplication on $$A$$ takes $$A_m\otimes_R A_n\to A_{m+n}$$. Phrasing things as an $$\infty$$-categorical functor just brings all the necessary homotopy-coherence along for the ride.
If you wanted non-negatively graded derived $$R$$-algebras, you could require that $$A_n\simeq 0$$ for all $$n < 0$$, but that amounts to the same thing as a lax symmetric monoidal functor $$\mathbf Z_{\ge 0}\to\mathrm{Mod}_R$$, everything else same as before.
The relationship with the group scheme $$\mathbf G_m$$ and the monoid scheme $$\mathbf A^1$$, alluded to in the comments to the question, come from the fact that $$\mathbf G_m = \operatorname{Spec} (R[\mathbf Z])$$ and $$\mathbf A^1 = \operatorname{Spec} (R[\mathbf Z_{\ge 0}])$$.
2. Two options for $$\mathbb E_\infty$$-rings: This also gives another perspective on what "goes wrong" in SAG to produce two versions of Proj. There are two notions of a polynomial algebra over an $$\mathbb E_\infty$$-ring $$R$$: the free $$R$$-algebra $$R\{t\}=\operatorname{Sym}^*_R(R)\simeq R[\coprod_n B\Sigma_n]$$ and the polynomial $$R$$-algebra $$R[t] = R[\mathbf Z_{\ge 0}] = R[\coprod_n \mathrm{pt}].$$ Here $$\coprod_n B\Sigma_n$$, also known as the nerve of the category of finite sets with bijections, is the free $$\mathbb E_\infty$$-space, while its path-connected components $$\mathbf Z_{\ge 0}$$ only form the free commutative monoid. This leads to the two different affine lines $$\mathbf A^1_{\mathrm {sm}}$$ and $$\mathbf A^1_S$$, the difference between $$\operatorname{GL}_1$$ and $$\mathbf G_m$$ over $$\operatorname{Spec} S$$, and finally to the two projective spaces. So while $$\mathbf P^n_{\mathrm{sm}}$$ has a universal property in terms of line bundles, i.e. $$\operatorname{GL}_1$$-torsors, the corresponding universal property for $$\mathbf P^n_S$$ would be about $$\mathbf G_m$$-torsors. Conversely, as the gluing construction for $$\mathbf P^n_S$$ starts from flat affine spaces $$\mathbf A^n_S$$, so would the one for $$\mathbf P^n_{\mathrm{sm}}$$ start from the smooth affine spaces $$\mathbf A^n_{\mathrm{sm}}$$. As quotient stacks, we have $$\mathbf P^n_S\simeq (\mathbf A^n_S-\{0\})/\mathbf G_m$$ and $$\mathbf P^n_{\mathrm{sm}}\simeq (\mathbf A^n_{\mathrm{sm}}-\{0\})/\operatorname{GL_1}$$.
3. Proj: Following the above discussion, you can develop two notions of Proj in the SAG setting (both of which will coincide in the DAG setting), depending on what kind of grading you feed in, of which the two projective spaces will be examples. Either could be defined equivalently via a gluing construction (specifying the classical base-affines via homogeneous localization in the classical construction of Proj) or via quotienting by the $$\mathbf G_m$$- or $$\mathrm{GL_1}$$-action respectively.
The two constructions will agree under the usual condition on the graded derived ring $$A$$ (phrased purely on $$\pi_0A$$ as we expect) that elements in graded degree $$1$$ generate the irrelevant ideal. Note that this is not terribly restrictive: even the EGA only really works with Proj in this setting.
Just to sketch where the equivalence is coming from: fixing some generators $$x_1, \ldots, x_n$$ for the irrelevant ideal $$\pi_0(A)^+$$ in degree $$1$$, determines an open cover $$\coprod_j \operatorname{Spec}(A[x_j^{-1}])\to \operatorname{Spec} A - V(A_+)$$ of the complement of the closed subsheme of $$\operatorname{Spec} A$$ cut out by the irrelevant ideal $$A_+ =\bigoplus_{n >0} A_n$$. Since all the derived rings in sight are graded, this covering map is $$\mathbf G_m$$-(or $$\mathrm{GL_1}$$- resp.)equivariant, and passes to a cover of the quotient. Identifying the quotient of $$\operatorname{Spec}\big(A[x_j^{-1}]\big)$$ by $$\mathbf G_m$$ with the spectrum of the $$0$$-th graded part $$(A[x_j^{-1}])_0$$ (which goes by the name homogeneous localization in classical AG), we recover the "gluing construction" of $$\operatorname{Proj}A$$ as the Cech nerve of the open cover.
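To make the final step concrete (an added illustration, not from the original answer): for the polynomial algebra $$A = R[x_0, \dots, x_n]$$ with each $$x_j$$ placed in degree $$1$$, the homogeneous localizations recover the standard affine charts,

$$\bigl(A[x_j^{-1}]\bigr)_0 \simeq R\bigl[\tfrac{x_0}{x_j}, \dots, \tfrac{x_n}{x_j}\bigr],$$

and the Cech nerve of $$\coprod_j \operatorname{Spec}\bigl(A[x_j^{-1}]\bigr)_0$$ glues to $$\operatorname{Proj} A = \mathbf P^n$$.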
• Thanks! Do you have a reference for the last paragraph? – Bbb Jul 19, 2019 at 2:32
• I'm afraid I don't have a reference. Sorry! Jul 19, 2019 at 4:29 | 2023-03-30 05:13:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 91, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9336286187171936, "perplexity": 321.3500359743187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949097.61/warc/CC-MAIN-20230330035241-20230330065241-00059.warc.gz"} |
https://dml.cz/handle/10338.dmlcz/143955 | # Article
Keywords:
complex two-plane Grassmannians; Hopf hypersurface; $\mathfrak D^{\bot }$-invariant hypersurface; commuting shape operator; Reeb vector field
Summary:
Lee, Kim and Suh (2012) gave a characterization for real hypersurfaces $M$ of Type ${\rm (A)}$ in complex two-plane Grassmannians $G_2({\mathbb C}^{m+2})$ with a commuting condition between the shape operator $A$ and the structure tensors $\phi$ and $\phi _{1}$ for $M$ in $G_2({\mathbb C}^{m+2})$. Motivated by this geometrical notion, in this paper we consider a new commuting condition in relation to the shape operator $A$ and a new operator $\phi \phi _{1}$ induced by the two structure tensors $\phi$ and $\phi _{1}$. That is, this commuting shape operator is given by $\phi \phi _{1} A = A \phi \phi _{1}$. Using this condition, we prove that $M$ is locally congruent to a tube of radius $r$ over a totally geodesic $G_2({\mathbb C}^{m+1})$ in $G_2({\mathbb C}^{m+2})$.
References:
[1] Alekseevskij, D. V.: Compact quaternion spaces. Funkts. Anal. Prilozh. 2 (1968), 11-20 Russian. MR 0231314 | Zbl 0175.19001
[2] Berndt, J.: Riemannian geometry of complex two-plane Grassmannians. Rend. Semin. Mat., Torino 55 (1997), 19-83. MR 1626089 | Zbl 0909.53038
[3] Berndt, J., Suh, Y. J.: Real hypersurfaces in complex two-plane Grassmannians. Monatsh. Math. 127 (1999), 1-14. DOI 10.1007/s006050050018 | MR 1666307 | Zbl 0920.53016
[4] Berndt, J., Suh, Y. J.: Real hypersurfaces with isometric Reeb flow in complex two-plane Grassmannians. Monatsh. Math. 137 (2002), 87-98. DOI 10.1007/s00605-001-0494-4 | MR 1937621 | Zbl 1015.53034
[5] Jeong, I., Lee, H. J., Suh, Y. J.: Anti-commuting real hypersurfaces in complex two-plane Grassmannians. Bull. Aust. Math. Soc. 78 (2008), 199-210. DOI 10.1017/S0004972708000609 | MR 2466859 | Zbl 1154.53031
[6] Kobayashi, S., Nomizu, K.: Foundations of Differential Geometry I. Interscience Publishers, a division of John Wiley and Sons New York (1963). MR 0152974 | Zbl 0119.37502
[7] Kobayashi, S., Nomizu, K.: Foundations of Differential Geometry Vol. II. Interscience Tracts in Pure and Applied Mathematics No. 15, Vol. II Interscience Publishers, a division of John Wiley and Sons, New York (1969). MR 0238225 | Zbl 0175.48504
[8] Lee, H., Kim, S., Suh, Y. J.: Real hypersurfaces in complex two-plane Grassmannians with certain commuting condition. Czech. Math. J. 62 (2012), 849-861. DOI 10.1007/s10587-012-0049-y | MR 2984638 | Zbl 1260.53097
[9] Lee, H., Suh, Y. J.: Real hypersurfaces of type $B$ in complex two-plane Grassmannians related to the Reeb vector. Bull. Korean Math. Soc. 47 (2010), 551-561. DOI 10.4134/BKMS.2010.47.3.551 | MR 2666376 | Zbl 1206.53064
[10] Pérez, J. D., Jeong, I., Suh, Y. J.: Real hypersurfaces in complex two-plane Grassmannians with commuting normal Jacobi operator. Acta Math. Hung. 117 (2007), 201-217. DOI 10.1007/s10474-007-6091-9 | MR 2361601 | Zbl 1220.53070
[11] Pérez, J. D., Suh, Y. J.: The Ricci tensor of real hypersurfaces in complex two-plane Grassmannians. J. Korean Math. Soc. 44 (2007), 211-235. DOI 10.4134/JKMS.2007.44.1.211 | MR 2283469 | Zbl 1156.53034
[12] Pérez, J. D., Suh, Y. J., Watanabe, Y.: Generalized Einstein real hypersurfaces in complex two-plane Grassmannians. J. Geom. Phys. 60 (2010), 1806-1818. DOI 10.1016/j.geomphys.2010.06.017 | MR 2679423 | Zbl 1197.53071
[13] Suh, Y. J.: Real hypersurfaces in complex two-plane Grassmannians with commuting shape operator. Bull. Aust. Math. Soc. 68 (2003), 379-393. DOI 10.1017/S0004972700037795 | MR 2027682 | Zbl 1058.53046
[14] Suh, Y. J.: Real hypersurfaces in complex two-plane Grassmannians with harmonic curvature. J. Math. Pures Appl. 100 (2013), 16-33. DOI 10.1016/j.matpur.2012.10.010 | MR 3057300 | Zbl 1279.53052
[15] Suh, Y. J.: Real hypersurfaces in complex two-plane Grassmannians with parallel Ricci tensor. Proc. Roy. Soc. Edinburgh Sect. A 142 (2012), 1309-1324. MR 3002598 | Zbl 1293.53071
[16] Suh, Y. J.: Real hypersurfaces in complex two-plane Grassmannians with $\xi$-invariant Ricci tensor. J. Geom. Phys. 61 (2011), 808-814. DOI 10.1016/j.geomphys.2010.12.010 | MR 2765405 | Zbl 1209.53046
[17] Suh, Y. J.: Real hypersurfaces in complex two-plane Grassmannians with Reeb parallel Ricci tensor. J. Geom. Phys. 64 (2013), 1-11. DOI 10.1016/j.geomphys.2012.10.005 | MR 3004010 | Zbl 1259.53052
[18] Suh, Y. J.: Real hypersurfaces of Type $B$ in complex two-plane Grassmannians. Monatsh. Math. 147 (2006), 337-355. DOI 10.1007/s00605-005-0329-9 | MR 2215841 | Zbl 1094.53050
http://mathhelpforum.com/differential-geometry/137886-banach-algebra-example.html | # Math Help - Banach algebra example
1. ## Banach algebra example
Construct a Banach algebra $\mathcal{B}$ and a unital subalgebra $\mathcal{A}$ such that $\sigma_{\mathcal{A}}(x)\neq\sigma_{\mathcal{B}}(x)$ for some $x\in\mathcal{A}$.
2. ## different spectra in diff alg.
Let $A = A(D)$, the disk algebra. That is, $D = \{z \in C : |z|\le 1\}$ and $A$ is the algebra of all functions $f: D \to C$ which are continuous on $D$ and analytic on the interior of $D$. Let $\Gamma = \{z \in C : |z|= 1\}$ and $B = C(\Gamma)$, the algebra of all continuous functions on $\Gamma$. With the sup norm, both of them are Banach algebras. Also, the elements of $A(D)$ can be realized as elements of $C(\Gamma)$ by restricting them to $\Gamma$; by the maximum modulus principle this restriction is isometric, so in this way $A(D)$ is a Banach subalgebra of $C(\Gamma)$. Now let $f : D \to C$ be defined by $f(z) = z$; then the spectrum of $f$ computed in $A$ is $D$ and computed in $B$ is $\Gamma$. I hope this answers your question.
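(A quick justification of the last claim, added for clarity: in $B = C(\Gamma)$ the element $\lambda - f$ is invertible exactly when $\lambda \notin \Gamma$, since the pointwise inverse $1/(\lambda - z)$ is then continuous on $\Gamma$; in $A = A(D)$ the inverse must in addition extend analytically to the interior of $D$, which fails exactly when $\lambda \in D$. Hence $\sigma_B(f) = \Gamma$ while $\sigma_A(f) = D$, so the unital subalgebra $A \subset B$ has the required property.)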
https://gmatclub.com/forum/if-a-school-cafeteria-needs-c-cans-of-soup-each-week-for-each-student-270019.html | GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 10 Dec 2018, 19:31
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
## Events & Promotions
###### Events & Promotions in December
PrevNext
SuMoTuWeThFrSa
2526272829301
2345678
9101112131415
16171819202122
23242526272829
303112345
Open Detailed Calendar
• ### Free lesson on number properties
December 10, 2018
December 10, 2018
10:00 PM PST
11:00 PM PST
Practice the one most important Quant section - Integer properties, and rapidly improve your skills.
• ### Free GMAT Prep Hour
December 11, 2018
December 11, 2018
09:00 PM EST
10:00 PM EST
Strategies and techniques for approaching featured GMAT topics. December 11 at 9 PM EST.
# If a school cafeteria needs c cans of soup each week for each student,
Math Expert
Joined: 02 Sep 2009
Posts: 51072
If a school cafeteria needs c cans of soup each week for each student, [#permalink]
08 Jul 2018, 22:11
If a school cafeteria needs c cans of soup each week for each student, and if there are s students in the school, for how many weeks will x cans of soup last?
A. $$csx$$
B. $$\frac{xs}{c}$$
C. $$\frac{s}{cx}$$
D. $$\frac{x}{cs}$$
E. $$\frac{cx}{s}$$
Intern
Joined: 23 Jul 2017
Posts: 5
Re: If a school cafeteria needs c cans of soup each week for each student, [#permalink]
09 Jul 2018, 01:02
No. of soup cans per student per week = c
Hence total no. of soup cans per week = cs
No. of soup cans available = x
By dividing no. of soup cans by the requirement, we can find no. of weeks.
Therefore, the answer is x/cs, i.e. option D
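As a quick sanity check with made-up numbers: if c = 2 cans per student per week and s = 10 students, the cafeteria uses cs = 20 cans a week, so x = 100 cans last 100/20 = 5 weeks, which is exactly x/(cs) as in option D.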
##### General Discussion
e-GMAT Representative
Joined: 04 Jan 2015
Posts: 2269
Re: If a school cafeteria needs c cans of soup each week for each student, [#permalink]
08 Jul 2018, 22:22
Bunuel wrote:
If a school cafeteria needs c cans of soup each week for each student, and if there are students in the school, for how many weeks will x cans of soup last?
Hey Bunuel,
The number indicating the total students is missing. Most probably it will be s students. Can you please check once?
Math Expert
Joined: 02 Sep 2009
Posts: 51072
Re: If a school cafeteria needs c cans of soup each week for each student, [#permalink]
08 Jul 2018, 22:23
EgmatQuantExpert wrote:
Bunuel wrote:
If a school cafeteria needs c cans of soup each week for each student, and if there are students in the school, for how many weeks will x cans of soup last?
Hey Bunuel,
The number indicating the total students is missing. Most probably it will be s students. Can you please check once?
___________________
Yes. Edited. Thank you.
MBA Section Director
Affiliations: GMATClub
Joined: 22 May 2017
Posts: 1361
Concentration: Nonprofit
GPA: 4
WE: Engineering (Computer Software)
If a school cafeteria needs c cans of soup each week for each student, [#permalink]
08 Jul 2018, 22:26
Number of cans of soup each student needs for one week is 'c'
Number of students is 's'
Number of cans of soup required for one week is 'c*s'
Number of cans of soup required for 'w' weeks is 'w*c*s'
but we have 'x' cans of soup
=> x = wcs => w = $$\frac{x}{cs}$$
Hence option D
Director
Status: Learning stage
Joined: 01 Oct 2017
Posts: 931
WE: Supply Chain Management (Energy and Utilities)
Re: If a school cafeteria needs c cans of soup each week for each student, [#permalink]
08 Jul 2018, 22:38
Bunuel wrote:
If a school cafeteria needs c cans of soup each week for each student, and if there are s students in the school, for how many weeks will x cans of soup last?
A. $$csx$$
B. $$\frac{xs}{c}$$
C. $$\frac{s}{cx}$$
D. $$\frac{x}{cs}$$
E. $$\frac{cx}{s}$$
No of weeks x cans will last= Total no of cans available/No of cans needed per week for 's' students
=$$\frac{x}{cs}$$
Ans. (D)
https://cs.stackexchange.com/questions/115066/minimum-time-to-assemble-multipart-object | # Minimum time to assemble multipart object
I've been asked to do this task in an online assessment. I've passed, so my solution is supposedly correct, but I am unable to prove it. The task is:
Given a set of parts (array of integer part sizes), worker must put them all together. Parts are assembled in pairs. To put together two parts of sizes A and B worker needs A+B minutes. The resulting part's size is also A+B. Write a program to determine the minimum time required to put together given set of parts.
My solution was:
import heapq

def min_assembly_time(parts):
    h = list(parts)
    heapq.heapify(h)                # min-heap of part sizes
    time = 0
    while len(h) > 1:
        v1 = heapq.heappop(h)       # take the two smallest parts
        v2 = heapq.heappop(h)
        time += v1 + v2             # assembly time
        heapq.heappush(h, v1 + v2)  # adding the new combined part
    return time
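For example (a hand-check, not from the original post): for parts [1, 2, 3] the heap combines 1+2 first (3 minutes), then 3+3 (6 minutes), so min_assembly_time([1, 2, 3]) returns 9.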
This solution passed all tests.
Question:
Does this solution produce the correct minimum time, and if yes, how can I prove it?
https://dsp.stackexchange.com/tags/amplitude/new | # Tag Info
From a slightly less "dsp-like" point of view, slightly more geometric / time series, but this also works: The relation between the sinusoid (of amplitude 1) and the unit circle is well known. Instead of thinking of a moving average as a geometric mean on a window that slides from left to right over the time series, you could also define it as the ...
This would be trivial to do as simple decimation where each block increments its samples by $n+4$ with each block starting at sample 0,1,2,3 respectively. This is common with polyphase filter implementation and similar techniques to reduce the overall clock rate requirement for the processing (parallel processing). For more details on both of those ...
Below is the analytic result for both the actual max value of $0.901243$ and the maximum value found by the OP of $0.898464$ The reason you are not getting the predicted maximum is your samples of the sine wave are not located exactly at the peak. This is clear if you zoom in on the plot and compare the two peak locations for the number of samples given (as ...
Puzzle solved, thanks to Cedron Dawg and Dan Boschen! First, I ran a simple N point moving average of a sinewave, using the simulation model below: I used the OP's values: N = 10, P = 40, sinewave amplitude = 1 and a simulation step size, $\Delta t$, equal to unity. The results, shown in the next figure, are the same as those of the OP: The maximum ...
Okay, this takes a bit of algebra, Euler's formula, and the geometric series summation formula, and some plugging and chugging, but here is how you can calculate it directly: \begin{aligned} x[m] &= \frac{1}{n}\sum_{k=0}^{n-1} A \cos \left( (m-k) \frac{2\pi}{p} + \phi \right) \\ &= \frac{1}{n}\sum_{k=0}^{n-1} A \left[ \frac{e^{i\left( (m-k) \...
The amplitude reduction is simply given as the magnitude of the transfer function of the moving average filter. A moving average filter has a rectangular impulse response, so the transfer function will be a $sinc()$ function. You need to sample the $sinc()$ function at the frequency of your sine wave.
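To make that concrete, here is a small sketch (the function name movingAverageGain is mine; the formula is the standard magnitude of an N-point moving average, an aliased sinc / Dirichlet kernel):

#include <cmath>
#include <cstdio>

// Magnitude response of an N-point moving average at normalized
// frequency f in cycles per sample (the aliased sinc / Dirichlet kernel).
double movingAverageGain(int N, double f)
{
    const double PI = 3.14159265358979323846;
    if (f == 0.0) return 1.0;
    return std::fabs(std::sin(PI * f * N) / (N * std::sin(PI * f)));
}

int main()
{
    // N = 10 samples, sine period P = 40 samples, so f = 1/40
    std::printf("%f\n", movingAverageGain(10, 1.0 / 40.0)); // about 0.901243
}

For N = 10 and P = 40 this evaluates to about 0.901243, matching the analytic maximum quoted in the earlier answer.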
For a sampled audio waveform (at least destined for human ears), any DC will typically be removed acoustically, electrically and digitally, and what you are left with is a nominally symmetric waveform (speech can have some asymmetry) fluctuating between +A and -A. For «loudness» you want to take the absolute value or square for a power estimate and do some ...
What do these values actually mean? Are they arbitrary? Yes, pretty much: they will just be the scale of numbers your system works with. In floating point systems, we often see that samples get normalized to [-1,+1], whereas in fixed-point system, it's often things like [-2⁻¹⁵,+2¹⁵-1], depending on the bit width of the samples to begin with. So, this is ...
http://math.stackexchange.com/questions/75746/what-does-matrix-rank-k-to-precision-epsilon-mean | # What does matrix rank $k$ to precision $\epsilon$ mean?
Suppose that the matrix $A_{ij}$ of dimension $n_i \times n_j$ has rank $k$ to precision $\epsilon$, then there exists a factorization of $A_{ij}$ of the form: $A_{ij} = L_i S_{ij} R_j + \text{O}(\epsilon)$.
I wonder what matrix rank $k$ to precision $\epsilon$ means.
Thank you.
Where did you read this? – Chris Eagle Oct 25 '11 at 14:17
It must mean that there's a rank-$k$ matrix within a distance of $\epsilon$ from $A$, for some appropriate (but unidentified) norm on the space of $n_i\times n_j$ matrices. I wonder what $k$ has to do with the conclusion of the claim, though. – Henning Makholm Oct 25 '11 at 14:19
@Henning: the "unidentified" norm is often the 2-norm, especially in the case of diagnosing badly-behaved least-squares problems and other problems that necessitate the use of orthogonal matrices for decompositions. – J. M. Oct 25 '11 at 14:37
@ChrisEagle This is a restatement of Theorem 3 in ON THE COMPRESSION OF LOW RANK MATRICES by H.Cheng. You could get it through Google Scholar search. – Yao Jin Oct 25 '11 at 16:59
Rank to precision $\epsilon$ means that in computing the rank of the matrix, we consider every singular value of the matrix that is less than $\epsilon$ as zero.
This is also known as "numerical rank": the number of singular values greater than $\epsilon$. | 2014-03-07 20:15:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.872801661491394, "perplexity": 409.26180471993774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999650775/warc/CC-MAIN-20140305060730-00035-ip-10-183-142-35.ec2.internal.warc.gz"} |
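A concrete way to compute this (my sketch, not from the answer; it assumes the Eigen C++ library, and the function name is made up):

#include <Eigen/Dense>

// Numerical rank of A: the number of singular values exceeding eps.
int numericalRank(const Eigen::MatrixXd& A, double eps)
{
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A); // singular values only; U and V are not requested
    return static_cast<int>((svd.singularValues().array() > eps).count());
}

The choice of eps is up to the application; the 2-norm convention mentioned in the comments corresponds exactly to thresholding singular values like this.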
https://huggingface.co/transformers/model_doc/fsmt.html | # FSMT¶
DISCLAIMER: If you see something strange, file a Github Issue and assign @stas00.
## Overview
FSMT (FairSeq MachineTranslation) models were introduced in Facebook FAIR’s WMT19 News Translation Task Submission by Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov.
The abstract of the paper is the following:
This paper describes Facebook FAIR’s submission to the WMT19 shared news translation task. We participate in two language pairs and four language directions, English <-> German and English <-> Russian. Following our submission from last year, our baseline systems are large BPE-based transformer models trained with the Fairseq sequence modeling toolkit which rely on sampled back-translations. This year we experiment with different bitext data filtering schemes, as well as with adding filtered back-translated data. We also ensemble and fine-tune our models on domain-specific data, then decode using noisy channel model reranking. Our submissions are ranked first in all four directions of the human evaluation campaign. On En->De, our system significantly outperforms other systems as well as human translations. This system improves upon our WMT’18 submission by 4.5 BLEU points.
The original code can be found here: https://github.com/pytorch/fairseq/tree/master/examples/wmt19.
## FSMTConfig
class transformers.FSMTConfig(langs=['en', 'de'], src_vocab_size=42024, tgt_vocab_size=42024, activation_function='relu', d_model=1024, max_length=200, max_position_embeddings=1024, encoder_ffn_dim=4096, encoder_layers=12, encoder_attention_heads=16, encoder_layerdrop=0.0, decoder_ffn_dim=4096, decoder_layers=12, decoder_attention_heads=16, decoder_layerdrop=0.0, attention_dropout=0.0, dropout=0.1, activation_dropout=0.0, init_std=0.02, decoder_start_token_id=2, is_encoder_decoder=True, scale_embedding=True, tie_word_embeddings=False, num_beams=5, length_penalty=1.0, early_stopping=False, use_cache=True, pad_token_id=1, bos_token_id=0, eos_token_id=2, **common_kwargs)[source]
This is the configuration class to store the configuration of a FSMTModel. It is used to instantiate a FSMT model according to the specified arguments, defining the model architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Parameters
• langs (List[str]) – A list with source language and target_language (e.g., [‘en’, ‘ru’]).
• src_vocab_size (int) – Vocabulary size of the encoder. Defines the number of different tokens that can be represented by the inputs_ids passed to the forward method in the encoder.
• tgt_vocab_size (int) – Vocabulary size of the decoder. Defines the number of different tokens that can be represented by the inputs_ids passed to the forward method in the decoder.
• d_model (int, optional, defaults to 1024) – Dimensionality of the layers and the pooler layer.
• encoder_layers (int, optional, defaults to 12) – Number of encoder layers.
• decoder_layers (int, optional, defaults to 12) – Number of decoder layers.
• encoder_attention_heads (int, optional, defaults to 16) – Number of attention heads for each attention layer in the Transformer encoder.
• decoder_attention_heads (int, optional, defaults to 16) – Number of attention heads for each attention layer in the Transformer decoder.
• decoder_ffn_dim (int, optional, defaults to 4096) – Dimensionality of the “intermediate” (often named feed-forward) layer in decoder.
• encoder_ffn_dim (int, optional, defaults to 4096) – Dimensionality of the “intermediate” (often named feed-forward) layer in encoder.
• activation_function (str or Callable, optional, defaults to "relu") – The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
• dropout (float, optional, defaults to 0.1) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
• attention_dropout (float, optional, defaults to 0.0) – The dropout ratio for the attention probabilities.
• activation_dropout (float, optional, defaults to 0.0) – The dropout ratio for activations inside the fully connected layer.
• max_position_embeddings (int, optional, defaults to 1024) – The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
• init_std (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
• scale_embedding (bool, optional, defaults to True) – Scale embeddings by dividing by sqrt(d_model).
• bos_token_id (int, optional, defaults to 0) – Beginning of stream token id.
• pad_token_id (int, optional, defaults to 1) – Padding token id.
• eos_token_id (int, optional, defaults to 2) – End of stream token id.
• decoder_start_token_id (int, optional) – This model starts decoding with eos_token_id
• encoder_layerdrop – (float, optional, defaults to 0.0): The LayerDrop probability for the encoder; Google “layerdrop arxiv”, as it’s not explainable in one line.
• decoder_layerdrop – (float, optional, defaults to 0.0): The LayerDrop probability for the decoder; Google “layerdrop arxiv”, as it’s not explainable in one line.
• is_encoder_decoder (bool, optional, defaults to True) – Whether this is an encoder/decoder model.
• tie_word_embeddings (bool, optional, defaults to False) – Whether to tie input and output embeddings.
• num_beams (int, optional, defaults to 5) – Number of beams for beam search that will be used by default in the generate method of the model. 1 means no beam search.
• length_penalty (float, optional, defaults to 1) – Exponential penalty to the length that will be used by default in the generate method of the model.
• early_stopping (bool, optional, defaults to False) – Flag that will be used by default in the generate method of the model. Whether to stop the beam search when at least num_beams sentences are finished per batch or not.
• use_cache (bool, optional, defaults to True) – Whether or not the model should return the last key/values attentions (not used by all models).
• Examples:
>>> from transformers import FSMTConfig, FSMTModel
>>> config = FSMTConfig.from_pretrained('facebook/wmt19-en-ru')
>>> model = FSMTModel(config)
to_dict()[source]
Serializes this instance to a Python dictionary. Overrides the default to_dict() from PretrainedConfig.
Returns
Dictionary of all the attributes that make up this configuration instance,
Return type
Dict[str, any]
## FSMTTokenizer
class transformers.FSMTTokenizer(langs=None, src_vocab_file=None, tgt_vocab_file=None, merges_file=None, do_lower_case=False, unk_token='<unk>', bos_token='<s>', sep_token='</s>', pad_token='<pad>', **kwargs)[source]
Construct a FAIRSEQ Transformer tokenizer. Based on Byte-Pair Encoding. The tokenization process is the following:
• Moses preprocessing and tokenization.
• Normalizing all inputs text.
• The arguments special_tokens and the function set_special_tokens can be used to add additional symbols (like “__classify__”) to a vocabulary.
• The argument langs defines a pair of languages.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
Parameters
• langs (List[str]) – A list of two languages to translate from and to, for instance ["en", "ru"].
• src_vocab_file (str) – File containing the vocabulary for the source language.
• tgt_vocab_file (st) – File containing the vocabulary for the target language.
• merges_file (str) – File containing the merges.
• do_lower_case (bool, optional, defaults to False) – Whether or not to lowercase the input when tokenizing.
• unk_token (str, optional, defaults to "<unk>") – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
• bos_token (str, optional, defaults to "<s>") –
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
Note
When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.
• sep_token (str, optional, defaults to "</s>") – The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
• pad_token (str, optional, defaults to "<pad>") – The token used for padding, for example when batching sequences of different lengths.
build_inputs_with_special_tokens(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int][source]
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A FAIRSEQ Transformer sequence has the following format:
• single sequence: <s> X </s>
• pair of sequences: <s> A </s> B </s>
Parameters
• token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.
• token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.
Returns
List of input IDs with the appropriate special tokens.
Return type
List[int]
create_token_type_ids_from_sequences(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int][source]
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A FAIRSEQ Transformer sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
Parameters
• token_ids_0 (List[int]) – List of IDs.
• token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.
Returns
List of token type IDs according to the given sequence(s).
Return type
List[int]
get_special_tokens_mask(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False) → List[int][source]
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.
Parameters
• token_ids_0 (List[int]) – List of IDs.
• token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.
• already_has_special_tokens (bool, optional, defaults to False) – Whether or not the token list is already formatted with special tokens for the model.
Returns
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Return type
List[int]
prepare_seq2seq_batch(src_texts: List[str], tgt_texts: Optional[List[str]] = None, max_length: Optional[int] = None, max_target_length: Optional[int] = None, padding: str = 'longest', return_tensors: str = None, truncation: bool = True, **kwargs) → transformers.tokenization_utils_base.BatchEncoding
Prepare model inputs for translation. For best performance, translate one sentence at a time.
Parameters
• src_texts (List[str]) – List of documents to summarize or source language texts.
• tgt_texts (list, optional) – List of summaries or target language texts.
• max_length (int, optional) – Controls the maximum length for encoder inputs (documents to summarize or source language texts) If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
• max_target_length (int, optional) – Controls the maximum length of decoder inputs (target language texts or summaries) If left unset or set to None, this will use the max_length value.
• padding (bool, str or PaddingStrategy, optional, defaults to False) –
Activates and controls padding. Accepts the following values:
• True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence if provided).
• 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
• False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
• return_tensors (str or TensorType, optional) –
If set, will return tensors instead of list of python integers. Acceptable values are:
• 'tf': Return TensorFlow tf.constant objects.
• 'pt': Return PyTorch torch.Tensor objects.
• 'np': Return Numpy np.ndarray objects.
• truncation (bool, str or TruncationStrategy, optional, defaults to True) –
Activates and controls truncation. Accepts the following values:
• True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
• 'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
• 'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
• False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
• **kwargs – Additional keyword arguments passed along to self.__call__.
Returns
A BatchEncoding with the following fields:
• input_ids – List of token ids to be fed to the encoder.
• attention_mask – List of indices specifying which tokens should be attended to by the model.
• labels – List of token ids for tgt_texts.
The full set of keys [input_ids, attention_mask, labels], will only be returned if tgt_texts is passed. Otherwise, input_ids, attention_mask will be the only keys.
Return type
BatchEncoding
save_vocabulary(save_directory: str, filename_prefix: Optional[str] = None) → Tuple[str][source]
Save only the vocabulary of the tokenizer (vocabulary + added tokens).
This method won’t save the configuration and special token mappings of the tokenizer. Use _save_pretrained() to save the whole state of the tokenizer.
Parameters
• save_directory (str) – The directory in which to save the vocabulary.
• filename_prefix (str, optional) – An optional prefix to add to the named of the saved files.
Returns
Paths to the files saved.
Return type
Tuple(str)
## FSMTModel
class transformers.FSMTModel(config: transformers.models.fsmt.configuration_fsmt.FSMTConfig)[source]
The bare FSMT Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
Parameters
config (FSMTConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
forward(input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, encoder_outputs: Optional[Tuple] = None, past_key_values=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]
The FSMTModel forward method, overrides the __call__() special method.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Parameters
• input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using FSMTTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
What are input IDs?
• attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) –
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
• 1 for tokens that are not masked,
• 0 for tokens that are masked.
What are attention masks?
• decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) – Provide for translation and summarization training. By default, the model will create this tensor by shifting the input_ids right, following the paper.
• decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) – Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. If you want to change padding behavior, you should read modeling_fsmt._prepare_fsmt_decoder_inputs() and modify. See diagram 1 in the paper for more info on the default strategy.
• encoder_outputs (Tuple(torch.FloatTensor), optional) – Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
• past_key_values (Tuple(torch.FloatTensor) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) – Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
• use_cache (bool, optional, defaults to True) – If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
• output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
• output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.
Returns
A Seq2SeqModelOutput (if return_dict=True is passed or when config.return_dict=True) or a tuple of torch.FloatTensor comprising various elements depending on the configuration (FSMTConfig) and inputs.
• last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
• past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) – Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
• decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
• decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
• cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
• encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Sequence of hidden-states at the output of the last layer of the encoder of the model.
• encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
• encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
Return type
Seq2SeqModelOutput or tuple(torch.FloatTensor)
Example:
>>> from transformers import FSMTTokenizer, FSMTModel
>>> import torch
>>> tokenizer = FSMTTokenizer.from_pretrained('facebook/wmt19-ru-en')
>>> model = FSMTModel.from_pretrained('facebook/wmt19-ru-en')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
## FSMTForConditionalGeneration
class transformers.FSMTForConditionalGeneration(config: transformers.models.fsmt.configuration_fsmt.FSMTConfig)[source]
The FSMT Model with a language modeling head. Can be used for summarization.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
Parameters
config (FSMTConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
forward(input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, encoder_outputs=None, past_key_values=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]
The FSMTForConditionalGeneration forward method, overrides the __call__() special method.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Parameters
• input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using FSMTTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
What are input IDs?
• attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) –
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
• 1 for tokens that are not masked,
• 0 for tokens that are masked.
What are attention masks?
• decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) – Provide for translation and summarization training. By default, the model will create this tensor by shifting the input_ids right, following the paper.
• decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) – Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. If you want to change padding behavior, you should read modeling_fsmt._prepare_fsmt_decoder_inputs() and modify. See diagram 1 in the paper for more info on the default strategy.
• encoder_outputs (Tuple(torch.FloatTensor), optional) – Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
• past_key_values (Tuple(torch.FloatTensor) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) – Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
• use_cache (bool, optional, defaults to True) – If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
• output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
• output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
• return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.
• labels (torch.LongTensor of shape (batch_size, sequence_length), optional) – Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
A Seq2SeqLMOutput (if return_dict=True is passed or when config.return_dict=True) or a tuple of torch.FloatTensor comprising various elements depending on the configuration (FSMTConfig) and inputs.
• loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Language modeling loss.
• logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
• past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) – Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
• decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
• decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
• cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
• encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Sequence of hidden-states at the output of the last layer of the encoder of the model.
• encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
• encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
Return type
Seq2SeqLMOutput or tuple(torch.FloatTensor)
Translation example:
from transformers import FSMTTokenizer, FSMTForConditionalGeneration
mname = "facebook/wmt19-ru-en"
model = FSMTForConditionalGeneration.from_pretrained(mname)
tokenizer = FSMTTokenizer.from_pretrained(mname)
src_text = "Машинное обучение - это здорово, не так ли?"
input_ids = tokenizer.encode(src_text, return_tensors='pt')
outputs = model.generate(input_ids, num_beams=5, num_return_sequences=3)
for i, output in enumerate(outputs):
decoded = tokenizer.decode(output, skip_special_tokens=True)
print(f"{i}: {decoded})
# 1: Machine learning is great, isn't it? ... | 2021-01-26 16:00:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19127702713012695, "perplexity": 10076.194432410372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704800238.80/warc/CC-MAIN-20210126135838-20210126165838-00218.warc.gz"} |
https://www.gamedev.net/forums/topic/612573-n64-quality-water/ | # N64 Quality Water...
## Recommended Posts
Hi again all.
I'm trying to mimic some old Nintendo 64 quality water...
The key points would be to interact with the water and to be able to run it on a low end android device. I'd like the water to animate, and for objects to be able to interact; but it doesn't need to be super realistic (IE: no reflection)
The issue is i'm having a hard time finding resources on rendering water that aren't about particle systems needing the latest and greatest hardware.
Can anyone please point me in the right direction?
##### Share on other sites
The surface looks like it's just a triangle mesh, rendered after the scene with a surface texture, and the waves are done by shifting the vertices up and down, probably in the shader. It's probably just a simple set of sine waves of different frequencies added together. It's fast and also easily reproducible for when you want to do (say) collision detection.
The interactions with the objects can be done in two ways.
Firstly, for decorative objects (say things floating on the surface of the water) they can be drawn with a shader which just computes the same sort of offset as the waves based on their position and adds that to all coordinates.
For player objects you'd probably do the position calculation on the CPU and send them prepositioned. This allows them to do things like sink into the water upon landing.
Scatter water splash particle effects around at all the interaction points based on what interaction is happening and that'll help obscure some of the polygon edge artefacts (although many are still visible in that video).
The only complicated part is going to be doing the bouncey "floating" behaviour for your player objects but it'll end up being a fairly small lump of code run a few times a frame (so efficiency won't be too much of a stress) with a few constant factors which will need tuning by hand until you get a behaviour you're happy with -- it's that that tuning process which will give your game its character and feel.
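To make the floating behaviour concrete, here is a rough sketch (Python used purely for illustration; every constant is invented and would need exactly the hand-tuning described above):

def buoyancy(obj_y, surface_y, vel_y, strength=20.0, damping=4.0):
    # Spring-like push toward the wave surface; surface_y comes from
    # evaluating the same sine-wave height function used for rendering.
    depth = surface_y - obj_y
    if depth <= 0.0:
        return 0.0          # object is above the water: no buoyancy
    return strength * depth - damping * vel_y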
Thanks Katie.
That's pretty much what I had in mind too, but after some searching I found all sorts of particle this and spring simulation that, which made me second-guess myself. I also found a cool video on Gerstner Waves which I might try to do. Having characters interact with water is going to be hard, but I think I'll be able to figure something out. Thanks again!
EDIT:
Actually, I have one more question. I can make a pretty nice wave effect using sinf(x) / x, however I have no idea how to animate this... Or how to combine multiple waves so my water isn't just going left to right... Any suggestions?
Actually, I have one more question. I can make a pretty nice wave effect using sinf(x) / x, however I have no idea how to animate this... Or how to combine multiple waves so my water isn't just going left to right... Any suggestions?
Just add a time coefficient in there - replace x with (x + ct) where t is time and c is some constant you can tweak to adjust the speed. To get waves moving in different directions, instead of x, use some combination of x and y. To combine waves in different directions, just add them together.
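A minimal sketch of that advice (Python here purely for illustration; all coefficients are made up and meant to be tweaked):

import math

def height(x, y, t):
    # Three sine waves with different directions, frequencies and speeds.
    h = 0.50 * math.sin(0.9 * x + 1.6 * t)             # moving along x
    h += 0.25 * math.sin(0.4 * x + 0.7 * y - 1.1 * t)  # diagonal wave
    h += 0.15 * math.sin(1.3 * y + 2.0 * t)            # moving along y
    return h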
BattleMetalChri,
Thanks for the reply. Works perfectly now!
One last question:
So, right now in my update function i'm calculating sine twice for every vertex. I kind of want to change this to use a lookup table as my project will be run on an embedded device (Nindento dsi).
I think constructing a lookup table then % the argument of sin to keep it in the lookup range would do the trick, but i don't know how the floating point value of timeCoefficient would effect this.
Can anyone point me in the right direction to using a lookup table for this?
void Update()
{
    DWORD thisTime = GetTickCount();
    float deltaTime = (float)(thisTime - lastTime) * 0.001f;
    lastTime = thisTime;
    timeCoefficient += deltaTime;
    for (int z = 0; z < WATER_WIDTH; ++z)
    {
        for (int x = 0; x < WATER_HEIGHT; ++x)
        {
            // Two superimposed travelling waves; note that sinf(x)/x blows up
            // at x == 0 (and likewise for z), so guard or offset the first row/column.
            deformData[x][z] = sinf(x + timeCoefficient) / x;
            deformData[x][z] += sinf(z + timeCoefficient) / z;
            water[x][z][1] = deformData[x][z];
        }
    }
}
The surface looks like it's just a triangle mesh, rendered after the scene with a surface texture and the waves are done by shifting the verticies up and down, probably in the shader. It's probably just a simple set of sine waves of different frequencies added together. It's fast and also easily reproducible for when you want to do (say) collision detection.
In the shader? that was on N64!
Would be interested to see some screenshots of your results PrjM
Here's a description of what we used to do back in the old days.
http://freespace.virgin.net/hugo.elias/graphics/x_water.htm
When you've got it working you can just write some values into the current buffer for each position where you want objects to interact with the water.
This method is still used for games today when you need to get the water to ripple around objects.
Can anyone point me in the right direction to using a lookup table for this?
Take a look at Direct Digital Synthesis.
No need to even calculate sine values, just keep them in a lookup table.
If a function call at a fixed frequency is available to you, this is an efficient way to calculate sine values:
#define INC_FREQ 10000 // hertz
#define ACC_BITS 32
#define ACC_VALUES 4294967296
#define SAMPLE_BITS 8 // There are 2^(SAMPLE_BITS) samples
#define FREQ_INCR(f) (((long)f * (long)ACC_VALUES) / INC_FREQ)

void UpdateSineWaves()
{
    for (int i = 0; i < numSineWaves; i++)
    {
        // Advance each oscillator's phase accumulator, then use its top
        // SAMPLE_BITS bits to index the precomputed sine table.
        sineWaves[i].accumulator += sineWaves[i].frequencyIncrement;
        sineWaves[i].amplitude = sineWave[sineWaves[i].accumulator >> (ACC_BITS - SAMPLE_BITS)];
    }
}
To adjust the frequency of a sine wave, just change the frequency increment. This doesn't affect the phase, so the result is very smooth, like what might be wanted for water waves. ;)
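The same idea in a few lines of Python, just to illustrate (the table size, frequency and update rate are arbitrary):

import math

SAMPLE_BITS, ACC_BITS = 8, 32
table = [math.sin(2 * math.pi * k / (1 << SAMPLE_BITS)) for k in range(1 << SAMPLE_BITS)]

acc = 0
freq_incr = (440 * (1 << ACC_BITS)) // 10000  # 440 Hz at a 10 kHz update rate
for _ in range(5):
    acc = (acc + freq_incr) & ((1 << ACC_BITS) - 1)  # phase accumulator wraps for free
    print(table[acc >> (ACC_BITS - SAMPLE_BITS)])    # top bits index the lookup table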
Wow, just wow! Thanks sooo much @whitechaos35 i would have never found that on my own!
| 2019-10-14 06:52:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18938472867012024, "perplexity": 1345.1633619969448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986649232.14/warc/CC-MAIN-20191014052140-20191014075140-00289.warc.gz"}
http://gmatclub.com/forum/m7-q14-118520.html | # M7,Q14
Senior Manager (07 Aug 2011, 13:13):
What is the number of integers from 1 to 1000 (inclusive) that are divisible by neither 11 nor by 35?
884
890
892
910
945
Senior Manager (07 Aug 2011, 13:17):
The explanation says
"To count the number of integers from 1 to $$N$$ (inclusive) that are divisible by $$x$$ , find the value of $$\frac{N}{x}$$ "
But MGMAT says
(Last-First)/Increment + 1.
How come these two are different?
Also in the explanation, the 1000/11 figure is rounded down, even though the number is 90.9. Why not round it up?
Manager (09 Aug 2011, 15:40):
Let's try to understand the logic behind the formula by taking a smaller example: how many numbers between 1 and 9 are divisible by 2?
Going over the multiples of 2 (2, 4, 6, 8), there are 4.
9/2 = 4.5, which rounds down to 4.
If we rounded it up we would be counting 10 as well; that is why it is important to round down in these examples.
As for the formulae that you have listed, I would use the first formula; the second seems to be taken out of context.
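Applying this to the original question: there are $$\lfloor 1000/11 \rfloor = 90$$ multiples of 11, $$\lfloor 1000/35 \rfloor = 28$$ multiples of 35, and $$\lfloor 1000/385 \rfloor = 2$$ multiples of both (the LCM of 11 and 35 is 385). By inclusion-exclusion, the count divisible by neither is $$1000 - (90 + 28 - 2) = 884$$. And the two formulae do agree once "Last" and "First" are taken as the largest and smallest multiples in range: $$(990 - 11)/11 + 1 = 90$$ as well.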
| 2016-10-25 22:46:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39521324634552, "perplexity": 5040.779634317347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720468.71/warc/CC-MAIN-20161020183840-00048-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.ias.ac.in/describe/article/pram/093/02/0028 | • Numerical simulation of space-fractional Helmholtz equation arising in seismic wave propagation, imaging and inversion
• # Fulltext
https://www.ias.ac.in/article/fulltext/pram/093/02/0028
• # Keywords
Fractional Helmholtz equation; q-fractional homotopy analysis transform method; fractional variation iteration method; Caputo fractional derivative
• # Abstract
In this paper, a reliable numerical scheme, the q-fractional homotopy analysis transform method (q-FHATM), is proposed to examine the Helmholtz equation of fractional order arising in seismic wave propagation, imaging and inversion. Sufficient conditions for its convergence and error estimates are established. The q-FHATM provides a solution in a rapidly convergent series. Results for different fractional values of space derivatives are compared with existing methods and discussed with the help of figures. A proper selection of parameters yields approximations identical to the exact solution. The parameter $\bar{h}$ offers an expedient way of controlling the region of convergence of the solution. Test examples are provided to illustrate the accuracy and competency of the proposed scheme. The outcomes show that the scheme is attractive, user-friendly, reliable and highly effective.
• # Author Affiliations
1. Department of Mathematics, National Institute of Technology, Kurukshetra 136 119, India
2. Department of Mathematics, Institute of Applied Sciences and Humanities, GLA University, Mathura 281 406, India
| 2023-03-27 07:24:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1910022795200348, "perplexity": 2083.917285580067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948609.41/warc/CC-MAIN-20230327060940-20230327090940-00626.warc.gz"}
https://tex.stackexchange.com/questions/504316/bold-symbol-in-glossary-but-not-in-text | # Bold Symbol in Glossary but not in text
I have a file with two types of glossaries and want the symbols to be bold within the glossary (which they currently are in the code), but when I use them with \gls{xx} in the text or a table I do not want them to appear bold. I hope someone can help me with this.
\documentclass[12pt]{scrreprt}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{graphicx}
\usepackage{booktabs}
\usepackage{abstract}
\usepackage[footskip=2cm, hmargin=2.5cm,vmargin=3cm,bindingoffset=0.5cm] {geometry}
\usepackage{setspace} %enable onehalfspacing
\onehalfspacing
\usepackage[numbered,framed]{matlab-prettifier} %integration matlab files, required for glossary
\usepackage{multicol} %enable multicolumn
\usepackage[acronym, nonumberlist, nopostdot, nogroupskip]{glossaries}
\usepackage{glossary-mcols}
\renewcommand{\glsclearpage}{}
\setglossarysection{section}
\newglossary*{symbols}{List of Symbols}
\makenoidxglossaries
\glsnoexpandfields
\newacronym{pvt}{PVT}{Pressure Volume Temperature}
\newglossaryentry{ng}{type=symbols,name=$\boldsymbol n\raisebox{-.4ex}{\tiny G}$,sort=ng,description={Degree at k\raisebox{-.4ex}{\tiny{{rG}}}}}
\begin{document}
\pagenumbering{roman}
\gls{pvt} and \gls{ng}
\cleardoublepage
\phantomsection
%\twocolumn
\singlespacing
\chapter*{Nomenclature}
\begin{multicols}{2}
\printnoidxglossaries
\end{multicols}
\end{document}
• Welcome to TeX.SE! Good first question, it is very helpful that you have included an MWE (Minimal Working Example) with your code that I could use for my answer. Maybe for your next question you can try to make the MWE even more minimal? There were a lot of packages and other code that was not necessary for reproducing the issue, they could be removed as well. – Marijn Aug 15 at 15:00
• @Marijn Thank you very much, your answer helped a lot! Will try to minimize it as much as possible next time – Lucky Aug 19 at 7:45
It is possible to define the format of a glossary entry in the main text using the command \defglsentryfmt. This command has an optional argument for the glossary type, so you can define it only for terms of type symbols.
The idea is to use this command to switch off \boldsymbol in the main text. One way to do that is to store the original definition of \boldsymbol to another macro (for example \origboldsym), then temporarily redefine \boldsymbol to mean nothing (i.e., \relax), print the glossary entry, and restore the original definition. Redefining the entry format only affects the main text, so in the List of Symbols the original definition of \boldsymbol is used.
MWE:
\documentclass[12pt]{scrreprt}
\usepackage[acronym, nonumberlist, nopostdot, nogroupskip]{glossaries}
\setglossarysection{section}
\newglossary*{symbols}{List of Symbols}
\makenoidxglossaries
\glsnoexpandfields
\defglsentryfmt[symbols]{%
\let\origboldsym\boldsymbol%
\let\boldsymbol\relax%
\glsgenentryfmt%
\let\boldsymbol\origboldsym%
}
\newacronym{pvt}{PVT}{Pressure Volume Temperature}
\newglossaryentry{ng}{type=symbols,name=$\boldsymbol n\raisebox{-.4ex}{\tiny G}$,sort=ng,description={Degree at k\raisebox{-.4ex}{\tiny{{rG}}}}}
\begin{document}
\gls{pvt} and \gls{ng}
\printnoidxglossaries
\end{document}
Result: | 2019-11-14 01:08:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8661752939224243, "perplexity": 1524.6364803149472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667767.6/warc/CC-MAIN-20191114002636-20191114030636-00133.warc.gz"} |
https://socratic.org/questions/if-z-max-z-2-z-2-then | # If |z| = Max{|z-2|,|z+2|}, then?
## A) $| z + \overline{z} | = 1$ B) $z + \overline{z} = 2^2$ C) $| z + \overline{z} | = 2$ D) None of these
Mar 21, 2018
C)
#### Explanation:
This is equivalent to:
Determine $x , y$ such that
${x}^{2} + {y}^{2} = \max \left({\left(x - 2\right)}^{2} + {y}^{2} , {\left(x + 2\right)}^{2} + {y}^{2}\right)$
which is equivalent to
${x}^{2} = \max \left({\left(x - 2\right)}^{2} , {\left(x + 2\right)}^{2}\right)$ or
${x}^{2} = {\left(x \pm 2\right)}^{2} \Rightarrow 0 = \pm 4 x + 4 \Rightarrow x = \pm 1$ then
${x}^{2} = \max \left({\left(x - 2\right)}^{2} , {\left(x + 2\right)}^{2}\right) \Rightarrow x = \pm 1$
So this gives a two lines set
$\left\{- 1 , y\right\}$ and $\left\{1 , y\right\}$ or ${z}_{1} = 1 + i y$ and ${z}_{2} = - 1 + i y$
then we have
${z}_{1} + {\overline{z}}_{1} = 2 \Rightarrow \left\mid {z}_{1} + {\overline{z}}_{1} \right\mid = 2$ and
${z}_{2} + {\overline{z}}_{2} = - 2 \Rightarrow \left\mid {z}_{2} + {\overline{z}}_{2} \right\mid = 2$ | 2019-08-22 20:17:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 14, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8525511026382446, "perplexity": 1784.2671334679337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317359.75/warc/CC-MAIN-20190822194105-20190822220105-00018.warc.gz"} |
https://www.physicsforums.com/threads/system-of-equations-help.337391/ | # System of equations : Help
1. Sep 14, 2009
### CanaBra
I have been trying to solve this problem for a whole week without success, I need help.
Solve the following for x,y and z:
2x+5y-z = 18
7x-y+4z = 22
6x+2y-3z = 0.1x + 0.2y + 0.3z
I have already combined all the terms of eq. #3 and set it equal to zero, but it didn't work.
I also multiplied eq. #3 by 1 and tried to have a common x for the other two equations to eliminate x, and it didn't work. I've tried many other tactics but it doesn't work.
Thank you
2. Sep 15, 2009
### CRGreathouse
Pari:
Code (Text):
matsolve([2,5,-1;7,-1,4;6-.1,2-.2,-3-.3],[18,22,0]~)
TI-BASIC:
Code (Text):
rref([[2,5,-1,18][7,-1,4,22][6-.1,2-.2,-3-.3,0]])
Matlab:
Code (Text):
linsolve([2 5 -1; 7 -1 4; 6-.1 2-.2 -3-.3], [18; 22; 0])
Mathematica:
Code (Text):
LinearSolve[{{2,5,-1},{7,-1,4},{6-.1,2-.2,-3-.3}},{18,22,0}]
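Python with NumPy, added here for comparison (not in the original post):
Code (Text):
import numpy as np

A = np.array([[2, 5, -1], [7, -1, 4], [6 - .1, 2 - .2, -3 - .3]])
b = np.array([18, 22, 0])
print(np.linalg.solve(A, b))  # [x, y, z]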
3. Sep 15, 2009
### KLoux
Maybe you can show some steps or explain in more detail what your problem is? Show us where you're getting stuck or explain what "it didn't work" means.
-Kerry
4. Sep 15, 2009
### HallsofIvy
2x+ 5y- z= 18
7x- y+ 4z= 22
5.9x+ 1.8y- 3.3z= 0 ?
Why in the world would you multiply anything by "1"? Do you mean "10"? That would give you 59x+ 18y- 33z= 0. Getting a "common x" for the other two equations (I guess you mean the same coefficient) would give 14x+ 35y- 7z= 126 and 14x- 2y+ 8z= 44. Subtracting the second equation from the first gives 37y- 15z= 82. What do you mean "it doesn't work"?
5. Sep 15, 2009
### KLoux
Ahhh - I read it the same way HallsOfIvy did the first time - perhaps you multiplied eq. 3 by eq. 1? Maybe this is the problem - you don't want to multiply, you want to add. Really, you should multiply an equation by a carefully selected constant, then add it to another equation. For example, if you multiply eq. 1 by 4, and add it to eq. 2, you get
$$8x + 20y - 4z + 7x - y +4z = 72 + 22$$
or
$$15x + 19y = 94$$
Does this help get you started?
-Kerry
Last edited: Sep 15, 2009
6. Sep 16, 2009
### hotvette
7. Sep 17, 2009
### CanaBra
Thank you everyone,
I was confused, but with your help found the solution to this problem | 2018-02-19 08:30:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6371781229972839, "perplexity": 2463.3128336290124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812556.20/warc/CC-MAIN-20180219072328-20180219092328-00540.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-1-foundations-for-algebra-common-core-cumulative-standards-review-selected-response-page-75/4 | # Chapter 1 - Foundations for Algebra - Common Core Cumulative Standards Review - Selected Response - Page 75: 4
D
#### Work Step by Step
$12+8(.85)$ Now, we simplify $12+6.80=18.80$
| 2020-02-18 18:36:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5433341264724731, "perplexity": 2565.9616151058717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143805.13/warc/CC-MAIN-20200218180919-20200218210919-00177.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/dcds.2013.33.1333 | Article Contents
Article Contents
Semigroup representations in holomorphic dynamics
• We use semigroup theory to describe the group of automorphisms of some semigroups of interest in holomorphic dynamical systems. We show, with some examples, that representation theory of semigroups is related to usual constructions in holomorphic dynamics. The main tool for our discussion is a theorem due to Schreier. We extend this theorem, and our results in semigroups, to the setting of correspondences and holomorphic correspondences.
Mathematics Subject Classification: 37F05, 37F10, 20M30.
Citation:
• [1] A. F. Beardon and T. W. Ng, On Ritt's factorization of polynomials, J. London Math. Soc. (2), 62 (2000), 127-138. doi: 10.1093/rpc/2000rpc587.
• [2] C. Cabrera and P. Makienko, On dynamical Teichmüller spaces, Conf. Geom. and Dyn., 14 (2010), 256-268. doi: 10.1090/S1088-4173-2010-00214-6.
• [3] A. Douady, Systèmes dynamiques holomorphes, Bourbaki seminar, 1982/83, Astérisque, 105, Soc. Math. France, Paris, (1983), 39-63.
• [4] A. Douady and J. H. Hubbard, A proof of Thurston's topological characterization of rational functions, Acta Math., 171 (1993), 263-297. doi: 10.1007/BF02392534.
• [5] A. Eremenko, On the characterization of a Riemann surface by its semigroup of endomorphisms, Trans. Amer. Math. Soc., 338 (1993), 123-131. doi: 10.2307/2154447.
• [6] A. Hinkkanen, Functions conjugating entire functions to entire functions and semigroups of analytic endomorphisms, Complex Variables and Elliptic Equations, 18 (1992), 149-154. doi: 10.1080/17476939208814541.
• [7] M. Lyubich and Y. Minsky, Laminations in holomorphic dynamics, J. Diff. Geom., 47 (1997), 17-94.
• [8] R. Mañé, P. Sad and D. Sullivan, On the dynamics of rational maps, Ann. Scien. Ec. Norm. Sup. Paris (4), 16 (1983), 193-217.
• [9] K. D. Magill, Jr., A survey of semigroups of continuous self maps, Semigroup Forum, 11 (1975/76), 189-282. doi: 10.1007/BF02195270.
• [10] C. McMullen, "Complex Dynamics and Renormalization," Annals of Mathematics Studies, vol. 135, Princeton University Press, Princeton, NJ, 1994.
• [11] ______, "Renormalization and 3-Manifolds Which Fiber Over the Circle," Annals of Mathematics Studies, vol. 142, Princeton University Press, Princeton, NJ, 1996.
• [12] J. Milnor, "Dynamics of One Complex Variable," Friedr. Vieweg & Sohn, 1999.
• [13] J. F. Ritt, Prime and composite polynomials, Trans. Amer. Math. Soc., 23 (1922), 51-66. doi: 10.1090/S0002-9947-1922-1501205-4.
• [14] J. Schreier, Über Abbildungen einer abstrakten Menge auf ihre Teilmengen, Fund. Math., (1937), 261-264. | 2022-12-05 08:14:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.7300329208374023, "perplexity": 1763.2352464960163}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711013.11/warc/CC-MAIN-20221205064509-20221205094509-00259.warc.gz"}
https://alpha.physionet.org/content/lyapunov/1.0.0/ | Software Open Access
# A practical method for calculating Lyapunov exponents from small data sets
Published: Jan. 16, 2001. Version: 1.0.0
When using this resource, please cite the original publication:
M.T. Rosenstein, J.J. Collins, and C.J. De Luca. A practical method for calculating largest Lyapunov exponents from small data sets. This article originally appeared in Physica D 65:117-134, 1993
Please include the standard citation for PhysioNet:
Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh, Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE. PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals (2003). Circulation. 101(23):e215-e220.
### Abstract
Detecting the presence of chaos in a dynamical system is an important problem that is solved by measuring the largest Lyapunov exponent. Lyapunov exponents quantify the exponential divergence of initially close state-space trajectories and estimate the amount of chaos in a system. We present a new method for calculating the largest Lyapunov exponent from an experimental time series. The method follows directly from the definition of the largest Lyapunov exponent and is accurate because it takes advantage of all the available data. We show that the algorithm is fast, easy to implement, and robust to changes in the following quantities: embedding dimension, size of data set, reconstruction delay, and noise level. Furthermore, one may use the algorithm to calculate simultaneously the correlation dimension. Thus, one sequence of computations will yield an estimate of both the level of chaos and the system complexity.
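Purely as an illustration of the idea sketched above (tracking the divergence of initially close neighbors in a reconstructed state space), and not the authors' reference implementation, a rough Python version might look like this (all parameters arbitrary):

import numpy as np

def largest_lyapunov(x, dim=5, tau=1, min_sep=10, steps=50):
    # Delay-embed the scalar series x into dim-dimensional state space.
    n = len(x) - (dim - 1) * tau
    emb = np.array([x[i:i + n] for i in range(0, dim * tau, tau)]).T
    m = n - steps
    div = np.zeros(steps)
    counts = np.zeros(steps)
    for i in range(m):
        # Nearest neighbor of point i, excluding temporally close points.
        d = np.linalg.norm(emb[:m] - emb[i], axis=1)
        d[max(0, i - min_sep):i + min_sep] = np.inf
        j = int(np.argmin(d))
        # Follow both trajectories and accumulate their log separation.
        for k in range(steps):
            sep = np.linalg.norm(emb[i + k] - emb[j + k])
            if sep > 0:
                div[k] += np.log(sep)
                counts[k] += 1
    curve = div / np.maximum(counts, 1)
    # The largest exponent is roughly the slope of the mean divergence curve.
    return np.polyfit(np.arange(steps), curve, 1)[0]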
The full article may be downloaded in PDF (783KB) or gzip-compressed PostScript (361KB) formats.
Visualizing the effects of filtering chaotic signals
Reconstruction expansion as a geometry-based framework for choosing proper delay times
##### Access
Access Policy:
Anyone can access the files, as long as they conform to the terms of the specified license.
Topics:
chaos complexity
## Files
Total uncompressed size: 1.4 MB.
##### Access the files
wget -r -N -c -np https://alpha.physionet.org/files/lyapunov/1.0.0/ | 2019-10-15 07:23:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4651356041431427, "perplexity": 2215.1905919781043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986657586.16/warc/CC-MAIN-20191015055525-20191015083025-00259.warc.gz"} |
https://webapps.stackexchange.com/questions/79105/is-there-a-way-i-can-search-urls-on-google-bing-verbatim-with-dots | # Is there a way I can search URL's on Google/Bing verbatim with dots?
I'm trying to do some security related research with Google but it filters out . and fuzzy matches across the whole page. Is there a search engine that will let me do exact in-URL matching?
## migrated from security.stackexchange.com Jun 11 '15 at 11:39
This question came from our site for information security professionals.
Basically, you can “tell” the Google web search app how to interpret your input.
• Literal search: you can use " to enclose a literal string to search for, like "BandIsBand"
• Domain Search: you can use site:<domain or URL>. This can limit the results to only include the site/domain you specify, like banana site:stackexchange.com
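Also worth knowing (an addition beyond the original answer): Google has an inurl: operator, e.g. inurl:editorconfig, which restricts matching to the URL itself. Note that punctuation such as the dot is still ignored during matching, so it cannot force a literal .editorconfig match either.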
• I tried that but it won't let me find say .editorconfig in the URL only. – Kit Sunde Jun 12 '15 at 8:13
• I'm not actually searching for a particular domain, I'm searching for URLs. For example: http://www.google.com/foo/bar/.test/foo I want to find URLs that contain .test. – Kit Sunde Jun 12 '15 at 10:59
• I do not think google lets you search specificly for URL parts. but you can just search for the .text and add some context to the like wordpress to try and find wordpress sites with a .config. – LvB Jun 12 '15 at 11:43 | 2019-10-16 09:13:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23689600825309753, "perplexity": 2298.1950721111775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986666959.47/warc/CC-MAIN-20191016090425-20191016113925-00396.warc.gz"} |
https://brilliant.org/problems/quad-rat-roots/ | Algebra Level 3
The quadratic equation $\large x^2 + [a^2 - 5a + b + 4]x + b = 0$ has roots -5 and 1. Find the number of integral values of a.
Note: [.] denotes the greatest integer function.
| 2017-01-17 23:39:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9995071887969971, "perplexity": 3069.470623183626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00499-ip-10-171-10-70.ec2.internal.warc.gz"}
http://www.varsitytutors.com/ap_statistics-help/how-to-find-descriptive-data-from-a-z-score | # AP Statistics : How to find descriptive data from a z-score
## Example Questions
### Example Question #1 : How To Find Descriptive Data From A Z Score
There are four suspects in a police line-up, and one of them committed a robbery. The suspect is described as "abnormally tall". In this case, "abnormally" refers to a height at least two standard deviations away from the average height. Their heights are converted into the following z-scores:
Suspect 1: 2.3
Suspect 2: 1.2
Suspect 3: 0.2
Suspect 4: -1.2.
Which of the following suspects committed the crime?
Suspect 4
Suspect 2
Suspect 3
Suspect 1
Suspect 1
Explanation:
Z-scores describe how many standard deviations a given observation is from the mean observation. Suspect 1's z-score is greater than two, which means that his height is at least two standard deviations greater than the average height and thus, based on the description, Suspect 1 is the culprit.
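As a quick script check of the same reasoning (my sketch, not part of the original explanation):

z_scores = {"Suspect 1": 2.3, "Suspect 2": 1.2, "Suspect 3": 0.2, "Suspect 4": -1.2}
# "Abnormally tall" means at least two standard deviations above the mean.
tall = [name for name, z in z_scores.items() if z >= 2]
print(tall)  # ['Suspect 1']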
### Example Question #1 : How To Find Descriptive Data From A Z Score
A value has a z-score of . The value is . . .
two standard deviations from the population mean
one standard deviation from the population mean
above the population mean
below the population mean
the same as the population mean
below the population mean
Explanation:
The z-score indicates how close a particular value is to the population mean and whether the value is above or below the mean. A positive z-score is always above the mean and a negative z-score is always below it. Here, we know the value is below the mean because we have a negative z-score.
### Example Question #2 : How To Find Descriptive Data From A Z Score
All of the students at a high school are given an entrance exam at the beginning of 9th grade. The scores on the exam have a mean of and a standard deviation of . Sally's z-score is . What is her score on the test?
Explanation:
The z-score equation is z = (x - μ)/σ.
To solve for x we have x = μ + z·σ.
### Example Question #3 : How To Find Descriptive Data From A Z Score
Your professor gave back the mean and standard deviation of your class's scores on the last exam.
Your friend says the z-score of her exam is .
What did she score on her exam?
Explanation:
The z-score is the number of standard deviations above the mean.
We can use the equation z = (x - μ)/σ and solve for x, i.e. x = μ + z·σ.
Two standard deviations above 75 is 85.
### Example Question #4 : How To Find Descriptive Data From A Z Score
Your boss gave back the mean and standard deviation of your team's sales over the last month.
Your friend says the z-score of her number of sales is .
How many sales did she make?
Explanation:
The z-score is the number of standard deviations above or below the mean.
We can use the known information with the formula z = (x - μ)/σ to solve for x.
### Example Question #5 : How To Find Descriptive Data From A Z Score
The following data set represents Mr. Marigold's students' scores on the final. The standard deviation for this data set is 8.41. If you scored 0.91 standard deviations worse than the mean, what was your score?
Explanation:
To work with a z-score, first we need to find the mean of the data set. By adding the scores together and dividing by 26, we get 81.15.
We know that your score is 0.91 standard deviations WORSE than the mean, which means that your z-score is -0.91. We can use the following formula for the z-score:
z = (x - μ)/σ
where z is the z-score, x is your data point, μ is the mean, 81.15, and σ is the standard deviation, which we are told is 8.41.
Multiplying both sides by 8.41 gives x - 81.15 = -0.91(8.41) ≈ -7.65,
so x ≈ 73.5, which is an actual score in the data set. That must be your grade.
### Example Question #5 : How To Find Descriptive Data From A Z Score
Your teacher gives you the z-score of your recent test, and says that the mean score was a 60, with a standard deviation of 6. Your z-score was a -2.5. What did you score on the test?
Explanation:
To find out your score on the test, we enter the given information into the z-score formula and solve for x:
z = (x - μ)/σ, where z is the z-score, μ is the mean, and σ is the standard deviation.
As such,
x = μ + z·σ = 60 + (-2.5)(6) = 45. So you scored a 45 on the test. | 2016-10-23 18:19:03 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8390065431594849, "perplexity": 1017.1061377761783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719397.0/warc/CC-MAIN-20161020183839-00292-ip-10-171-6-4.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/80412/chi-square-test-for-survey-data | # Chi Square test for survey data
I am working with a product that turns survey data into useful statistics. Reviewing their code, has made me somewhat nervous, and I'm not a statistician, so I hope I can ask for clarity of the following problem:
Out of a survey S, for a product P. Respondents where asked if they
1. liked the product
2. were indifferent
3. hated the product
The group of respondents were separated into men and women. The chart supplied by the software when crunching some survey data, says that "Men are significantly more likely to be likers." OR "Men and Women..." Or "Women..."
For me this already raises issues:
1. Men are significantly more likely to be likers than what?
2. Men and Women are more signicantly likely to be likers than what?
3. How are these things measured?
4. What test is being used?... etc.
When I had a look at the code, I noticed they were using a chi-test(!). I had to ask what exactly the null hypothesis was, because this was making less and less sense. Apparently the null hypothesis is that "the chance that men and women are likers is the same" ...ok, fine. But wait.
So, we have the following table:
Men Women Total
likers 54 46 100
indifferent 23 26 49
non-likers 22 31 53
Total 99 103 202
We can populate the expected counts for all three rows (each cell below shows observed-expected):
Men Women Total
likers 54-49 46-50 100
indifferent 23-24 26-24 49
non-likers 22-25 31-27 53
Total 99 103 202
The code then populates a matrix with chi values based on the above. The programmer decided that the degrees of freedom when doing these calculations was (m-1)(n-1) = 2, which at this point made me think the null hypothesis was rather that if you are a liker, indifferent or a non-liker, there is equal probability that you are a man or a woman.
We're using a 90% confidence level, so all I imagined we needed to do was to sum over all the 6 chi values, and compare that with a critical value given by the degrees of freedom and the confidence interval. From that point we could say with 90% certainty that men and women were equally as likely to be a liker, etc... or reject the N.H.
This is what the code does instead:
1. It uses 1 degree of freedom instead of 2 (still at 90%), so we have a new critical value 2.706
2. For each row (liker, etc...) of the chi value matrix, if an element is greater than the critical value reject the null hypothesis, and add the element to a 'significance' list.
To illustrate, it looks at [likers;men] > cv i.e. chi_value[0][0] > cv, if that is true, reject N.H., and add 'men' to the list.
On the chart this result is reflected as: men are more likely to be likers. For me this single evaluation of men and women for each row seems wrong. It doesn't make sense to make pronouncements about two variables when you're only looking at one...
I am not nearly as smart as my boss, but I feel like something has gone wrong here and I would appreciate it if someone could help clarify this.
Lastly the client has asked to know the % more likely men are to be likers than women -- I think this is an erroneous request, as a chi square test does not address questions of which is greater or smaller, but only serves to confirm that a set of variables are independent. Am I right?
I just want to add, that I used the following statement to guide my thinking:
Cautionary Note It is important to keep in mind that the chi-square test only tests whether two variables are independent. It cannot address questions of which is greater or less. Using the chi-square test, we cannot evaluate directly the hypothesis that men are likers more than girls; rather, the test (strictly speaking) can only test whether the two variables Like and Gender, are independent or not.
It appears that you are first doing an omnibus test (Chi square test for independence) with 2 df to determine if the "like status" and "gender" are independent or not. And then you are doing post-hoc tests on the individual rows (Chi square goodness of fit tests) to see if the males/females are equally likely under each row. According to This Link under the section "Post Hoc Follow-up Tests", these post-hoc tests are allowable. Each row would generate a Chi square test with 1 df. They would test, for instance "Ho: men and women 'are likers' at the same rate", for each row.
However, I am leery that no adjustment was made for multiple comparisons. Since it appears you are doing three of these 1 df tests, you should adjust your $\alpha$ to correct the familywise error rate (Bonferroni correction for instance).
If your client wants to know how much more likely men are to be a "liker", etc. you could (a), provide a point estimate based on your data as Peter Flom suggested, or (b) you could construct a CI for the difference between the two proportions if you want an interval estimate. Along with the statement that the difference is significant (or not significant), my guess is that a point estimate would suffice for your clients.
Other than the problem with not controlling the familywise error rate, the analysis seems adequate to me. I hope this helps.
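For reference (my sketch, not code from the original answer), SciPy runs the omnibus test on the observed table directly:

from scipy.stats import chi2_contingency

observed = [[54, 46], [23, 26], [22, 31]]
chi2, p, dof, expected = chi2_contingency(observed)
print(chi2, p, dof)  # dof = 2 for this 3x2 table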
• So I do an omnibus test to test that "like, ind', non-liker" are independent with gender. If they are not independent with gender, I do the Post Hoc Follow-up Tests for each row, and use Bonferroni correction. This will tell me what exactly? I didn't quite understand. But this is very helpful. – dominic Dec 24 '13 at 10:43
• That is correct. The post hoc tests will individually test whether "liker" is equally seen in males and females; same for "ind" and "nonliker". It's basically drilling into the data to find out which rows are showing significance difference between genders. – Underminer Dec 24 '13 at 15:26
• Thanks for your help. So I did the omnibus as an independence test, the post hoc tests allow me to test for goodness of fit for each row. I used k = r!/2!(r-1)! * c!/2!(c-1)! and my new alpha become a = a/k. If I rejected Ho, I reported that sex influences the response, and gave a point estimate, but did not state that men were more likely than women (or visa versa). In the case of accepting the null hypothesis, should I inspect the p value, just to make sure the test statistic is acceptable for say 0.05 ? – dominic Dec 27 '13 at 12:47
The portion after "this is what the code does instead" seems off, although it is hard to tell.
The client's request is reasonable. It isn't answered by chi-square, but it still a reasonable request. The proportion of men who liked it is 54/99 = about 54%, of women it is 46/103 = about 46% (you can calculate the exact values) so the difference is about 8%.
The chi-square reported here is about two variables: Liking and sex. Specifically, it looks at whether they are associated. Given that one variable is ordinal, there are more powerful tests that regular chi-square.
• Hi, the point estimate you are talking about does not give you that "men are x% more likely than women to like Product P". It gives you "Men liked this product 8% more than Women". If there is a significance between Gender and Liking, the chi test will only address this, and only this - i.e. they the variable are not independent. As to which gender is significantly more likely to to be Liker, that is something that may require other tests? Am I right? – dominic Dec 27 '13 at 7:14
• The language around % differences gets confusing. – Peter Flom Dec 27 '13 at 12:29
• Although somebody is repeatedly flagging the preceding comment, I see nothing in the least offensive or incorrect about it and therefore have been dismissing the flags, which I will continue to do if they recur. – whuber Dec 31 '13 at 15:17
• Thanks for your help Peter, could I ask you to explain what you mean with a little more detail? – dominic Jan 13 '14 at 13:00
• Sure. Let's say, for instance, that in 2012 20% of respondents say "Yes" to some question. In 2013, 25% say "yes" to the same question. Is that a 5% improvement (25-20)? Or is it a 25% improvement ((25-20)/20)? Or possibly it's 6.25% ((25-20)/80)? – Peter Flom Jan 13 '14 at 13:22 | 2019-08-19 01:53:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5184769034385681, "perplexity": 981.3020827796387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314638.49/warc/CC-MAIN-20190819011034-20190819033034-00033.warc.gz"} |
https://piping-designer.com/index.php/disciplines/electrical/3152-magnetic-permeability | # Magnetic Permeability
Written by Jerry Ratzlaff on . Posted in Electrical Engineering
Magnetic permeability, abbreviated as $$\mu$$, also called permeability, is a measure of how much magnetic flux a material supports, that is, how easily the material becomes magnetized in an applied magnetic field.
## Magnetic Permeability FORMULA
$$\large{ \mu = \frac{B}{H} }$$
### Where:
$$\large{ \mu }$$ (Greek symbol mu) = permeability
$$\large{ H }$$ = magnetic field strength of the applied field (also called magnetic intensity)
$$\large{ B }$$ = magnetic flux density, the total field inside the material
$$\large{ M }$$ = magnetization, the field contribution created by the material
$$\large{ X }$$ = magnetic susceptibility
### Solve For:
$$\large{ B = \mu \; H }$$ $$\large{ M = X \; H }$$
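A quick numeric illustration (the values here are invented): for a flux density of 1.2 T produced by a field strength of 500 A/m,

B = 1.2    # magnetic flux density, tesla
H = 500.0  # magnetic field strength, A/m
mu = B / H
print(mu)  # 0.0024 T·m/A, i.e. henries per meter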
Tags: Magnetic Equations | 2022-10-05 15:05:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8051102161407471, "perplexity": 4360.121511948384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00053.warc.gz"} |
http://html.rhhz.net/yykj/html/201911014.htm | «上一篇
文章快速检索 高级检索
Applied Science and Technology, 2020, Vol. 47, Issue (3): 87-93. DOI: 10.11991/yykj.201911014
### Cite this article
ZHAO Junli. Soft switching analysis and experiment of isolated three-port converter[J]. Applied Science and Technology, 2020, 47(3): 87-93. DOI: 10.11991/yykj.201911014.
### Article history
Soft switching analysis and experiment of isolated three-port converter
ZHAO Junli
Beijing Research Institute of Mechanical and Electrical Technology, Beijing 100083, China
Abstract: In order to improve the conversion efficiency and power density of the isolated three-port converter and reduce the volume of the power converter, this paper analyzes the conditions and range of soft turn-on of the isolated three-port converter. Analytical expressions for soft switching of the full-bridge switches at each port are derived under phase-shift modulation, and the correctness of the theoretical analysis and of the simulation results is verified on a hardware test circuit of the isolated three-port converter built in the laboratory. The simulation and experimental results agree with the theoretical analysis, which aids design and parameter optimization when a practical system is developed.
Keywords: three-port converter electrical isolation voltage matching soft switching conversion efficiency double active bridge modulation strategy
1 Topology and soft-switching analysis
v1为参考,v2(v2')和v3(v3')与v1之间的移相角分别为φ12φ13,它们之间的关系可以用图1(c)来表示。图1(a)中的L1L2L3分别为3个端口与变压器绕组串联的电感,经变换后得到在图1(b)所示的△连接的等效电路中的L12L13L32,可表示为
$\left\{ {\begin{array}{*{20}{l}} {{L_{12}} = {L_1} + {L_2}^\prime + {{{L_1}{L_2}^\prime } / {{L_3}^\prime }}} \\ {{L_{32}} = {L_2}^\prime + {L_3}^\prime + {{{L_2}^\prime {L_3}^\prime } / {{L_1}}}} \\ {{L_{13}} = {L_3} + {L_1} + {{{L_1}{L_3}^\prime } / {{L_2}^\prime }}} \end{array}} \right.$
$\left\{ {\begin{array}{*{20}{c}} {{L_2}^\prime = {{N_1^2L_2^{}} / {N_2^2}}} \\ {{L_3}^\prime = {{N_1^2L_3^{}} / {N_3^2}}} \end{array}} \right.$
${i_{12}}\left( t \right) = \left\{ {\begin{array}{*{20}{l}} {\dfrac{{{V_{1{\rm{r}}}} + {V_{2{\rm{r}}}}}}{{{L_{12}}}}\left( {t - {t_0}} \right) + {i_{12}}\left( {{t_0}} \right)\;,\;\;{t_0} \leqslant t < {t_2}} \\ {\dfrac{{{V_{1{\rm{r}}}} - {V_{2{\rm{r}}}}}}{{{L_{12}}}}\left( {t - {t_2}} \right) + {i_{12}}\left( {{t_1}} \right)\;,\;\;{t_2} \leqslant t < {t_3}} \end{array}} \right.$ (1)
${i_{13}}\left( t \right) = \left\{ {\begin{array}{*{20}{l}} {\dfrac{{{V_{1{\rm{r}}}} + {V_{3{\rm{r}}}}}}{{{L_{13}}}}\left( {t - {t_0}} \right) + {i_{13}}\left( {{t_0}} \right)\;,\;\;{t_0} \leqslant t < {t_1}} \\ {\dfrac{{{V_{1{\rm{r}}}} - {V_{3{\rm{r}}}}}}{{{L_{13}}}}\left( {t - {t_1}} \right) + {i_{13}}\left( {{t_1}} \right)\;,\;\;{t_1} \leqslant t < {t_3}} \end{array}} \right.$ (2)
$\left\{\begin{aligned} i_{12}(t_3) &= -i_{12}(t_0) \\ i_{13}(t_3) &= -i_{13}(t_0) \end{aligned}\right.$ (3)
$\left\{\begin{aligned} i_{12}(t_0) &= -\dfrac{(V_{1\mathrm{r}} - V_{2\mathrm{r}})\pi + 2 V_{2\mathrm{r}} \varphi_{12}}{4\pi f_{\mathrm{S}} L_{12}} \\ i_{12}(t_2) &= \dfrac{(V_{2\mathrm{r}} - V_{1\mathrm{r}})\pi + 2 V_{1\mathrm{r}} \varphi_{12}}{4\pi f_{\mathrm{S}} L_{12}} \end{aligned}\right.$
$\left\{\begin{aligned} i_{13}(t_0) &= -\dfrac{(V_{1\mathrm{r}} - V_{3\mathrm{r}})\pi + 2 V_{3\mathrm{r}} \varphi_{13}}{4\pi f_{\mathrm{S}} L_{13}} \\ i_{13}(t_1) &= \dfrac{(V_{3\mathrm{r}} - V_{1\mathrm{r}})\pi + 2 V_{1\mathrm{r}} \varphi_{13}}{4\pi f_{\mathrm{S}} L_{13}} \end{aligned}\right.$
$\left\{\begin{aligned} i_1 &= i_{12} + i_{13} \\ i_2' &= -i_{32} - i_{12} \\ i_3' &= -i_{13} + i_{32} \end{aligned}\right.$
$i_1(t_3) = i_{12}(t_3) + i_{13}(t_3) > 0$
$\dfrac{(V_{1\mathrm{r}} - V_{2\mathrm{r}})\pi + 2 V_{2\mathrm{r}} \varphi_{12}}{4\pi f_{\mathrm{S}} L_{12}} + \dfrac{(V_{1\mathrm{r}} - V_{3\mathrm{r}})\pi + 2 V_{3\mathrm{r}} \varphi_{13}}{4\pi f_{\mathrm{S}} L_{13}} > 0$ (4)
$\left\{\begin{aligned} i_2'(t_5) &= -i_{32}(t_5) - i_{12}(t_5) > 0 \\ i_3'(t_4) &= -i_{13}(t_4) + i_{32}(t_4) > 0 \end{aligned}\right.$
$\left\{\begin{aligned} &\dfrac{(V_{2\mathrm{r}} - V_{3\mathrm{r}})\pi + 2 V_{3\mathrm{r}} \varphi_{32}}{4\pi f_{\mathrm{S}} L_{32}} + \dfrac{(V_{2\mathrm{r}} - V_{1\mathrm{r}})\pi + 2 V_{1\mathrm{r}} \varphi_{12}}{4\pi f_{\mathrm{S}} L_{12}} > 0 \\ &\dfrac{(V_{3\mathrm{r}} - V_{1\mathrm{r}})\pi + 2 V_{1\mathrm{r}} \varphi_{13}}{4\pi f_{\mathrm{S}} L_{13}} + \dfrac{(V_{3\mathrm{r}} - V_{2\mathrm{r}})\pi + 2 V_{2\mathrm{r}} \varphi_{32}}{4\pi f_{\mathrm{S}} L_{32}} > 0 \end{aligned}\right.$ (5)
$\left\{\begin{aligned} &[(1 - d_{12})\pi + 2 d_{12} \varphi_{12}] + [(1 - d_{13})\pi + 2 d_{13} \varphi_{13}] > 0 \\ &[(d_{12} - 1)\pi + 2 \varphi_{12}] + [(d_{12} - d_{13})\pi + 2 d_{13} \varphi_{32}] > 0 \\ &[(d_{13} - 1)\pi + 2 \varphi_{13}] + [(d_{13} - d_{12})\pi + 2 d_{12} \varphi_{32}] > 0 \end{aligned}\right.$ (6)
$\left\{\begin{aligned} d_{12} &< \dfrac{2\pi + d_{13}(2\varphi_{13} - \pi)}{\pi - 2\varphi_{12}} \\ d_{12} &> \dfrac{\pi - 2\varphi_{12} + \pi d_{13} - 2 d_{13}(\varphi_{12} - \varphi_{13})}{2\pi} \\ d_{12} &< \dfrac{2\varphi_{13} - \pi + 2\pi d_{13}}{\pi - 2\varphi_{12} + 2\varphi_{13}} \end{aligned}\right.$ (7)
$\left\{\begin{aligned} d_{13} &< \dfrac{2\pi - d_{12}(\pi - 2\varphi_{12})}{\pi - 2\varphi_{13}} \\ d_{13} &< \dfrac{2 d_{12}\pi - (\pi - 2\varphi_{12})}{\pi - 2(\varphi_{12} - \varphi_{13})} \\ d_{13} &> \dfrac{(\pi - 2\varphi_{13}) + d_{12}(\pi - 2(\varphi_{12} - \varphi_{13}))}{2\pi} \end{aligned}\right.$
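As a numerical aid (not part of the original derivation), the following sketch evaluates the normalized soft-switching conditions (6) over the (d12, d13) plane. It assumes d12 = V2r/V1r, d13 = V3r/V1r, equal equivalent inductances L12 = L13 = L32, and φ32 = φ12 − φ13, which is the normalization implicit in (6); the phase-shift values are illustrative.

```python
import numpy as np

def zvs_ok(d12, d13, phi12, phi13):
    """Check the three normalized ZVS conditions of eq. (6); angles in radians.
    Assumes d12 = V2r/V1r, d13 = V3r/V1r, equal branch inductances, and
    phi32 = phi12 - phi13 (sketch assumptions, not taken from the paper)."""
    pi = np.pi
    phi32 = phi12 - phi13
    c1 = (1 - d12) * pi + 2 * d12 * phi12 + (1 - d13) * pi + 2 * d13 * phi13 > 0  # port 1
    c2 = (d12 - 1) * pi + 2 * phi12 + (d12 - d13) * pi + 2 * d13 * phi32 > 0      # port 2
    c3 = (d13 - 1) * pi + 2 * phi13 + (d13 - d12) * pi + 2 * d12 * phi32 > 0      # port 3
    return c1 & c2 & c3

# Map the soft-switching region for phi12 = 30 deg, phi13 = 15 deg:
d = np.linspace(0.5, 1.5, 201)
D12, D13 = np.meshgrid(d, d)
region = zvs_ok(D12, D13, np.deg2rad(30), np.deg2rad(15))
print(f"ZVS achieved on {region.mean():.1%} of the sampled (d12, d13) grid")
```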
2 Simulation Verification
2.1 Simulation Model
2.2 Simulation Process
3 Experimental Verification
https://gateoverflow.in/203307/cmi2017-b-1
Let Σ = {a, b, c}. Let L$_{even}$ be the set of all even-length strings in Σ*.
(a) Construct a deterministic finite state automaton for L$_{even}$.
(b) We consider an operation Erase$_{ab}$ that takes as input a string w ∈ Σ* and erases all occurrences of the pattern ab from w. Formally, it can be defined as follows:
Erase$_{ab}$(w) := $\left\{\begin{matrix} w, & \text{if } w \text{ does not contain the pattern } ab \\ Erase_{ab}(w_{1})\,Erase_{ab}(w_{2}), & \text{if } w = w_1\,ab\,w_2 \text{ for some } w_1, w_2 \in \Sigma^* \end{matrix}\right.$
For instance, Erase$_{ab}$(cacb) = cacb, Erase$_{ab}$(cabcbab) = ccb and Erase$_{ab}$(ab) = $\epsilon$.
For a language L, we define $Erase_{ab}(L)$ to be the set of strings obtained by applying the $Erase_{ab}$ operation to each string in L:
Erase$_{ab}$(L) := { Erase$_{ab}$(w) | w $\in$ L }
Show that Erase$_{ab}$(L$_{even}$) is a regular language.
[ Official answer by CMI ]
(a) L$_{even}$ can be recognized by an automaton with two states {q0, q1}, where q0 is both the initial and a final state. On each input letter, the automaton switches between q0 and q1. An odd-length input will take the automaton to q1 and an even-length input will take the automaton to q0.
(b) Erase$_{ab}$(L$_{even}$) is the set of all even-length strings which do not contain ab. It is easy to construct a nondeterministic automaton with three states {q0, q1, q2} for the language L$_{ab}$ consisting of all strings containing ab. Here, q0 is the initial state and q2 is the final state. There is a self loop on {a, b, c} at both q0 and q2, and there are transitions q0 $\overset{a}{\rightarrow}$ q1 and q1 $\overset{b}{\rightarrow}$ q2. Since L$_{ab}$ is regular, so is its complement $\overline{L_{ab}}$, the language of all strings without ab. Erase$_{ab}$(L$_{even}$) is the intersection of $\overline{L_{ab}}$ with L$_{even}$.
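A minimal Python sketch of the Erase$_{ab}$ operation as literally defined above; splitting at the leftmost occurrence of ab is an assumption, since the definition does not fix one decomposition.

```python
def erase_ab(w: str) -> str:
    """Erase_ab as defined above: if w contains 'ab', split w = w1 + 'ab' + w2
    and recurse on both halves; otherwise return w unchanged."""
    i = w.find("ab")            # leftmost occurrence (a fixed decomposition choice)
    if i == -1:
        return w                # base case: no occurrence of the pattern
    return erase_ab(w[:i]) + erase_ab(w[i + 2:])

# The three examples from the statement:
assert erase_ab("cacb") == "cacb"
assert erase_ab("cabcbab") == "ccb"
assert erase_ab("ab") == ""

# Each recursive step removes exactly two symbols, so length parity is
# preserved: images of even-length strings are even-length, as used in (b).
assert all(len(erase_ab(w)) % 2 == len(w) % 2 for w in ["cacb", "cabcbab", "ab"])
```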
https://dsp.stackexchange.com/questions/63888/how-to-verify-if-my-cross-correlation-algorithm-works | # How to verify if my cross-correlation algorithm works?
So I have a signal generator which can produce two separate signals.
I use a spectrum analyzer to read these signals, and can interface with Python.
I wrote Python code which takes in these two signals, performs the FFT of both, takes the complex conjugate of one of them, and multiplies them together, which should give me the cross-correlated frequency spectrum?
My problem is, I want some validity checks to see if this works or not. The main objective is noise floor reduction, but I am not sure how much the noise should drop by, etc. This signal generator is connected to the spectrum analyzer using BNC coaxial cables and that is it. I am sending in simple sinusoidal waves of frequency 2kHz.
There are two issues / concerns with this approach, in that you may not be getting what you want. The primary one is the equivalent of post-detection averaging by using the post-processed results from a spectrum analyzer. The second is that the result of your complex-conjugate-multiplied FFT is actually in the time domain, since you started in the frequency domain (the FFT processing is actually converting to the time domain, where you perform the multiplication; I suspect, but am not certain, that your goal is to observe the frequency-domain spectrum after cross-correlation of your signals).
Most significantly, spectrum analyzers "detect" the signal in that the result is a dB magnitude, and, the dB conversion aside (which could easily be undone), all phase information is lost, such that any processing gain through coherent averaging can no longer be achieved. This is equivalent to the difference between adjusting the resolution bandwidth (RBW) versus the video bandwidth (VBW) of the spectrum analyzer. Since you are already using a spectrum analyzer, playing with those knobs will actually provide a great demonstration of pre-detection versus post-detection averaging, which will give you much further insight into what you are trying to accomplish. For that reason, I will first go into that in more detail, starting with a simple functionally equivalent block diagram of a spectrum analyzer:
In this simplified but functionally equivalent view of a spectrum analyzer, an input signal (in this diagram a 70 MHz tone) is frequency translated by the voltage controlled oscillator (VCO) and mixer to be swept through the bandpass filter with resolution bandwidth RBW. It may be helpful to envision the sweep rate as being extremely slow in comparison to the inverse of the filter bandwidth, such that the operation for explanation here is similar to stepping the VCO through each frequency one at a time and statically processing each result (pixel on the display) before stepping to the next frequency. This avoids getting into the constraints on the sweep rate which exist in practice. Thus at any given moment, the power detector is converting the entire power that is within the bandwidth RBW to a power level. This power level passes through a low pass filter with bandwidth VBW, is converted to dB, and is then used to control the vertical position of the pixel on the screen representing the power level of our signal under test. Similarly, the ramp rate controls the horizontal position representing the frequency at that moment in time.
By adjusting the bandwidth RBW we reduce the total power that is presented to the power detector. If the bandpass filter is centered on our tone (in this case 70 MHz), the power of the tone dominates and we would see no change in the power detector output regardless of the bandwidth of RBW, as long as the filter remained centered on our signal under test and the signal under test was a pure tone. In this case the apparent bandwidth of that signal, as displayed in the diagram above, is actually the bandwidth of RBW, shown as the single tone is swept through our filter during the horizontal trace. This would be very apparent by adjusting RBW and observing the result. More notably, you would see the noise floor everywhere else go down by $10\log(N)$ dB, where N is the factor by which RBW is decreased (if you halve RBW, the noise floor goes down by 3 dB). I believe this is a direct demonstration of the noise floor reduction you seek, to the extent that cross-correlation is mathematically similar to averaging; in this case you would observe that the signal level remains unchanged. This is the result of "pre-detection" averaging, which is exactly what a bandpass filter does (with the averaging operation mathematically translated to the center frequency of the filter). In the plot shown in the figure above, the noise floor is approximately -90 dBm. Both the noise floor and the width of the tone suggest that the RBW is about 1 MHz, so I will assume that was the case. If we adjusted RBW to 500 kHz, we would see the noise floor drop to -93 dBm while the peak of the tone would remain unchanged.
Similarly if you adjust the bandwidth of the lowpass filter (VBW), you are simply smoothing (averaging) the noise power already measured but not decreasing it! In the plot above we would see the "noise on the noise" get smoother, but it will remain at the -90 dBm level. Thus we are simply averaging the result of the noise without reducing it.
To accomplish what you want, I would consider cross-correlating the signals directly in the time domain. The complex conjugate multiplication of the FFT of these time domain signals would then represent the spectrum of the cross correlation, if that is what was ultimately desired.
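A minimal numpy sketch of that check (the sample rate, tone frequencies and noise level are illustrative, not taken from the question): it verifies that the conjugate product of two FFTs equals the FFT of the circular cross-correlation, and shows that a component present in only one signal (the 3 kHz tone discussed in the comments below) is suppressed in the cross-spectrum.

```python
import numpy as np

fs = 48_000                                   # sample rate in Hz (illustrative)
n = 1024
t = np.arange(n) / fs
rng = np.random.default_rng(0)

# Signal A: clean 2 kHz tone.  Signal B: 2 kHz + 3 kHz mix plus noise.
a = np.sin(2 * np.pi * 2000 * t)
b = np.sin(2 * np.pi * 2000 * t) + np.sin(2 * np.pi * 3000 * t) \
    + 0.5 * rng.standard_normal(n)

# Cross-spectrum: FFT of one signal times the conjugate FFT of the other.
cross_spectrum = np.fft.fft(a) * np.conj(np.fft.fft(b))

# Correlation theorem: this equals the FFT of the circular cross-correlation,
# so the inverse FFT recovers the time-domain cross-correlation.
xcorr_freq = np.fft.ifft(cross_spectrum).real
xcorr_direct = np.array([np.sum(a * np.roll(b, k)) for k in range(n)])
print(np.allclose(xcorr_freq, xcorr_direct))  # True

# Since A has essentially no 3 kHz content, the cross-spectrum is small there:
f = np.fft.fftfreq(n, 1 / fs)
mag = np.abs(cross_spectrum)
print(f"|X| near 2 kHz: {mag[np.argmin(np.abs(f - 2000))]:.1f}, "
      f"near 3 kHz: {mag[np.argmin(np.abs(f - 3000))]:.1f}")
```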
• Just your last paragraph: are you suggesting that I cross-correlate the two signals in the time domain first, before I FFT them and complex conjugate multiply them? – anony Feb 14 '20 at 14:32
• My goal is to observe the frequency spectrum and how it changes. E.g., I am sending in a signal A at 2 kHz, and a signal B which mixes 2 kHz and 3 kHz. If I take the FFT of the time-domain data of these two signals and complex conjugate multiply them, this should result in the spectrum after cross-correlation? So in this case, the 3 kHz is almost like it has been attenuated? – anony Feb 14 '20 at 14:34
• Yes that is correct – Dan Boschen Feb 14 '20 at 14:59
• But in your earlier comment - the result of doing a complex conjugate multiply of the FFT of each time-domain signal IS the FFT of the (circular) cross-correlation, so you simply need to do that multiplication after taking the FFT; the cross-correlation is already done in that process – Dan Boschen Feb 14 '20 at 15:01
• That I understand, so the result is the FFT of the cross correlation. This means, according to my second comment, the 3 kHz peak should not be visible? In my code, I still see peaks at 2kHz and 3kHz – anony Feb 14 '20 at 15:26
https://www.zbmath.org/?q=an%3A0799.41015 | zbMATH — the first resource for mathematics
Müntz systems and orthogonal Müntz-Legendre polynomials. (English) Zbl 0799.41015
Summary: The Müntz-Legendre polynomials arise by orthogonalizing the Müntz system $$\{x^{\alpha_ 0}, x^{\alpha_ 1}, \dots\}$$ with respect to Lebesgue measure on [0,1]. In this paper, differential and integral recurrence formulae for the Müntz-Legendre polynomials are obtained. Interlacing and lexicographical properties of their zeros are studied, and the smallest and largest zeros are universally estimated via the zeros of Laguerre polynomials. The uniform convergence of the Christoffel functions is proved equivalent to the nondenseness of the Müntz space on [0,1], which implies that in this case the orthogonal Müntz-Legendre polynomials tend to 0 uniformly on closed subintervals of [0,1). Some inequalities for Müntz polynomials are also investigated; most notably, a sharp $$L^ 2$$ Markov inequality is proved.
MSC:
41A17 Inequalities in approximation (Bernstein, Jackson, Nikol'skiĭ-type inequalities)
42C05 Orthogonal functions and polynomials, general theory of nontrigonometric harmonic analysis
30C15 Zeros of polynomials, rational functions, and other analytic functions of one complex variable (e.g., zeros of functions with bounded Dirichlet integral)
39A10 Additive difference equations
https://math.stackexchange.com/questions/1714293/how-to-compute-this-integral-with-contour-integration | # How to compute this integral with contour integration?
Consider the function
$$g(z)=\dfrac{e^{izt}\phi(z)}{z},$$
where $\phi$ is a $C^\infty$ function. I want to compute the integral
$$I=\int_{-\infty}^{\infty}\dfrac{e^{ixt}\phi(x)}{x}dx,$$
where $t$ is a parameter which we can consider positive. For that I know that
$$I=\lim_{R\to \infty}\int_{-R}^{R}g(z)dz,$$
so that we can use contour integration. The problem is that in this case the pole is along the path over which we need to integrate. The usual way I've seen to deal with this is to consider the following paths:
1. The interval $[-R,-\eta]$ being $\eta > 0$.
2. The semicircle $C_{\eta}$ of radius $\eta$ centered at the origin, so that we traverse it starting at $-\eta$ and going all the way to $\eta$.
3. The interval $[\eta,R]$,
4. The semicircle $C_R$ of radius $R$ centered at the origin, so that we complete the loop.
We then have that inside the enclosed region there are no singularities of $g$, so the integral over this whole path is zero:
$$\int_{-R}^{-\eta}g(z)dz+\int_{C_\eta}g(z)dz+\int_{\eta}^Rg(z)dz+\int_{C_R}g(z)dz=0$$
If we then let $R\to \infty$, we can see (by a Jordan's lemma type estimate, assuming $\phi$ is suitably bounded) that the integral over $C_R$ goes to zero if $t>0$, and thus we get
$$\int_{-\infty}^{-\eta}g(z)dz+\int_{\eta}^\infty g(z)dz=-\int_{C_\eta}g(z)dz,$$
and now we just need to let $\eta\to 0$ to get what we want. The problem lies in computing the integral over $C_\eta$. We can parametrize the path as $z = \eta e^{i\theta}$, but then
$$\int_{C_\eta}g(z)dz=\int_{\pi}^{0}ie^{i(\eta \cos \theta + i\eta \sin \theta)t}d\theta,$$
but now I have no idea what to do with this integral.
My question is: is my approach to this integration correct? If so, how do I compute this last integral? If not, how should I compute $I$ using contour integration?
• 1) In your last integral, $g(\eta e^{i\Theta})$ is missing. 2) Because $\eta$ is infinitesimally small, you might use a zeroth-order Taylor expansion of the integrand. Can you conclude? – tired Mar 26 '16 at 14:37
Check the corollary to the lemma in the most upvoted answer here: Definite integral calculation with poles at 0 and $\pm i\sqrt{3}$
$$\left.\text{Res}\left(\frac{e^{izt}\phi(z)}z\right)\right|_{z=0}=\lim_{z\to 0} e^{izt}\phi(z)=\phi(0)$$
$$\lim_{\eta\to 0}\int_{C_\eta}g(z)\,dz=-i\pi\phi(0)$$
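For completeness, this limit can be computed directly from the parametrization above (the zeroth-order Taylor argument suggested in the comments): with $z=\eta e^{i\theta}$,

$$\int_{C_\eta}g(z)\,dz=\int_{\pi}^{0}i\,e^{i\eta e^{i\theta}t}\,\phi(\eta e^{i\theta})\,d\theta\;\longrightarrow\;\int_{\pi}^{0}i\,\phi(0)\,d\theta=-i\pi\phi(0)\quad\text{as }\eta\to 0,$$

and combining this with the relation between the principal value and the semicircle integral derived in the question gives $I=i\pi\phi(0)$.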
https://economics.stackexchange.com/questions/42661/does-modern-monetary-theory-mmt-provide-a-useful-insight-into-how-to-manage-th/42664 | # Does Modern Monetary Theory (MMT) provide a useful insight into how to manage the economy?
According to advocates of Modern Monetary Theory (MMT), the primary risk once the economy reaches full employment is inflation, which can be addressed by gathering taxes to reduce the spending capacity of the private sector. Deflation is not seen as a risk because the Government of any country with sovereign control of its own internationally accepted currency can print money to create as much inflation as it wants at will (depending on who receives that money).
It appears to me that this must be true to some extent, but my concern is that the feedback mechanism where the extra money generates inflation might not be smooth and free flowing for various reasons and a vast excess of money might accumulate in an unstable way in certain areas of the economy. After a period of time some trigger event might cause a sudden uncontrollable cascade flood of that money that could destroy public confidence and the currency itself. Is this an unfounded fear?
• I am in the MMT camp, but I fear this question would only generate opinion-based answers. Ideally, it should be re-phrased to be a technical question about MMT. Another issue is that your characterisation about money printing is absolutely counter to MMT views on inflation. Embedding that within the question makes it harder to get a good answer. Feb 20 at 22:36
• @Brian Romanchuk, I'm certainly not an expert so I probably haven't phrased it very well. If you could elaborate I will see if I can improve the question. I'm slightly confused, as I was under the impression that money printing causes or can cause inflation, no? And taxation should reduce it? All other things being equal, etc. Feb 20 at 23:37
• I gave you an answer, which discusses the body of your text. The title is too grandiose and open-ended, and invites opinion-based answers. Your question is really about how MMT relates money supply to inflation (which it doesn't). I would cut down the question to be something like "How does MMT view the relationship between money and inflation." Feb 21 at 2:28
• No doubt I have worded the question badly; however, I meant that excess wealth could build up in savings of those already better off, gold bullion, bitcoins and other liquid or relatively liquid but non-circulating assets that, given the right circumstances, might be released back into the consumer side of the economy suddenly, causing a wave of inflation. Presumably economists think they know in general terms where all of the "printed" money actually is or has ended up, so this is not an issue (bank reserves?). So perhaps such "areas of the economy" are not a concern as they are not large enough. Feb 21 at 14:21
Does Modern Monetary Theory (MMT) provide a useful insight into how to manage the economy?
That depends on your definition of MMT, because there is no general agreement on what it even is. You will find some arguing it is just a macro/monetary theory (such as the Wikipedia page), but I have also seen MMT proponents on this site arguing it is a whole new paradigm that encompasses all macroeconomics and microeconomics. In addition, MMT is not a theory in the way that is commonly understood in science; that is, as far as I know there is no rigorous, agreed-upon MMT model that is testable. This makes it difficult to address, because it is hard to criticize something that has no agreed-upon framework, as one can always simply respond to any criticism that it is not about the right interpretation of MMT.
However, this being said, the main tenets of MMT, as laid out by advocates such as Stephanie Kelton seem to include assertions that:
• debt and deficit for countries issuing their own currency does not matter.
• government can fund virtually arbitrary amounts of real spending through monetary financing.
This is rejected by conventional economists. In fact, an IGM poll among more than 40 top, mostly Ivy League, US policy economists shows that literally none of them agrees with the MMT propositions:
[IGM Forum poll chart: "Modern Monetary Theory"]
You can also have a look at critiques of MMT by other top economists such as Krugman, Rogoff, Summers or Mankiw (2019).
Consequently, at least when it comes to the above-mentioned tenets, conventional economists reject modern monetary theory and do not consider it useful. This is not to say that some policy recommendations might not be valid. For example, as far as I know many MMTers advocate more government spending and higher deficits - that is not necessarily a bad idea, especially in a recession, and it is a policy recommended by conventional macro models (e.g. see Mankiw Macroeconomics 8ed or Blanchard et al Macroeconomics: a European Perspective) - but most conventional economists would still worry about deficits getting too far out of hand and would caution against too much monetary financing (outside perhaps liquidity traps). Likewise, most conventional economists and conventional macro models would argue it is not possible to control inflation through taxes while simultaneously stimulating the economy with monetary-financed spending; in most macro models, to avoid inflation you would have to raise taxes by an amount exactly offsetting the effect of the government spending (see Romer Advanced Macroeconomics), and to the extent the government uses distortionary taxes (e.g. non-lump-sum taxes) it would leave the economy worse off.
However, as stated at the beginning it is not clear what MMT is. For example, as pointed in the comments your description of MMT is quite different from what many MMT proponents asserts. For example, you state:
Government of any country with sovereign control of its own internationally accepted currency can print money to create as much inflation as it wants at will (depending on who receives that money).
But as far as I know most MMTers reject the idea that inflation is caused by expansion of money supply and rather argue inflation is consequence of different factors (such as institutional factors, or lack of competition that allows firms to hike prices and so forth). For example, according to the recently published 'MMT textbook' by the most public proponents of MMT, Mitchell, Wray, and Watts:
“Conflict theory situates the problem of inflation as being intrinsic to the power relations between workers and capital (class conflict), which are mediated by government within a capitalist system.”
It appears to me that this must be true to some extent, but my concern is that the feedback mechanism where the extra money generates inflation might not be smooth and free flowing for various reasons and a vast excess of money might accumulate in an unstable way in certain areas of the economy. After a period of time some trigger event might cause a sudden uncontrollable cascade flood of that money that could destroy public confidence and the currency itself. Is this an unfounded fear?
This is difficult to address because it is not clear what you are trying to say here. For example, it is not clear how money could "accumulate in certain areas of the economy" (whatever that is even supposed to mean - people from selected sectors hoarding money under mattresses?) or what is meant by a 'flood of money' - once money circulates it is already in the economy and affects the economy.
However, this being said, there are many prominent economists who argue that following policies advocated by MMTers such as Kelton or Mitchell, Wray, and Watts would eventually, after some time, lead to excessive inflation (see again the articles by the critics in the previous part of the answer). There are also empirical examples of countries which attempted to excessively monetize their debt and fund spending via monetary expansion, where this led to such high inflation that in the end it caused currency substitution (e.g. see Kamin & Ericsson (2003) on the Argentinian case), but I think it is more likely most developed countries would simply reverse course and change policy before going that far.
• @BrianRomanchuk 1. I would not call peer-reviewed research published in AEA Papers and Proceedings, as well as the leading peer-reviewed macroeconomic textbooks cited in the answer above, an 'opinion', but to each their own. 2. I literally cited Mitchell, Wray, and Watts: Macroeconomics... who literally claim that their book is advocating MMT... if you don't consider that MMT literature then fine, but then I wonder what counts as MMT literature to begin with – 1muflon1 Feb 21 at 2:28
• I missed the textbook citation, sorry. However, the cited text had almost no MMT content to it. This was exactly the situation with the Mankiw article, which only had a few out-of-context quotes from the text Feb 21 at 2:35
• MMT seems at best ill-defined and contentious. If sufficient money was printed and circulated widely enough it would cause inflation. If everyone in the UK were given £1,000,000 to spend, how could that fail to have an inflationary effect? But the more important question is whether the Government's deficit matters. Here I am less certain. When I hear people say there is no magic money tree, it seems to me that they are being simplistic. There is one; the only danger being that nobody knows quite how hard you can shake it before it comes up by the roots, destroying the currency in the process. Feb 21 at 14:33
• They are saying debt and deficit do not matter, but they aren't saying money supply doesn't matter, are they? And they are saying spending creates money supply. So, that's very different from being able to spend infinite quantities of money. It seems like the survey questions were loaded questions with respect to MMT May 4 at 20:27
• @user253751 those poll questions are literally based on the referenced article written by Kelton for Bloomberg and that other MMTer - as far as I can see almost any proponent of MMT claims something different and not consistent with other MMT proposals, but the questions were definitely not loaded – 1muflon1 May 4 at 21:27
According to advocates of MMT, the primary risk once the economy reaches full employment is inflation, which can be addressed by gathering taxes to reduce the spending capacity of the private sector.
This statement is in accord with MMT, and it can be traced back to the concept of Functional Finance. One could do a search for Abba Lerner's articles on Functional Finance to see the roots of the idea. There are theoretical differences between Functional Finance and MMT; L. Randall Wray's working paper #900 at the Levy Institute discusses this.
Deflation is not seen as a risk because the Government of any country with sovereign control of its own internationally accepted currency can print money to create as much inflation as it wants at will (depending on who receives that money).
MMT rejects the concept of “printing money” as is understood by mainstream economics, and certainly rejects Monetarist notions about a linkage between money supply and the price level. Instead, deflation is not a worry as fiscal policy can be loosened.
To add to the previous point: there is a difference between fiscal spending (e.g., handing households $1 trillion) versus "quantitative easing" (the central bank buying $1 trillion in existing bonds, which increases the money supply and reduces bonds held by the public). The difference should be obvious: handouts create an income flow, while the second is just a secondary-market financial transaction. MMT proponents argued that QE accomplishes almost nothing, saying that it is just a swap between two types of government liabilities. It should be noted that some neoclassical economists have made the same observation.
Since MMT proponents argue that QE accomplishes nothing, it is very hard to understand why some critics argue that MMT is just advocacy of “monetary financing.”
If we turn to page 343 of “Macroeconomics” by William Mitchell, L. Randall Wray, and Martin Watts (Wray & Mitchell are leading authorities on MMT, considered co-founders along with Warren Mosler) the authors state: “...when MMT says that government spends by keystrokes, this is a description, not a prescription. If critics were correct that government spending by printing money necessarily leads to high inflation or hyperinflation, then most developed nations would have at least high inflation, if not hyperinflation all the time because they all spend by keystrokes.”
It appears to me that this must be true to some extent, but my concern is that the feedback mechanism where the extra money generates inflation might not be smooth and free flowing for various reasons and a vast excess of money might accumulate in an unstable way in certain areas of the economy. After a period of time some trigger event might cause a sudden uncontrollable cascade flood of that money that could destroy public confidence and the currency itself. Is this an unfounded fear?
The issue is not money printing, but rather too-loose fiscal policy. The MMT argument is that you avoid this by structuring fiscal policy properly. For example, the key counter-cyclical policy proposed is a Job Guarantee. The Job Guarantee (JG) offers a job at a fixed wage. The JG wage acts as a de facto minimum wage and, so long as it is not increased, should not create upward pressure on wages. If the economy is seen to be overheating, workers will be bid away from the programme. This reduces the wages paid and increases income taxes - i.e., tightening fiscal policy automatically.
Arguably, bad policies - such as those pursued by mainstream Keynesian economists in the 1960s and 70s - can create inflationary pressure. Fiscal policy needs to be tightened to avoid inflationary pressures if regulatory measures are not enough.
The fear of a “cascade” probably refers to fear of hyperinflation. The textbook “Macroeconomics” explains in Chapter 21 how mainstream theories about hyperinflation are incorrect in that they blame monetary financing. Instead, if we look at real world cases, hyperinflation is typically the result of the impairment of real productive resources. In the case of Weimar, there were gold reparations as well the occupation of the Ruhr.
However, these questions are really about a couple of topics that are misunderstood or misrepresented by other economists. The question title asks whether MMT offers insights into managing the economy. To answer that requires switching to topics not discussed within the question. For example, the Job Guarantee has nothing to do with “monetary financing.”
The previously mentioned textbook would be a good starting point to learn about MMT, or there are many resources online.
• I can't see how sufficient money could fail to be inflationary regardless of the productive resource in existence or lack of it. In the extreme, if the Government credited everyone's bank account with a million pounds surely there would be an increase in inflation? Does MMT assume that even in this circumstance that there would be no inflation? Or is there a tacit assumption about reasonable limits? Feb 21 at 14:42
• That’s fiscal policy, not just a change in money supply. As seen in QE, central banks can replace bonds with money, and there are very few visible effects. I.e., handing out one trillion dollars to households is different than the central bank buying one trillion in bonds. This is why MMT economists do not frame this as “printing money,” that is largely an invention of MMT critics. Feb 21 at 15:03
• Does MMT not still agree that price level = money supply / output, but disagree that printing money doesn't change the output? May 4 at 20:34
• The usual equation is MV=PQ, which means that you are missing velocity. The standard heterodox argument is that velocity is not stable, so the MV=PQ relationship gives no useful information. May 6 at 10:37
• @Slarty From what I understand the limits of spending are the real economic capacities. If demand increases beyond what can be "produced" in the mid-term as a result of the government spending too much there will be inflation. MMT doesn't claim there aren't limits of spending without risking inflation but that the limits are in actual real world capacities rather than an arbitrary budget. There are different indicators of when the capacity is close to being used fully, one of them is full employment. If there is no more "workforce reserve" you are likely at capacity soon. May 7 at 17:01
https://euromathsoc.org/magazine/articles/99 | # Seeing the invisible: Digital holography
Ana Carpio
In recent years there has been increasing interest in developing mathematical and computational methods for digital holography. Holographic techniques furnish noninvasive tools for high-speed 3D live cell imaging. Holograms can be recorded in the millisecond or microsecond range without damaging samples. A hologram encodes the wave field scattered by an object as an interference pattern. Digital holography aims to create numerical images from digitally recorded holograms. We show here that partial differential equation constrained optimization, topological derivatives of shape functionals, iteratively regularized Gauss–Newton methods, Bayesian inference, and Markov chain Monte Carlo techniques provide effective mathematical tools to invert holographic data with quantified uncertainty. Holography set-ups are particularly challenging because a single incident wave is employed. Similar tools could be useful in inverse scattering problems involving other types of waves and different emitter/receiver configurations, such as microwave imaging or elastography, for instance.
## 1 Introduction
Experimental sciences have traditionally been a source of challenging mathematical problems with a double edge: while mathematical theories are created, technology moves fast and industry develops. Imaging sciences provide a remarkable example. Typical imaging systems, such as radar [28 S. W. McCandless and C. R. Jackson, Principles of synthetic aperture radar. In SAR Marine Users Manual, NOAA (2004) ], magnetic resonance tomography, ultrasound, echography [25 A. Maier, S. Steidl, V. Christlein, J. Hornegger, Medical imaging systems: An introductory guide. Springer (2018) ], and seismic imaging [34 J. Tromp, Seismic wavefield imaging of Earth’s interior across scales. Nature Reviews Earth & Environment 1, 40–53 (2020) ], pose inverse scattering problems with a similar mathematical structure. In all of them, waves generated by a set of emitters interact with a medium under study and the wave field resulting from the interaction is recorded at a set of receivers [10 D. Colton and R. Kress, Inverse acoustic and electromagnetic scattering theory. Appl. Math. Sci. 93, Springer, Berlin (1992) ]. Different imaging systems resort to different types of waves and arrange emitters and receivers according to varied geometries. The nature of the employed waves depends on factors such as the size of the specimens under study, the contrast between components, and the damage caused to the sample during the imaging procedure. Knowing the emitted and recorded waves, we aim to infer the structure of the medium.
Approximating the solutions of inverse scattering problems is a challenging task because such problems are severely ill posed [10 D. Colton and R. Kress, Inverse acoustic and electromagnetic scattering theory. Appl. Math. Sci. 93, Springer, Berlin (1992) ]. Given arbitrary data, the problem under study may not admit a solution, the solution may not be unique, or it may not depend continuously on the given data. This means that small errors may lead to a solution different from the one sought. In view of the relevant technological applications in a host of fields, such as medicine, security, geophysics, or materials testing, to mention a few, there is a need for even better mathematical techniques for classical imaging problems, as well as a need for new ideas to tackle new imaging set-ups.
We focus here on recent developments in digital holography, summarizing work done during the past 10 years in collaboration with experimentalists designing holographic microscopes. This collaboration started in 2012 thanks to the interdisciplinary communication environment created at the Harvard University’s Kavli Institute seminars. Since then, we have developed analytical and computational tools to handle inverse problems arising in digital holography, in collaboration with researchers from Harvard University and Tesla, Universidad Complutense de Madrid, Universidad Politécnica de Madrid, Universidad de Oviedo, Université de Technologie de Compiègne, and New York University.
Digital in-line holography is a noninvasive tool for accelerated three-dimensional imaging of soft matter and live cells [23 S. H. Lee, Y. Roichman, G. R. Yi, S. H. Kim, S. M. Yang, A. van Blaaderen, P. van Oostrum and D. G. Grier, Characterizing and tracking single colloidal particles with video holographic microscopy. Optics Express 15, 18275–18282 (2007) , 16 J. Fung, R. P. Perry, T. G. Dimiduk and V. N. Manoharan, Imaging multiple colloidal particles by fitting electromagnetic scattering solutions to digital holograms. J. Quant. Spectroscopy Radiative Transfer 113, 212–219 (2012) , 26 P. Marquet, B. Rappaz, P. J. Magistretti, E. Cuche, Y. Emery, T. Colomb and C. Depeursinge, Digital holographic microscopy: a noninvasive contrast imaging technique allowing quantitative visualization of living cells with subwavelength axial accuracy. Optics Letters 30, 468–478 (2005) , 37 A. Yevick, M. Hannel and D. G. Grier, Machine-learning approach to holographic particle characterization. Optics Express 22, 26884–26890 (2014) ] that achieves high spatial (nanometers) and temporal (microseconds) resolution without the need of toxic fluorescent markers or stains. In this context, a hologram is a two-dimensional light interference pattern encoding information about the optical and geometrical properties of a set of objects [35 T. Vincent, Introduction to holography. CRC Press (2012) ]. Shining a properly chosen light beam back through the hologram we can recreate the original three-dimensional image. Instead, digital holography is designed to produce numerical reconstructions of the objects in an automatic way, which amounts to solving computationally an inverse scattering problem. We will show next that optimization schemes with partial differential equation constraints, analysis of the topological derivative of objective functions, regularized Gauss–Newton iterations, and Bayesian inference are effective tools to invert holographic data in the presence of noise while quantifying uncertainty.
## 2 The forward problem
The forward problem is a mathematical model of how a hologram is generated. Figure 1 illustrates how an in-line hologram is formed, though more complicated set-ups are possible. First, a laser light beam interacts with a sample. Then, interference of the scattered light field with the undiffracted beam generates the hologram on a detector screen past the object [23 S. H. Lee, Y. Roichman, G. R. Yi, S. H. Kim, S. M. Yang, A. van Blaaderen, P. van Oostrum and D. G. Grier, Characterizing and tracking single colloidal particles with video holographic microscopy. Optics Express 15, 18275–18282 (2007) ]. The light wave field obeys the Maxwell equations. Typically, the emitted laser beams are time harmonic, that is, $\mathcal{E}_{\mathrm{inc}}(\mathbf{x},t)=\operatorname{Re}[e^{-\imath\omega t}{\mathbf{E}}_{\mathrm{inc}}(\mathbf{x})]$. The resulting wave field is also time harmonic, namely, $\mathcal{E}_{\Omega,\kappa}(\mathbf{x},t)=\operatorname{Re}[e^{-\imath\omega t}{\mathbf{E}}_{\Omega,\kappa}(\mathbf{x})]$, with complex amplitude $\mathbf{E}_{\Omega,\kappa}(\mathbf{x})$ governed by the stationary Maxwell equations. The resulting forward problem is
\begin{gathered}\begin{aligned} \operatorname{\mathbf{curl}}\biggl(\frac{1}{\mu_{e}}\operatorname{\mathbf{curl}}\mathbf{E}\biggr)-\frac{\kappa_{e}^{2}}{\mu_{e}}\mathbf{E}&=0&&\textrm{in}\ \mathbb{R}^{3}\setminus\overline{\Omega},\\ \operatorname{\mathbf{curl}}\biggl(\frac{1}{\mu_{i}}\operatorname{\mathbf{curl}}\mathbf{E}\biggr)-\frac{\kappa_{i}^{2}}{\mu_{i}}\mathbf{E}&=0&&\textrm{in}\ \Omega,\\ \hat{\mathbf{n}}\times\mathbf{E}^{-}&=\hat{\mathbf{n}}\times\mathbf{E}^{+}&&\textrm{on}\ \partial\Omega,\\ \frac{1}{\mu_{i}}\hat{\mathbf{n}}\times\operatorname{\mathbf{curl}}\mathbf{E}^{-}&=\frac{1}{\mu_{e}}\hat{\mathbf{n}}\times\operatorname{\mathbf{curl}}\mathbf{E}^{+}&&\textrm{on}\ \partial\Omega,\end{aligned}\\ \lim_{\lvert\mathbf{x}\rvert\to\infty}\lvert\mathbf{x}\rvert\biggl|\operatorname{\mathbf{curl}}(\mathbf{E}-\mathbf{E}_{\mathrm{inc}})\times\frac{\mathbf{x}}{\lvert\mathbf{x}\rvert}-\imath\kappa_{e}(\mathbf{E}-\mathbf{E}_{\mathrm{inc}})\biggr|=0,\end{gathered}
where $\mu_{i}$, $\varepsilon_{i}$ and $\kappa_{i}=\omega_{i}^{2}\varepsilon_{i}\mu_{i}$ are the permeabilities, permittivities and wavenumbers of the imaged objects $\Omega$, while $\mu_{e}$, $\varepsilon_{e}$ and $\kappa_{e}$ correspond to the ambient medium [3 C. F. Borhen and D. R. Huffman, Absorption and scattering of light by small particles. Wiley Sciences, John Wiley & Sons, Berlin (1998) ] and are known. In biomedical applications, $\mu_{i}\sim\mu_{e}\sim\mu_{0}$, $\mu_{0}$ being the vacuum permeability. The upper signs $-$ and $+$ represent limit values from inside and outside $\Omega$, respectively, and $\hat{\mathbf{n}}$ denotes the outer unit normal vector. Incident waves are polarized in a direction $\hat{\mathbf{p}}$ orthogonal to the direction of propagation $\hat{\mathbf{d}}$, that is, $\mathbf{E}_{\mathrm{inc}}(\mathbf{x})=E_{0}\hat{\mathbf{p}}\,e^{\imath\kappa_{e}\hat{\mathbf{d}}\cdot\mathbf{x}}$, where $E_{0}$ stands for the magnitude of the incident field.
For any smooth region $\Omega^{\prime}\subset\mathbb{R}^{3}\setminus\overline{\Omega}$ and any real $\kappa_{e}>0$, system (1) has a unique solution [31 J.-C. Nédélec, Acoustic and electromagnetic equations. Appl. Math. Sci. 144, Springer, New York (2001) ] in the Sobolev space $H^{2,0}(\Omega^{\prime})=\{\mathbf{E}\in H^{2}(\Omega^{\prime}),\operatorname{div}\mathbf{E}=0\}$ that is continuous in $\Omega^{\prime}$ (see [19 P. Grisvard, Elliptic problems in nonsmooth domains. Classics Appl. Math. 69, SIAM, Philadelphia (2011) ]). For collections of spheres and piecewise-constant $\kappa_{i}$, one can calculate Mie series solutions [3 C. F. Borhen and D. R. Huffman, Absorption and scattering of light by small particles. Wiley Sciences, John Wiley & Sons, Berlin (1998) ]. Starshaped object parametrizations with piecewise-constant $\mu_{i}$ allow for fast spectral solvers [20 H. Harbrecht and T. Hohage, Fast methods for three-dimensional inverse obstacle scattering problems. J. Integral Equations Appl. 19, 237–260 (2007) , 24 F. Le Louër, A spectrally accurate method for the direct and inverse scattering problems by multiple 3D dielectric obstacles. ANZIAM J. 59, E1–E49 (2018) ]. Coupled BEM/FEM formulations [29 S. Meddahi, F.-J. Sayas and V. Selgás, Nonsymmetric coupling of BEM and mixed FEM on polyhedral interfaces. Math. Comp. 80, 43–68 (2011) , 31 J.-C. Nédélec, Acoustic and electromagnetic equations. Appl. Math. Sci. 144, Springer, New York (2001) ] are convenient for more general parametrizations, while discrete dipole approximations [36 A. Wang, T. G. Dimiduk, J. Fung, S. Razavi, I. Kretzschmar, K. Chaudhary and V. N. Manoharan, Using the discrete dipole approximation and holographic microscopy to measure rotational dynamics of non-spherical colloidal particles. J. Quant. Spectroscopy Radiative Transfer 146, 499–509 (2014) , 38 M. A. Yurkin and A. G. Hoekstra, The discrete-dipole-approximation code ADDA: Capabilities and known limitations. J. Quant. Spectroscopy Radiative Transfer 112, 2234–2247 (2011) ] solve the problem avoiding the use of parametrizations.
In principle, the hologram is obtained by evaluating the solution of the forward problem (1) at detectors placed on the screen: $I_{\Omega,\kappa_{i}}=\lvert\mathbf{E}_{\mathrm{inc}}+{\mathbf{E}}_{\mathrm{sc},\Omega,\kappa_{i}}\rvert^{2}=\lvert\mathbf{E}_{\Omega,\kappa_{i}}\rvert^{2}$. In practice, the measured holograms $\mathbf{I}_{\mathrm{meas}}$ are corrupted by noise.
## 3 Deterministic inverse problem
Given a hologram $\mathbf{I}_{\mathrm{meas}}$ measured at screen points $\mathbf{x}_{j}$, $j=1,\ldots,N$, the inverse holography problem seeks objects $\Omega=\bigcup_{\ell=1}^{L}\Omega_{\ell}$ and functions $\kappa_{i}\colon\Omega\to\mathbb{R}^{+}$ such that
$I_{\mathrm{meas}}(\mathbf{x}_{j})=\lvert\mathbf{E}_{\Omega,\kappa_{i}}(\mathbf{x}_{j})\rvert^{2},\quad j=1,\ldots,N,$
where $\mathbf{E}_{\Omega,\kappa_{i}}=\mathbf{E}_{\mathrm{inc}}+\mathbf{E}_{\mathrm{sc},\Omega,\kappa_{i}}$ is the solution of the forward problem (1) with an object $\Omega$ and the wavenumber $\kappa_{i}$ (see [5 A. Carpio, T. G. Dimiduk, F. Le Louër and M. L. Rapún, When topological derivatives met regularized Gauss–Newton iterations in holographic 3D imaging. J. Comput. Phys. 388, 224–251 (2019) ]). Since the measured data are not exact, in practice one seeks shapes $\Omega$ and functions $\kappa_{i}$ for which the error between the recorded hologram and the synthetic hologram that would be generated solving (1) for the proposed objects and wavenumbers is as small as possible.
### 3.1 Constrained optimization
We recast the inverse problem as an optimization problem with a partial differential equation constraint: find $\Omega$ and $\kappa_{i}$ minimizing the cost functional
$J(\Omega,\kappa_{i})=\frac{1}{2}\sum_{j=1}^{N}\lvert I_{\Omega,\kappa_{i}}(\mathbf{x}_{j})-I_{\mathrm{meas}}(\mathbf{x}_{j})\rvert^{2},$
where $I_{\Omega,\kappa_{i}}=\lvert\mathbf{E}_{\Omega,\kappa_{i}}\rvert^{2}$ and $\mathbf{E}_{\Omega,\kappa_{i}}$ is the solution of (1). Here $\Omega$ and $\kappa_{i}$ are the design variables and the stationary Maxwell system (1) is the constraint. For exact data, the true objects would be a global minimum at which the functional (2) vanishes. In general, spurious local minima may arise.
### 3.2 Topological derivative based approximations
A topological study of the shape functional (2) for $\kappa_{i}$ fixed provides first guesses of the imaged objects without a priori information on them. The topological derivative of a shape functional [32 J. Sokołowski and A. Żochowski, On the topological derivative in shape optimization. SIAM J. Control Optim. 37, 1251–1272 (1999) ] quantifies its sensitivity to removing and including points in an object. Given a point $\mathbf{x}$ in a region $\mathcal{R}$, we have the expansion
$J(\mathcal{R}\setminus\overline{B_{\varepsilon}(\mathbf{x})})=J(\mathcal{R})+\frac{4}{3}\pi\varepsilon^{3}D_{\mathrm{T}}(\mathbf{x},\mathcal{R})+o(\varepsilon^{3}),\quad\varepsilon\to 0,$
for any ball $B_{\varepsilon}(\mathbf{x})=B(\mathbf{x},\varepsilon)$ centered at $\mathbf{x}$ with radius $\varepsilon$. The factor $D_{\mathrm{T}}(\mathbf{x},\mathcal{R})$ is the topological derivative of the functional at ${\mathbf{x}}$ (see [32 J. Sokołowski and A. Żochowski, On the topological derivative in shape optimization. SIAM J. Control Optim. 37, 1251–1272 (1999) ]). If $D_{\mathrm{T}}(\mathbf{x},\mathcal{R})$ is negative, then $J(\mathcal{R}\setminus\overline{B_{\varepsilon}(\mathbf{x})})<J(\mathcal{R})$ for $\varepsilon>0$ small. We expect the cost functional to decrease by forming objects $\Omega_{\mathrm{ap}}$ with points below a large enough negative threshold [9 A. Carpio and M.-L. Rapún, Solving inhomogeneous inverse problems by topological derivative methods. Inverse Problems 24, Article ID 045014 (2008) , 14 G. R. Feijoo, A new method in inverse scattering based on the topological derivative. Inverse Problems 20, 1819–1840 (2004) , 27 M. Masmoudi, J. Pommier and B. Samet, The topological asymptotic expansion for the Maxwell equations and some applications. Inverse Problems 21, 547–564 (2005) ]:
$\Omega_{\mathrm{ap}}≔\{\mathbf{x}\in\mathcal{R}\mid D_{\mathrm{T}}(\mathbf{x},\mathcal{R})<-C_{0}\},\quad C_{0}>0.$
When $\mu_{e}=\mu_{i}$, $\mathcal{R}=\mathbb{R}^{3}$ and $\mathbf{E}_{\mathrm{inc}}(\mathbf{x})=\hat{\mathbf{p}}\,e^{\imath\kappa_{e}z}$, asymptotic expansions yield the formula [27 M. Masmoudi, J. Pommier and B. Samet, The topological asymptotic expansion for the Maxwell equations and some applications. Inverse Problems 21, 547–564 (2005) ]
$D_{\mathrm{T}}(\mathbf{x},\mathbb{R}^{3})=3\operatorname{Re}\biggl[\frac{\kappa_{e}^{2}(\kappa_{e}^{2}-\kappa_{i}^{2})}{(\kappa_{i}^{2}+2\kappa_{e}^{2})}\mathbf{E}(\mathbf{x})\cdot\overline{\mathbf{P}}(\mathbf{x})\biggr],\quad\mathbf{x}\in\mathbb{R}^{3},$
where $\mathbf{E}=\mathbf{E}_{\mathrm{inc}}$ and
$\overline{\mathbf{P}}(\mathbf{x})=\sum_{j=1}^{N}\operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}}\biggl(\frac{2}{\kappa_{e}^{2}}G_{\kappa_{e}}(\mathbf{x}-\mathbf{x}_{j})\bigl(I_{\mathrm{meas}}(\mathbf{x}_{j})-\lvert\mathbf{E}_{\mathrm{inc}}(\mathbf{x}_{j})\rvert^{2}\bigr)\overline{\mathbf{E}_{\mathrm{inc}}(\mathbf{x}_{j})}\biggr)$
with $G_{\kappa_{e}}(\mathbf{x})=\frac{1}{4\pi\lvert\mathbf{x}\rvert}e^{\imath\kappa_{e}\lvert\mathbf{x}\rvert}$ denoting the outgoing Green function of the Helmholtz equation [31 J.-C. Nédélec, Acoustic and electromagnetic equations. Appl. Math. Sci. 144, Springer, New York (2001) ]. Once $\Omega_{\mathrm{ap}}$ is constructed, we fit a parametrized contour $\mathbf{q}_{\mathrm{ap}}$ to its boundary. Starshaped parametrizations are typical choices. Figure 2 exemplifies the procedure. The method is robust to noise, in the sense that perturbations of the data with random 10 % or 20 % noise, for instance, produce similar results. Notice that the value of $\kappa_{i}$ enters through a factor that we may scale out in (5) and it is not really needed to localize the object. Similar results are obtained using the topological energy [6 A. Carpio, T. G. Dimiduk, M. L. Rapún and V. Selgas, Noninvasive imaging of three-dimensional micro and nanostructures by topological methods. SIAM J. Imaging Sci. 9, 1324–1354 (2016) ]
$E_{\mathrm{T}}(\mathbf{x},\mathbb{R}^{3})=\lvert\mathbf{E}(\mathbf{x})\rvert^{2}\lvert\mathbf{P}(\mathbf{x})\rvert^{2},$
which does not involve $\kappa_{i}$ at all. No knowledge of $\kappa_{i}$ is needed to construct a first guess of the objects.
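To make the thresholding construction of $\Omega_{\mathrm{ap}}$ concrete, the sketch below evaluates a scalar-Helmholtz simplification of the topological energy $E_{\mathrm{T}}$, dropping the vector $\operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}}$ structure of $\overline{\mathbf{P}}$ and the polarization; the wavelength, detector layout and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

kappa_e = 2 * np.pi / 0.66          # ambient wavenumber (illustrative wavelength)

def green(r):
    """Outgoing Helmholtz Green function G(r) = exp(i*kappa_e*r) / (4*pi*r)."""
    return np.exp(1j * kappa_e * r) / (4 * np.pi * r)

def e_inc(x):
    """Scalar incident plane wave exp(i*kappa_e*z), propagating along z."""
    return np.exp(1j * kappa_e * x[..., 2])

def topological_energy(i_meas, x_det, x_grid):
    """Scalar analogue of E_T(x) = |E(x)|^2 |P(x)|^2, with E = E_inc and
    P(x) = sum_j G(|x - x_j|) (I_meas(x_j) - |E_inc(x_j)|^2) conj(E_inc(x_j)),
    i.e. the curl curl structure of the vector formula is dropped."""
    weights = (i_meas - np.abs(e_inc(x_det)) ** 2) * np.conj(e_inc(x_det))
    p = np.zeros(len(x_grid), dtype=complex)
    for xj, wj in zip(x_det, weights):
        p += green(np.linalg.norm(x_grid - xj, axis=1)) * wj
    return np.abs(e_inc(x_grid)) ** 2 * np.abs(p) ** 2

# First guess of the scatterers: grid points where E_T is large, e.g.
#   e_t = topological_energy(i_meas, x_det, x_grid)
#   omega_ap = x_grid[e_t > 0.7 * e_t.max()]
# (for the signed derivative D_T one instead keeps points below -C_0).
```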
### 3.3 Regularized Gauss–Newton iterations
Fast methods to improve our knowledge of the objects starting from an initial guess are based on the following result. Let us consider two Hilbert spaces $X$, $Y$ and a Fréchet differentiable operator $\mathcal{F}\colon D(\mathcal{F})\subset X\to Y$. Assuming that the exact data $y\in Y$ are attainable (that is, there is $x\in X$ such that $\mathcal{F}(x)=y$), but only noisy data $y^{\delta}$ verifying $\lVert y^{\delta}-y\rVert_{Y}\leq\delta$ are accessible, the iteratively regularized Gauss–Newton (IRGN) method [1 A. B. Bakushinskii, On a convergence problem of the iterative-regularized Gauss–Newton method. Zh. Vychisl. Mat. i Mat. Fiz. 32, 1503–1509 (1992) ] constructs a sequence $x_{k+1}^{\delta}$ as follows. We linearize the equation at $x_{k}^{\delta}$ at each step, approximate the solution of $\mathcal{F}(x_{k}^{\delta})+\mathcal{F}^{\prime}(x_{k}^{\delta})\xi=y^{\delta}$ through the minimization problem
$\begin{split}\xi_{k+1}=\operatorname*{Argmin}_{\xi\in X}&\lVert\mathcal{F}(x_{k}^{\delta})+\mathcal{F}^{\prime}(x_{k}^{\delta})\xi-y^{\delta}\rVert_{Y}^{2}\\[-8.6pt] &\quad+\alpha_{k}\lVert x_{k}^{\delta}+\xi-x_{0}\rVert_{X}^{2}\end{split}$
and set $x_{k+1}^{\delta}=x_{k}^{\delta}+\xi_{k+1}$. The Tikhonov term $\alpha_{k}\lVert x_{k}^{\delta}+\xi-x_{0}\rVert_{X}^{2}$ has regularizing properties and promotes convergence for specific choices of $x_{0}$ and $\alpha_{k}$ (see [21]). The theory of linear Tikhonov regularization guarantees that
$\begin{split}\xi_{k+1}=-\bigl(\mathcal{F}^{\prime}(x_{k}^{\delta})^{*}\mathcal{F}^{\prime}(x_{k}^{\delta})+\alpha_{k}I\bigr)^{-1}[&\mathcal{F}^{\prime}(x_{k}^{\delta})^{*}(\mathcal{F}(x_{k}^{\delta})-y^{\delta})\\[-3.0pt] &\quad+\alpha_{k}(x_{k}^{\delta}-x_{0})],\end{split}$
where $\mathcal{F}^{\prime}(x_{k}^{\delta})^{*}$ denotes the adjoint of the Fréchet derivative $\mathcal{F}^{\prime}(x_{k}^{\delta})$. The noise level $\delta$ affects the stopping criterion, the so-called discrepancy principle.
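In finite dimensions this update is a single linear solve. The sketch below implements one IRGN step for a generic forward map; `F` and `Fprime` (the map and its Jacobian) are placeholders to be supplied by the application.

```python
import numpy as np

def irgn_step(F, Fprime, x_k, x0, y_delta, alpha_k):
    """One iteratively regularized Gauss-Newton update (finite-dimensional)."""
    J = Fprime(x_k)                               # Jacobian of F at x_k
    A = J.conj().T @ J + alpha_k * np.eye(x_k.size)
    b = J.conj().T @ (F(x_k) - y_delta) + alpha_k * (x_k - x0)
    return x_k - np.linalg.solve(A, b)            # x_k + xi_{k+1}
```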
In a holography set-up, the map $\mathcal{F}$ is the operator that to each parametrization of objects $\mathbf{q}$ assigns the synthetic hologram $\mathbf{I}(\mathbf{q})$ generated by solving the forward problem for those objects. Starshaped parametrizations are a standard choice for simple objects. They describe each object by a few parameters: its center and a radius function represented by a finite combination of spherical harmonics [5, 20]. Given a starshaped parametrization $\mathbf{q}_{k}$ and a recorded hologram $\mathbf{I}_{\mathrm{meas}}$ with a level of noise $\delta$, the IRGN method first solves the linearized equation
$\mathbf{I}(\mathbf{q}_{k})+\mathbf{I}^{\prime}(\mathbf{q}_{k})\mathbb{\xi}=\mathbf{I}_{\mathrm{meas}}$
by addressing the nonlinear least squares problem
$\begin{split}\mathbb{\xi}_{k+1}=\operatorname{Argmin}_{\mathbb{\xi}}\bigl\{&\lVert\mathbf{I}_{\mathrm{meas}}-\mathbf{I}(\mathbf{q}_{k})-\mathbf{I}^{\prime}(\mathbf{q}_{k})\mathbb{\xi}\rVert^{2}_{2}\\ &\quad+\alpha_{k}\lVert\mathbf{q}_{k}+\mathbb{\xi}-\mathbf{q}_{\mathrm{ap}}\rVert^{2}_{H^{s}(\mathbb{S}^{2})}\bigr\},\end{split}$
where $H^{s}(\mathbb{S}^{2})$, $s>0$, is an adequate Sobolev space [5], and then sets $\mathbf{q}_{k+1}=\mathbf{q}_{k}+\mathbb{\xi}_{k+1}$. The initial parametrization $\mathbf{q}_{0}=\mathbf{q}_{\mathrm{ap}}$ represents the first guess of the objects constructed by topological methods. The updated objects $\Omega_{k}$ correspond to the parametrizations $\mathbf{q}_{k}$. The stopping criterion for the noise level $\delta$ is as follows. If the synthetic hologram calculated numerically for the current approximation of the objects $\mathbf{I}(\mathbf{q}_{k})$ satisfies
$\lVert\mathbf{I}(\mathbf{q}_{k})-\mathbf{I}_{\mathrm{meas}}\rVert_{2}\leq\tau\delta,$
we stop the algorithm, $\tau>0$ being a parameter adjusted to guarantee a reasonable approximation while preventing early stops.
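Putting the pieces together, a sketch of the full iteration follows, reusing `irgn_step` from the sketch above. The geometric decay of $\alpha_k$ and the default $\tau=2$ are common but illustrative choices, not prescriptions of the text.

```python
import numpy as np

def irgn(F, Fprime, x0, y_delta, delta, tau=2.0, alpha0=1.0, q=0.5, max_it=50):
    """IRGN loop stopped by the discrepancy principle ||F(x)-y_delta|| <= tau*delta."""
    x, alpha = x0.copy(), alpha0
    for _ in range(max_it):
        if np.linalg.norm(F(x) - y_delta) <= tau * delta:
            break                                 # stopping criterion met
        x = irgn_step(F, Fprime, x, x0, y_delta, alpha)
        alpha *= q                                # decrease regularization
    return x
```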
Figures 3 and 4 illustrate the process. Figure 4 depicts the hologram generated by the configuration with three objects shown in Figure 3 (a). We use the topological derivative (5) to spot a first dominant object at the top and locate an object there, see panel (b). Then we apply the IRGN method, see panels (c) and (d). At step 4 the cost functional, depicted in panel (j), stagnates without fulfilling the stopping criteria. This suggests that more objects should be created. This can be done by hybrid methods, as we explain next.
### 3.4 Topologically informed IRGN methods
Approaches that use an initial object parametrization as reference have a drawback: the initial guess of the number of objects may be wrong. To overcome it, we have developed hybrid algorithms combining topological derivatives and regularized Gauss–Newton iterations [5]. We fit an initial parametrization $\mathbf{q}_{\mathrm{ap}}$ to the first guess of the objects constructed by topological methods. Then, we apply the IRGN method and check that the cost (2) decreases. When the cost stagnates without fulfilling the stopping criteria, we reset $\Omega_{\mathrm{ap}}$ equal to the current guess of the objects $\Omega_{k}$ for the last parametrization obtained $\mathbf{q}_{k}$ and calculate the topological derivative of the cost for $\mathcal{R}=\mathbb{R}^{3}\setminus\overline{\Omega_{\mathrm{ap}}}$. This is given by (3) if $\mathbf{x}\in\mathcal{R}=\mathbb{R}^{3}\setminus\overline{\Omega}$ and its equivalent
$\begin{split}&J\bigl(\mathbb{R}^{3}\setminus(\overline{\Omega\setminus\overline{B_{\varepsilon}(\mathbf{x})}})\bigr)\\ &\qquad=J\bigl((\mathbb{R}^{3}\setminus\overline{\Omega})\cup B_{\varepsilon}(\mathbf{x})\bigr)\\[-3.0pt] &\qquad=J(\mathbb{R}^{3}\setminus\overline{\Omega})-\frac{4}{3}\pi\varepsilon^{3}D_{\mathrm{T}}(\mathbf{x},\mathbb{R}^{3}\setminus\overline{\Omega})+o(\varepsilon^{3})\end{split}$
if $\mathbf{x}\in\Omega$. Asymptotic calculations yield the formula [5, 9]
$\begin{split}&D_{\mathrm{T}}(\mathbf{x},\mathbb{R}^{3}\setminus\overline{\Omega})\\ &\quad=\begin{cases}3\operatorname{Re}\biggl[\dfrac{\kappa_{e}^{2}(\kappa_{e}^{2}-\kappa_{i}^{2})}{(\kappa_{i}^{2}+2\kappa_{e}^{2})}\mathbf{E}(\mathbf{x})\cdot\overline{\mathbf{P}}(\mathbf{x})\biggr],&\mathbf{x}\in\mathbb{R}^{3}\setminus\overline{\Omega},\\[6.45pt] 3\operatorname{Re}\biggl[\dfrac{\kappa_{i}^{2}(\kappa_{e}^{2}-\kappa_{i}^{2})}{(\kappa_{e}^{2}+2\kappa_{i}^{2})}\mathbf{E}(\mathbf{x})\cdot\overline{\mathbf{P}}(\mathbf{x})\biggr],&\mathbf{x}\in\Omega,\end{cases}\end{split}$
when $\mu_{e}=\mu_{i}$, with forward and conjugate adjoint fields satisfying transmission Maxwell problems with object $\Omega=\Omega_{\mathrm{ap}}$:
\displaystyle\begin{aligned} \operatorname{\mathbf{curl}}(\operatorname{\mathbf{curl}}\mathbf{E})-\kappa_{e}^{2}\mathbf{E}&=0&&\textrm{in}\ \mathbb{R}^{3}\setminus\overline{\Omega},\\ \operatorname{\mathbf{curl}}(\operatorname{\mathbf{curl}}\mathbf{E})-\kappa_{i}^{2}\mathbf{E}&=0&&\textrm{in}\ \Omega,\\ \hat{\mathbf{n}}\times\mathbf{E}^{-}&=\hat{\mathbf{n}}\times\mathbf{E}^{+}&&\textrm{on}\ \partial\Omega,\\ \hat{\mathbf{n}}\times\operatorname{\mathbf{curl}}\mathbf{E}^{-}&=\hat{\mathbf{n}}\times\operatorname{\mathbf{curl}}\mathbf{E}^{+}&&\textrm{on}\ \partial\Omega,\end{aligned}
$\displaystyle\lim_{\lvert\mathbf{x}\rvert\to\infty}\lvert\mathbf{x}\rvert\lvert\operatorname{\mathbf{curl}}(\mathbf{E}-\mathbf{E}_{\mathrm{inc}})\times\hat{\mathbf{x}}-\imath\kappa_{e}(\mathbf{E}-\mathbf{E}_{\mathrm{inc}})\rvert=0,$
\displaystyle\begin{aligned} \operatorname{\mathbf{curl}}(\operatorname{\mathbf{curl}}\overline{\mathbf{P}})-\kappa_{e}^{2}\overline{\mathbf{P}}&=2\sum_{j=1}^{N}(\mathbf{I}_{\mathrm{meas}}-\lvert\mathbf{E}\rvert^{2})\overline{\mathbf{E}}\delta_{\mathbf{x}_{j}}&&\textrm{in}\ \mathbb{R}^{3}\setminus\overline{\Omega},\\ \operatorname{\mathbf{curl}}(\operatorname{\mathbf{curl}}\overline{\mathbf{P}})-\kappa_{i}^{2}\overline{\mathbf{P}}&=0&&\textrm{in}\ \Omega,\\ \hat{\mathbf{n}}\times\overline{\mathbf{P}}^{-}&=\hat{\mathbf{n}}\times\overline{\mathbf{P}}^{+}&&\textrm{on}\ \partial\Omega,\\ \hat{\mathbf{n}}\times\operatorname{\mathbf{curl}}\overline{\mathbf{P}}^{-}&=\hat{\mathbf{n}}\times\operatorname{\mathbf{curl}}\overline{\mathbf{P}}^{+}&&\textrm{on}\ \partial\Omega,\end{aligned}
$\displaystyle\lim_{\lvert\mathbf{x}\rvert\to\infty}\lvert\mathbf{x}\rvert\lvert\operatorname{\mathbf{curl}}\overline{\mathbf{P}}\times\hat{\mathbf{x}}-\imath\kappa_{e}\overline{\mathbf{P}}\rvert=0,$
where $\hat{\mathbf{n}}$ is the unit outer normal, $\hat{\mathbf{x}}={\mathbf{x}}/\lvert\mathbf{x}\rvert$ and $\delta_{\mathbf{x}_{j}}$ are Dirac masses concentrated at the detectors $\mathbf{x}_{j}$, $j=1,...,N$.
We create a new approximation $\Omega_{\mathrm{new}}$ from $\Omega_{\mathrm{ap}}$ by removing the points in $\Omega_{\mathrm{ap}}$ at which the topological derivative surpasses a positive threshold $c_{\mathrm{new}}$ and adding the points outside $\Omega_{\mathrm{ap}}$ at which the topological derivative falls below a negative threshold $-C_{\mathrm{new}}$, see [6, 9]:
$\begin{split}\Omega_{\mathrm{new}}≔{}&\{\mathbf{x}\in\Omega_{\mathrm{ap}}\mid D_{\mathrm{T}}(\mathbf{x},\mathbb{R}^{3}\setminus\overline{\Omega}_{\mathrm{ap}})<c_{\mathrm{new}}\}\\ &\cup\{\mathbf{x}\in\mathbb{R}^{3}\setminus\overline{\Omega}_{\mathrm{ap}}\mid D_{\mathrm{T}}(\mathbf{x},\mathbb{R}^{3}\setminus\overline{\Omega}_{\mathrm{ap}})<-C_{\mathrm{new}}\}.\end{split}$
The constants $C_{\mathrm{new}}$, $c_{\mathrm{new}}$ are selected to ensure a decrease in the cost functional (2) keeping $\kappa_{i}$ fixed. Once $\Omega_{\mathrm{new}}$ is constructed, we fit a parametrization $\mathbf{q}_{\mathrm{new}}$ to its contour and restart the IRGN procedure for $\mathbf{q}_{\mathrm{ap}}=\mathbf{q}_{\mathrm{new}}$. The procedure stops when the changes in the cost and the parametrizations fall below selected thresholds.
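Schematically, the hybrid algorithm alternates the two ingredients. In the sketch below the three callables are hypothetical stand-ins for routines the text describes (IRGN until stagnation or convergence, the topological update of the geometry, and a distance between parametrizations); none of them is a real library function.

```python
def hybrid_reconstruction(q_ap, I_meas, irgn_until_stagnation,
                          topological_update, change, tol=1e-3, max_rounds=10):
    """Alternate IRGN stages with topological updates of the object geometry."""
    for _ in range(max_rounds):
        q_k, converged = irgn_until_stagnation(q_ap, I_meas)
        if converged:                             # discrepancy principle satisfied
            return q_k
        q_new = topological_update(q_k, I_meas)   # build Omega_new, refit contour
        if change(q_new, q_k) < tol:              # geometry no longer changes
            return q_new
        q_ap = q_new
    return q_ap
```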
Let us revisit the example studied in Figures 3 and 4. At step 4 of the IRGN method the cost stagnates without fulfilling the stopping criteria. We calculate the topological derivative (6) of the cost for the current approximation of the objects, illustrated in Figure 3 (e). A new region where the topological derivative attains large negative values appears. We create a new object there and update the parametrization, see panel (f). Then we apply the IRGN method again. Since the cost functional still stagnates without fulfilling the stopping criteria, we recalculate the topological derivative (6) for the available object approximation. Panel (g) suggests the creation of a third object. We update the IRGN method using this new configuration, and evolve the resulting object configuration, represented in panel (h), until the stopping criterion is met at panel (i) after 24 steps. Panel (j) illustrates stagnation and decrease of the cost, in a logarithmic scale, as new objects are added to the parametrization using topological information and the updated IRGN method evolves. These simulations assume $\kappa_{i}$ known and fixed. Once first guesses for $\kappa_{i}$ are available, we can implement this procedure considering constant values for $\kappa_{i}$ at each component of the parametrization. Obtaining first guesses for $\kappa_{i}$ that are reliable enough is a hard task [7] and the optimization procedure can encounter difficulties. Bayesian approaches provide alternative procedures that can handle these difficulties while quantifying uncertainty associated with noise and missing information.
## 4 Bayesian inverse problem
Bayesian formulations consider all unknowns in the inverse problem as random variables. Given a recorded hologram $\mathbf{I}_{\mathrm{meas}}$, we seek a finite-dimensional vector of parameters $\mathbb{\nu}$ characterizing the imaged objects. When we assume the presence of $L$ objects, $\mathbb{\nu}$ is formed by $L$ blocks, one per object. Using Bayes’ formula [22, 33],
$p_{\mathrm{pt}}(\mathbb{\nu})≔p(\mathbb{\nu}|\mathbf{I}_{\mathrm{meas}})=\frac{p(\mathbf{I}_{\mathrm{meas}}|\mathbb{\nu})}{p(\mathbf{I}_{\mathrm{meas}})}p_{\mathrm{pr}}(\mathbb{\nu}),$
where $p_{\mathrm{pr}}(\mathbb{\nu})$ represents the prior probability of the variables, which incorporates our previous knowledge on them, while $p(\mathbf{I}_{\mathrm{meas}}|\mathbb{\nu})$ is the conditional probability or likelihood of observing $\mathbf{I}_{\mathrm{meas}}$ given $\mathbb{\nu}$. The solution of the Bayesian inverse problem is the posterior probability $p_{\mathrm{pt}}(\mathbb{\nu}|\mathbf{I}_{\mathrm{meas}})$ of the parameters given the data. Sampling the posterior distribution, we obtain statistical information on the most likely values of the object parameters with quantified uncertainty.
### 4.1 Likelihood choice
Assuming additive Gaussian measurement noise, the measured hologram and the synthetic hologram obtained for the true object parameters are related by $\mathbf{I}_{\mathrm{meas}}=\mathbf{I}(\mathbb{\nu}_{\mathrm{true}})+\mathbb{\varepsilon}$, where the measurement noise $\mathbb{\varepsilon}$ is distributed as a multivariate Gaussian $\mathcal{N}(0,\mathbb{\Gamma}_{\mathrm{n}})$ with zero mean and covariance matrix $\mathbb{\Gamma}_{\mathrm{n}}$. A possible choice for the likelihood $p(\mathbf{I}_{\mathrm{meas}}|\mathbb{\nu})$ is [8]
$p(\mathbf{I}_{\mathrm{meas}}|\mathbb{\nu})=\frac{1}{(2\pi)^{N/2}\sqrt{\lvert\mathbb{\Gamma}_{\mathrm{n}}\rvert}}\exp\Bigl(-\frac{1}{2}\lVert\mathbf{I}(\mathbb{\nu})-\mathbf{I}_{\mathrm{meas}}\rVert^{2}_{\mathbb{\Gamma}_{\mathrm{n}}^{\smash{-1}}}\Bigr)$
with $\lVert\mathbf{v}\rVert_{\mathbb{\Gamma}_{\mathrm{n}}^{-1}}^{2}=\overline{\mathbf{v}}^{\mathrm{t}}\mathbb{\Gamma}_{\mathrm{n}}^{-1}\mathbf{v}$. Here, $\mathbf{I}(\mathbb{\nu})$ represents the synthetic hologram obtained solving the forward problem (1) for objects characterized by parameters $\mathbb{\nu}$, see Section 2.
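In log form (dropping the normalization constant) this likelihood is a weighted least-squares misfit. A direct transcription follows, with the forward hologram solver passed in as a placeholder callable `I_of`.

```python
import numpy as np

def log_likelihood(nu, I_meas, Gamma_n_inv, I_of):
    """log p(I_meas | nu) up to an additive constant, Gaussian noise model."""
    r = I_of(nu) - I_meas                         # hologram residual
    return -0.5 * np.real(r.conj() @ (Gamma_n_inv @ r))
```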
### 4.2 Topological priors
A typical choice for the prior distribution is a multivariate Gaussian
$p_{\mathrm{pr}}(\mathbb{\nu})=\frac{1}{(2\pi)^{n/2}}\frac{1}{\sqrt{\lvert\mathbb{\Gamma}_{\smash[b]{\mathrm{pr}}}\rvert}}\exp\Bigl(-\frac{1}{2}(\mathbb{\nu}-\mathbb{\nu}_{0})^{t}\mathbb{\Gamma}_{\mathrm{pr}}^{-1}(\mathbb{\nu}-\mathbb{\nu}_{0})\Bigr)$
if $\mathbb{\nu}$ is “admissible”, and $p_{\mathrm{pr}}(\mathbb{\nu})=0$ when $\mathbb{\nu}$ is “not admissible”, that is, it does not satisfy known constraints on the parameter set, see [8] for details. Here, $\mathbb{\Gamma}_{\mathrm{pr}}$ is the covariance matrix and $n$ is the total number of parameters characterizing the objects. The mean $\mathbb{\nu}_{0}$ is typically a set of parameter values characterizing an initial guess of the objects. Sharp priors are obtained fitting parametrizations to first guesses of the objects obtained from the study of topological fields associated to deterministic shape costs, as explained in Section 3.2.
### 4.3 Markov chain Monte Carlo sampling
Combining (7), (8) and (9), the posterior probability becomes (neglecting normalization constants)
$p_{\mathrm{pt}}(\mathbb{\nu})\propto\exp\Bigl(-\frac{1}{2}\lVert\mathbf{I}(\mathbb{\nu})-\mathbf{I}_{\mathrm{meas}}\rVert^{2}_{\mathbb{\Gamma}_{\mathrm{n}}^{\smash{-1}}}-\frac{1}{2}\lVert\mathbb{\nu}-\mathbb{\nu}_{0}\rVert_{\mathbb{\Gamma}_{\mathrm{pr}}^{\smash{-1}}}^{2}\Bigr)$
when $\mathbb{\nu}$ is admissible, and $p_{\mathrm{pt}}(\mathbb{\nu})=0$ otherwise. Markov chain Monte Carlo (MCMC) methods provide tools to sample unnormalized posteriors. Classical MCMC methods, such as Hamiltonian Monte Carlo or Metropolis–Hastings [30], construct a chain of $n$-dimensional states $\mathbb{\nu}^{(0)}\to\mathbb{\nu}^{(1)}\to\cdots\to\mathbb{\nu}^{(k)}\to\cdots$ which evolve to be distributed in accordance with the target distribution $p_{\mathrm{pt}}(\mathbb{\nu})$. After sampling an initial state $\mathbb{\nu}^{(0)}$ from the prior distribution (9), the chain advances from one state $\mathbb{\nu}^{(k)}$ to the next $\mathbb{\nu}^{(k+1)}$ by means of a transition operator that varies with the method employed [30]. More recent ensemble MCMC samplers [13, 18] draw $W$ initial states from the prior distribution (the “walkers” or “particles”) and transition to new states while mixing the previous ones to generate several chains. This approach allows for parallelization and can handle multimodal posteriors [8].
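The affine-invariant ensemble sampler of [18] is implemented in the `emcee` package. A minimal, self-contained driver follows; the quadratic log-posterior and the box admissibility constraint are toy stand-ins for (7)–(9), used only to make the sketch runnable.

```python
import numpy as np
import emcee

ndim, nwalkers = 12, 64            # e.g. one starshaped 2D object
nu0 = np.zeros(ndim)               # prior mean (topological first guess)

def log_posterior(nu):
    if np.any(np.abs(nu) > 10.0):  # toy admissibility constraint
        return -np.inf
    return -0.5 * np.sum((nu - nu0) ** 2)   # placeholder for (7) + (9)

p0 = nu0 + 1e-3 * np.random.randn(nwalkers, ndim)   # walkers near nu0
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 5000)
samples = sampler.get_chain(discard=1000, flat=True)
```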
Figure 5 illustrates the results in a two-dimensional geometry, to reduce the computational cost in the tests. A few million samples were generated, which requires solving an identical number of forward problems. In two-dimensional set-ups we replace the stationary transmission problem for the Maxwell equations by a transmission problem posed for the Helmholtz equation [8]. Assuming $\kappa_{i}$ is piecewise constant, we resort to fast boundary elements to solve the Helmholtz transmission problems in two dimensions [12]. Once a large enough collection of samples is generated [17], we extract statistical information describing the imaged object: the most likely shapes, sizes, locations, as well as uncertainty in the predictions. While starshaped two-dimensional objects can be reasonably characterized with 10–20 parameters, three-dimensional objects require 80–90. Full characterization of the posterior probability by MCMC sampling becomes more expensive as the number of parameters and the time required to solve forward problems increase.
### 4.4 Laplace approximation
The full characterization of the posterior probability is a challenging and costly problem for moderate- and high-dimensional parameters $\mathbb{\nu}$. Low-cost approximations of the posterior distribution often rely on finding the maximum a posteriori (MAP) point, that is, the set of parameters that maximizes the posterior probability. Upon taking logarithms, maximizing the posterior probability of the parameter set $\mathbb{\nu}$ given the data $\mathbf{I}_{\mathrm{meas}}$ is equivalent to minimizing the regularized cost functional [2]
$J(\mathbb{\nu})≔\frac{1}{2}\lVert\mathbf{I}(\mathbb{\nu})-\mathbf{I}_{\mathrm{meas}}\rVert^{2}_{\mathbb{\Gamma}_{\mathrm{n}}^{\smash{-1}}}+\frac{1}{2}\lVert\mathbb{\nu}-\mathbb{\nu}_{0}\rVert^{2}_{\mathbb{\Gamma}_{\mathrm{pr}}^{\smash{-1}}}.$
This is a nonlinear least-squares problem of the form previously considered in deterministic inversion, including regularization terms provided by the prior knowledge. We can solve it efficiently by using an adapted Levenberg–Marquardt–Fletcher iterative scheme [15]. Starting from $\mathbb{\nu}^{0}=\mathbb{\nu}_{0}$, we set $\mathbb{\nu}^{k+1}=\mathbb{\nu}^{k}+\mathbb{\xi}^{k+1}$, where $\mathbb{\xi}^{k+1}$ is the solution of
$\bigl(\mathbf{H}^{\mathrm{GN}}_{\lambda_{k}}(\mathbb{\nu}^{k})+\omega_{k}\operatorname{diag}(\mathbf{H}^{\mathrm{GN}}_{\lambda_{k}}(\mathbb{\nu}^{k}))\bigr)\mathbb{\xi}^{k+1}=-\mathbf{g}_{\lambda_{k}}(\mathbb{\nu}^{k}).$
Here, $\mathbf{H}^{\mathrm{GN}}$ is the Gauss–Newton approximation to the Hessian of the functional (10) and $\mathbf{g}$ is its gradient, while $\lambda_{k}$ is a scaling factor for $\mathbb{\Gamma}_{\mathrm{pr}}^{-1}$ that balances the different orders of magnitude of the two terms defining the cost in the first iterations, and becomes equal to 1 at a certain point. At each step, the adjustable parameter $\omega_{k}>0$ increases until the cost $J(\mathbb{\nu}^{k})$ decreases, and decreases otherwise, making the iteration closer to Gauss–Newton or gradient schemes as required.
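A sketch of the damped iteration (11) with this simple increase/decrease rule for $\omega_k$ follows; the gradient, Gauss–Newton Hessian and cost of (10) are passed in as placeholder callables, and the growth/shrink factors are illustrative assumptions.

```python
import numpy as np

def lmf_step(nu, omega, grad_J, H_gn, cost_J,
             grow=10.0, shrink=0.1, max_tries=20):
    """One Levenberg-Marquardt-Fletcher update for the MAP cost (10)."""
    H, g = H_gn(nu), grad_J(nu)
    for _ in range(max_tries):
        A = H + omega * np.diag(np.diag(H))    # damped system (11)
        xi = -np.linalg.solve(A, g)
        if cost_J(nu + xi) < cost_J(nu):       # cost decreased: accept, relax damping
            return nu + xi, omega * shrink
        omega *= grow                          # otherwise increase damping and retry
    return nu, omega                           # no decrease found
```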
Linearization about the resulting MAP point $\mathbb{\nu}_{\mathrm{MAP}}$ (the so-called Laplace approximation) provides an approximation of the posterior distribution by a Gaussian with mean $\mathbb{\nu}_{\mathrm{MAP}}$ and posterior covariance $\mathbb{\Gamma}_{\mathrm{po}}=\mathbf{H}^{\mathrm{GN}}(\mathbb{\nu}_{\mathrm{MAP}})^{-1}$. Sampling this Gaussian, we extract statistical information representing the dominant mode at a much lower computational cost, see Figure 6. Reaching $\mathbb{\nu}_{\mathrm{MAP}}$ takes about 20 steps of scheme (11). The whole process, sampling included, is finished in a few minutes, instead of a few days.
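Sampling the Laplace approximation then amounts to a single Gaussian draw; the MAP point and Hessian below are toy values standing in for the output of scheme (11).

```python
import numpy as np

nu_map = np.array([1.0, -0.5, 0.2])           # toy MAP point
H_gn_map = np.array([[4.0, 0.5, 0.0],
                     [0.5, 3.0, 0.1],
                     [0.0, 0.1, 5.0]])        # toy Gauss-Newton Hessian

Gamma_po = np.linalg.inv(H_gn_map)            # posterior covariance
samples = np.random.multivariate_normal(nu_map, Gamma_po, size=10_000)
print(samples.mean(axis=0), samples.std(axis=0))   # summaries with uncertainty
```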
We have considered $\kappa_{i}$ fixed and known in these tests. In case it is constant and unknown, it becomes an additional parameter included in $\mathbb{\nu}$. In the end, we obtain additional histograms reflecting uncertainty about the value with highest probability [8].
## 5 Perspectives
Digital holography poses challenging inverse problems which provide an opportunity to develop and test a variety of analytical and computational tools. First guesses of imaged objects are obtained by calculating the topological derivative of misfit functionals comparing the true hologram and the synthetic holograms that would be generated for different object configurations according to the selected forward model. Such guesses are robust to noise in the data. To reduce dimensionality, one can characterize the imaged objects by means of starshaped parametrizations. In a deterministic framework, we have shown that hybrid schemes combining iteratively regularized Gauss–Newton methods with topological derivative initializations and updates lead to good reconstructions of simple object configurations in a few steps, using stopping criteria that take into account the expected level of noise in the data. We are able to quantify uncertainty in such predictions by resorting to Bayesian formulations with topological priors. In two dimensions, Markov chain Monte Carlo methods provide a complete characterization of the posterior probability that the observed hologram is generated by a few starshaped objects. Three-dimensional tests are affordable for very simple shapes, such as a sphere or a cylinder [11]. Handling high-dimensional parametrizations, in three dimensions or just for irregular shapes, requires the introduction of strategies to reduce the computational cost. Laplace approximations, based on optimizing to find the highest probability parameter set and then linearizing the posterior probability about it to obtain a multivariate Gaussian distribution, are useful tools for uncertainty quantification when there is a single dominant mode. Developing fast sampling methods which are robust as dimension grows would be an important step forward to handle more general situations.
Holography set-ups are particularly challenging due to the fact that a single incident wave is used. We have focused here on light imaging, though acoustic waves can also be used to resolve at different scales. We expect similar techniques to be useful in inverse scattering problems involving other types of waves and different emitter/receiver configurations, such as microwave imaging or elastography, for instance.
Ana Carpio graduated in numerical analysis from Universidad del País Vasco in Spain. She holds a PhD in mathematics from Laboratoire Jacques Louis Lions (Université Paris VI, now Paris Sorbonne), and has been a postdoctoral fellow at the Oxford Centre for Industrial and Applied Mathematics. She is a recipient of the SEMA (Spanish Society of Applied Mathematics) Prize to Young Researchers. Since 2006, she has been a professor of applied mathematics at Universidad Complutense de Madrid and a member of the Gregorio Millán Barbany Institute for Modelling and Simulation in Fluid Dynamics, Nanoscience and Industrial Mathematics at Universidad Carlos III de Madrid. Currently, she serves as a Spanish representative in the ECMI (European Consortium for Mathematics in Industry) Council. Her main topics of research nowadays are inverse problems and data driven computational models in biomedicine. ana_carpio@mat.ucm.es
## References
1. A. B. Bakushinskii, On a convergence problem of the iterative-regularized Gauss–Newton method. Zh. Vychisl. Mat. i Mat. Fiz. 32, 1503–1509 (1992)
2. C. M. Bishop, Pattern recognition and machine learning. Springer, New York (2006)
3. C. F. Bohren and D. R. Huffman, Absorption and scattering of light by small particles. Wiley Sciences, John Wiley & Sons, Berlin (1998)
4. T. Bui-Thanh, O. Ghattas, J. Martin and G. Stadler, A computational framework for infinite-dimensional Bayesian inverse problems Part I: The linearized case, with application to global seismic inversion. SIAM J. Sci. Comput. 35, A2494–A2523 (2013)
5. A. Carpio, T. G. Dimiduk, F. Le Louër and M. L. Rapún, When topological derivatives met regularized Gauss–Newton iterations in holographic 3D imaging. J. Comput. Phys. 388, 224–251 (2019)
6. A. Carpio, T. G. Dimiduk, M. L. Rapún and V. Selgas, Noninvasive imaging of three-dimensional micro and nanostructures by topological methods. SIAM J. Imaging Sci. 9, 1324–1354 (2016)
7. A. Carpio, T. G. Dimiduk, V. Selgas and P. Vidal, Optimization methods for in-line holography. SIAM J. Imaging Sci. 11, 923–956 (2018)
8. A. Carpio, S. Iakunin and G. Stadler, Bayesian approach to inverse scattering with topological priors. Inverse Problems 36, Article ID 105001 (2020)
9. A. Carpio and M.-L. Rapún, Solving inhomogeneous inverse problems by topological derivative methods. Inverse Problems 24, Article ID 045014 (2008)
10. D. Colton and R. Kress, Inverse acoustic and electromagnetic scattering theory. Appl. Math. Sci. 93, Springer, Berlin (1992)
11. T. G. Dimiduk and V. N. Manoharan, Bayesian approach to analyzing holograms of colloidal particles. Optics Express 24, 24045–24060 (2016)
12. V. Domínguez, S. Lu and F.-J. Sayas, A fully discrete Calderón calculus for two dimensional time harmonic waves. Int. J. Numer. Anal. Model. 11, 332–345 (2014)
13. M. M. Dunlop and G. Stadler, A gradient-free subspace-adjusting ensemble sampler for infinite-dimensional Bayesian inverse problems, preprint, arXiv:2202.11088v1 (2022)
14. G. R. Feijoo, A new method in inverse scattering based on the topological derivative. Inverse Problems 20, 1819–1840 (2004)
15. R. Fletcher, Modified Marquardt subroutine for non-linear least squares. Technical report 197213 (1971)
16. J. Fung, R. P. Perry, T. G. Dimiduk and V. N. Manoharan, Imaging multiple colloidal particles by fitting electromagnetic scattering solutions to digital holograms. J. Quant. Spectroscopy Radiative Transfer 113, 212–219 (2012)
17. A. Gelman and D. B. Rubin, Inference from iterative simulation using multiple sequences. Statist. Sci. 7, 457–472 (1992)
18. J. Goodman and J. Weare, Ensemble samplers with affine invariance. Commun. Appl. Math. Comput. Sci. 5, 65–80 (2010)
19. P. Grisvard, Elliptic problems in nonsmooth domains. Classics Appl. Math. 69, SIAM, Philadelphia (2011)
20. H. Harbrecht and T. Hohage, Fast methods for three-dimensional inverse obstacle scattering problems. J. Integral Equations Appl. 19, 237–260 (2007)
21. T. Hohage, Logarithmic convergence rates of the iteratively regularized Gauss–Newton method for an inverse potential and an inverse scattering problem. Inverse Problems 13, 1279–1299 (1997)
22. J. Kaipio and E. Somersalo, Statistical and computational inverse problems. Appl. Math. Sci. 160, Springer, New York (2005)
23. S. H. Lee, Y. Roichman, G. R. Yi, S. H. Kim, S. M. Yang, A. van Blaaderen, P. van Oostrum and D. G. Grier, Characterizing and tracking single colloidal particles with video holographic microscopy. Optics Express 15, 18275–18282 (2007)
24. F. Le Louër, A spectrally accurate method for the direct and inverse scattering problems by multiple 3D dielectric obstacles. ANZIAM J. 59, E1–E49 (2018)
25. A. Maier, S. Steidl, V. Christlein, J. Hornegger, Medical imaging systems: An introductory guide. Springer (2018)
26. P. Marquet, B. Rappaz, P. J. Magistretti, E. Cuche, Y. Emery, T. Colomb and C. Depeursinge, Digital holographic microscopy: a noninvasive contrast imaging technique allowing quantitative visualization of living cells with subwavelength axial accuracy. Optics Letters 30, 468–478 (2005)
27. M. Masmoudi, J. Pommier and B. Samet, The topological asymptotic expansion for the Maxwell equations and some applications. Inverse Problems 21, 547–564 (2005)
28. S. W. McCandless and C. R. Jackson, Principles of synthetic aperture radar. In SAR Marine Users Manual, NOAA (2004)
29. S. Meddahi, F.-J. Sayas and V. Selgás, Nonsymmetric coupling of BEM and mixed FEM on polyhedral interfaces. Math. Comp. 80, 43–68 (2011)
30. R. M. Neal, MCMC using Hamiltonian dynamics. In Handbook of Markov Chain Monte Carlo, edited by S. Brooks, A. Gelman, G. L. Jones and X. L. Meng, Chapman & Hall/CRC Handb. Mod. Stat. Methods, CRC Press, Boca Raton, 113–162 (2011)
31. J.-C. Nédélec, Acoustic and electromagnetic equations. Appl. Math. Sci. 144, Springer, New York (2001)
32. J. Sokołowski and A. Żochowski, On the topological derivative in shape optimization. SIAM J. Control Optim. 37, 1251–1272 (1999)
33. A. Tarantola, Inverse problem theory and methods for model parameter estimation. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (2005)
34. J. Tromp, Seismic wavefield imaging of Earth’s interior across scales. Nature Reviews Earth & Environment 1, 40–53 (2020)
35. T. Vincent, Introduction to holography. CRC Press (2012)
36. A. Wang, T. G. Dimiduk, J. Fung, S. Razavi, I. Kretzschmar, K. Chaudhary and V. N. Manoharan, Using the discrete dipole approximation and holographic microscopy to measure rotational dynamics of non-spherical colloidal particles. J. Quant. Spectroscopy Radiative Transfer 146, 499–509 (2014)
37. A. Yevick, M. Hannel and D. G. Grier, Machine-learning approach to holographic particle characterization. Optics Express 22, 26884–26890 (2014)
38. M. A. Yurkin and A. G. Hoekstra, The discrete-dipole-approximation code ADDA: Capabilities and known limitations. J. Quant. Spectroscopy Radiative Transfer 112, 2234–2247 (2011)
http://gmatclub.com/forum/adult-with-add-studying-for-the-gmat-long-post-44216.html
Manager (joined 04 Nov 2006) · posted 08 Apr 2007, 18:29
Cool,
Good post.
I think your attitude is great. You sound very determined and I'm going to bet that if you keep it up, you'll get your 700 on the GMAT.
Forget about pride and all that stuff about showing weakness by finding a tutor. Your goal is to do the best you can on this test, and if that means paying a few bucks for a learning environment that fits you well, then by all means go for it.
Forget about having ADD. I was "diagnosed" with it as well, and that fact never entered my mind while I was studying for the GMAT. Not to reduce the importance of the condition, but you can get past it. In fact, it seems like you already are.
Keep going and let us know how it turns out.
Manager (joined 03 Feb 2007) · posted 08 Apr 2007, 21:24
Coolnapz wrote:
Hi guys,
I've been studying for the GMAT since about mid January, discovered this site along the way, and found that it has been great for tips, but more importantly for realizing that I am not the only one who faces anxiety about the exam. The entries that I've read through in this site helped me realize that there is no shame in seeking help for your weaknesses. I will take the GMAT on May 12th, and again if I have to, but I feel that I've hit a turning point in my preparation that might be able to help others.
First off, I'm an adult with ADD. I actually found out about it 4 years ago during my first attempt to take the GMAT. The diagnosis of 3 separate psychologists was unanimous, and it confirmed something I kind of thought I had all along. Throughout every stage of my academic career, grade school through college, I would get into honors programs based purely on the results of IQ tests, but then proceed to underperform miserably, and consistently linger in the bottom quartile of my classes. There were occasional curve blowing performances on exams, but it was really hit or miss.
For the past several months I have trained relentlessly for this exam, and have cut just about anything not related to work, study, sleep, or working out from my life. I've done all the Kaplan Premier problems, Manhattan GMAT, and am working my way through the OG 11. I study 2 hours during the weekdays, 8-10 hours on both Sat and Sun, and of course take my medication. Last week I took my 4th Kaplan practice CAT and finally showed a 20 point improvement over my first exam, I got a 560. I was dejected, angry, and I could not understand why my brain seemed to freeze on sight of hard problems. After all that study time, practice problems, 3 months of not going out or socializing, and all I can show for it is a 560?
I would read the postings of the guys who would study for one month and then blow the doors off of the GMAT and just think to myself, maybe I really am an idiot. Maybe I'm just deluded about this entire idea of going to B school. After all, if there's a guy who can study 1 month and score a 750, and I'm studying 3 months for a 560, what chance do I really have at keeping up with all the brilliant non ADD people in b-school? Is it smart to give up a high paying job in hedge funds and go into debt, just to be at the bottom quartile again?
After some deep thinking I realized that I am obviously NOT one of those guys who can study for a month and get a 700+, but that doesn't make me a complete idiot.
You see the epiphany that I came to was that everyone has their OWN style of learning. This fact is well documented, and in fact part of the reason I am a top salesperson is because I am very good at assessing the "learning style" of a prospect and adapting to how they specifically process information in order to make the sale. Some people just need a pitchbook and they are ready to buy, while others need several demonstrations and have a ton of questions. The vast majority of salespeople I see, both in my company and outside of it, approach all prospects in the same way, and get frustrated and call the prospect an idiot if they don't buy.
Being cognizant of my condition, I realized after much blundering that just reading the text books, doing the problems, and reading the explanation, HAS NEVER WORKED FOR ME. After deeper reflection and review I realized that literally THE ONLY time I really performed to my potential was during one on one instruction. The only instances in the past in which I would seek one on one instruction were when I was about to fail a class again. Then, after acing an exam, I would stop the one on one instruction and fall back into under performance.
I had been resisting the idea of getting a personal tutor, because it seemed to me that to do so would be an admission of weakness, and by extension an admission of stupidity. But then I realized this is really no different from using a personal trainer at the gym. I am a bit of a fitness enthusiast, and several years ago I hired a top class trainer who taught me how to workout so that I would get lean instead of just bulky like the powerlifting guys. We are all blind to our own faults, our recurring inefficiencies, our failure to follow proper form.
Last week I evaluated several tutors, all of whom scored 780's or better, had MBA's from Ivy league B-schools, and found one whom I felt to be the best match. I sent him 10 problems which I considered to be very difficult, and had been paralyzed by during the practice tests. (Mostly data sufficiency problems with inequalities, and stuff with lots of moving parts) We met last Monday, and he showed me how I should be approaching problems, what to focus on, what not to get distracted by, the mechanics of some of the trick problems that made them solvable within 45 seconds, etc. I shall omit the specifics, but the point is that I learned more in that 1 hour than I would have on my own during 2 weeks. What I needed was a person to directly interact with, to bring life to the solution manual, and most importantly to stay focused on the task at hand.
Sure, the tutor won't be there on exam day and he can't do my homework for me, just like my trainer can't do my pushups for me or be there for me in the ring during a boxing match. However, he can help me get the most out of my preparation and serve as an objective opinion on my state of readiness.
His comments were eerily similar to those of many teachers I have had before, who saw in me raw potential to do extraordinarily well, upper 5%, if I do the work. As a salesperson I can smell bullshit from a mile away, but I did not feel like this was an attempt to get me to take more lessons.
Whether you have ADD or not, if you are putting in 20+ honest hours a week studying for the GMAT and find yourself struggling, I suggest that you reexamine your strengths and weaknesses. By that I don't simply mean, do you keep missing sentence correction problems, but think about your historical academic performance and if necessary get somebody to help you. Maybe you just need a couple of pointers and you'll be on your way or maybe you need fundamental work done across the board. Whatever the case, if you've put in the requisite time and effort, and you aren't seeing results, you're obviously not doing something right. Getting poor results on your GMAT practice tests doesn't make you stupid, but continually repeating the process that gives you the same bad score does.
I took the Kaplan course in the classroom four years ago; it's a good program but it did not fit my learning style. I looked on Craigslist here in NYC and found at least a dozen tutors ranging from $50 to $120 an hour.
If you think that you might have ADD, it couldn't hurt to have yourself checked out; if you get a positive diagnosis it will come as a relief if anything. If you know you're an adult with ADD and are getting owned by the GMAT, you have to realize that this exam attacks some of our core weaknesses: staying focused, being particular about details, and moving from one structured task to another both quickly and seamlessly. In short you are going to struggle, but it's not impossible. There is a version of the GMAT for people with documented learning disabilities, but I myself refuse to take it.
In closing, I hope that this entry provides some insight to other journeyman GMAT takers. Perhaps some of you have faced the same challenges that I have. I will keep you all posted in the weeks to come, and wish you the best in your efforts to prepare.
I have A.D.D., and I'm too lazy to read the whole post. However, you should apply for extra time on the test. I got it. You deserve it. There is no way I can read a passage quickly and compete with people who don't have memory disorders like me and A.D.D. I didn't discover it till I was 20. I'm 24 now. It showed in my ACT scores (33 in math, 17 in reading). Nothing you shouldn't get help for
Manager (joined 14 Feb 2007) · posted 23 Apr 2007, 14:50
Hey.
Everything you vented on the forum mirrors what I went through.
I am in my 4th month of studying. I was loaded up on Concerta and Lexapro. The first 2 months I thought I was just stupid. Then I thought I had ADD. Now, I have no doubt that I have lots of holes in my approach to the GMAT. Therefore, I got myself a tutor who scored in the 99th percentile. Just like a gym goer (I used to be a personal trainer), I was using the wrong form and injuring myself. It takes someone who is an expert to see flaws in your thought process.
No matter what I did, I plateaued between a 600 and a 630.
Overstudying without the correct approach to concepts only tires you out.
Now, I study in less time but at a more efficient level. I am taking another practice test next week and I have no doubt that I am floating around a 680 after 1 month of private tutoring. Also, I stopped taking any kinds of medication. A good tutor should be able to see holes in your thought process. In my case, I was reading from too many different sources on how to approach math problems. This inevitably distorted my understanding of the concepts. My tutor told me to not read from any other sources unless he approves of them. It does not take a 99th percentile scorer to make money selling his own GMAT book. So many books are flawed (an approach may work for easy/med questions but not work for hard questions). You want to read from a good source which gives you approaches that are "bullet proof" across all difficulty levels. Only then, will you have extra time to focus on what the GMAT traps are in answer choices. Gifted people can find clarity after reading from many sources, but I'm just not one of them...SO I learned to "K.I.S.S." (keep it simple stupid). The good thing about this test my tutor tells me is that his approach to problems is "bullet proof" enough to bring a person who has an average IQ to a 720-730 on the GMAT. He claims that IQ only matters when you are trying to break the 740/750 barrier. When you have clarity in the concepts...you will become more accurate. Accuracy will naturally drive speed.
Senior Manager (joined 11 Jun 2006) · posted 29 Apr 2007, 18:12
Currently I am going thru the same process as the OP. I took a prep course last year, while studying 20+ hours a week, for 2 months. I eventually got burnt out and scrapped my GMAT hopes. I didn't even end up taking the exam at all. I felt very frustrated by my lack of progress, especially since I know I can do better. I too was putting in massive amounts of time without seeing progress.
Two weeks ago I decided to give the GMAT another go and started with a private tutor here in NYC. The results have been eye-opening... I've been diagnosed with ADD as well, and this type of learning process is much better suited to my needs. I feel that the time I am putting in now is higher quality and I am learning more... will keep you updated with my progress as well. It's refreshing to know there are others out there like myself.
Intern (joined 26 Nov 2005) · posted 14 May 2007, 22:24
Your story is quite similar to mine except that I am not taking any medication for my ADD (I'm from India & have no medical insurance yet). I am just drinking excess coffee. I am still scoring in the 570s after 4 months of preparation. I want to score in the 700s. Is it possible? What strategies should I apply for both verbal & quant? Please help me as I'm in the same boat...
Manager (joined 03 Feb 2007) · posted 15 May 2007, 08:45
tarunvij21 wrote:
Your story is quite similar to mine except that I am not taking any medication for my ADD (...)
Not having medication for ADD is a problem. Even with extra time, I would have done horribly without Adderall and my anxiety medicine. With the extra time, you should get 45 or higher on math and 25 or higher on verbal. Find some way to get Adderall.
Manager (joined 03 Jan 2004, Tel Aviv, Israel) · posted 30 May 2007, 23:48
Since there is so very much to know for this test, it may be worth your time and money to take the lessons online and do the review with the tutor.
See whether you like the GMAX Online approach by checking out the demo lessons here in the review, and on YouTube. Since you can pause, rewind, and even download the lessons, and since the lessons are taught carefully with a teacher using a whiteboard and teaching directly to you, you may be able to really follow everything being taught. Then, for extra help with the homework problems, the tutor will be invaluable.
Let me know whether this works for you.
Regards, and good luck.
Leanna
Director, GMAX Online
Intern (joined 11 Oct 2006) · posted 20 Jun 2007, 05:47

For those with extra time
For those with extra time, how have you approached taking practice tests when you cannot alter the amount of time given? It's hard to replicate test-taking conditions. Any thoughts?
CEO (joined 17 May 2007) · posted 20 Jun 2007, 06:19
Hi boggin,
Not quite sure what you are asking mate? Are you trying to say that there are not enough actual CATs out there for you to test yourself with your extra time?
If so, I beg to differ. 2 GMATPreps, 25 GMAT Club Challenges, 3-4 Princeton Review tests, 2 PowerPrep tests, 5 MGMAT tests, and 5 McGraw-Hill tests are plenty of practice, no matter how much free time you have on your hands (note you can throw in 4 Kaplan CATs to that list too).
I will throw in an extra 2 cents here, because mental preparation by taking simulated tests was a key part of my preparation strategy.
I simulated test-like conditions by using a book and solving 37 maths problems (20 PS and 17 DS) and 41 verbal problems (keeping a balance between RC, CR and SC), giving myself exactly 75 minutes for each and a 10 minute break in between. In fact it's very easy to simulate a verbal test-like condition using the GMatter software.
It's true that you won't get an accurate GMAT-like score this way, but the idea is to build "mental stamina and toughness" for test day, because on test day whatever can go wrong WILL go wrong. You won't get much sleep in the night because you will be nervous, the first question will throw you off and you will end up taking too much time on it, the essay will mentally drain you, the squeaky erasable writing pad will annoy the hell out of you and the center will be too cold and full of distractions.
Intern (joined 11 Oct 2006) · posted 21 Jun 2007, 10:17
I should have been a little more clear. If you are ADD, or have some learning disability that qualifies you for extra time (typically I think they give time and a half) and you receive that accommodation, is there any practice CAT (meaning on the computer) that doesn't time you to normal testing time, i.e. 75 mins per section?
I'd have a tough time replicating test-like conditions if I'm doing CATs designed for regular time. Just wondering if anyone with extra time accommodations has come up with a way around this for CATs. Thanks!
CEO (joined 17 May 2007) · posted 21 Jun 2007, 17:01
Whoops, sorry for jumping the gun there mate. Yeah it will be difficult. I would just skip the CATs and use a pure book or book + gmatter strategy.
The only downside would be that GMATPrep, which is a CAT that predicts your current level very accurately, might not be useful for ya.
boggin wrote:
If you are ADD, or have some learning disability that qualifies you for extra time and you receive that accommodation, is there any practice CAT that doesn't time you to normal testing time? (...)
Intern (joined 24 Jun 2007) · posted 24 Jun 2007, 22:40
boggin,
CAT Prep offers GMAT software that can simulate the actual options offered to students with qualifying disabilities. You can read about the software's support for ADD / ADHD on the CAT Prep blog or just visit their website for more information.
Cheers!
boggin wrote:
If you are ADD, or have some learning disability that qualifies you for extra time and you receive that accommodation, is there any practice CAT that doesn't time you to normal testing time? (...)
Intern (joined 09 Mar 2007) · posted 29 Jun 2007, 15:34
Great post, it is good to know that there are others out there with ADD trying to crack this exam and struggling with it in the same ways. I just took it for the second time and my score actually went down from 630 to 590. I've come to the realization that I need a tutor. I also live in NYC and would really appreciate any recommendations about tutors, especially from others with ADD.
Intern (joined 08 Sep 2007) · posted 08 Sep 2007, 08:41
Just saw this. I have the same condition, though probably not as pronounced. Even though the original poster did not come back after the test to give an update... I took the test yesterday and got a 710.
Intern (joined 03 Sep 2010) · posted 10 Oct 2010, 10:07
Super awesome. Thanks for the inspiration.
http://clay6.com/qa/9516/a-spherical-snowball-is-melting-in-such-a-way-that-its-volume-is-decreasing

# A spherical snowball is melting in such a way that its volume is decreasing at a rate of $1\ cm^{3}/min$. The rate at which the diameter is decreasing when the diameter is $10\ cm$ is
$(1)\ -\tfrac{1}{50\pi}\ cm/min\qquad(2)\ \tfrac{1}{50\pi}\ cm/min\qquad(3)\ -\tfrac{11}{75\pi}\ cm/min\qquad(4)\ -\tfrac{2}{75\pi}\ cm/min$
Let V be the volume of the spherical snowball and r be the radius at time $t$.
Writing $d=2r$ for the diameter,
$V=\frac{4}{3}\pi r^{3}=\frac{\pi}{6}(2r)^{3}=\frac{\pi}{6}d^{3}.$
Differentiating with respect to $t$,
$\frac{dV}{dt}=\frac{\pi}{6}\times 3d^{2}\times\frac{d}{dt}(d)=\frac{\pi}{2}\,d^{2}\,\frac{d}{dt}(d).$
The volume decreases at $1\ cm^{3}/min$, so when $d=10$,
$1=\frac{\pi}{2}\times 10\times 10\times\Bigl|\frac{d}{dt}(d)\Bigr|,\qquad\Bigl|\frac{d}{dt}(d)\Bigr|=\frac{1}{50\pi}\ cm/min.$
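A quick symbolic check of the arithmetic (using sympy; this only verifies the computation above):

```python
import sympy as sp

t = sp.symbols('t')
d = sp.Function('d')(t)                  # diameter as a function of time
V = sp.pi / 6 * d**3                     # V = (pi/6) d^3

dVdt = sp.diff(V, t)                     # (pi/2) d^2 d'(t)
# Volume decreases at 1 cm^3/min, so dV/dt = -1 when d = 10.
rate = sp.solve(sp.Eq(dVdt, -1), sp.Derivative(d, t))[0]
print(sp.simplify(rate.subs(d, 10)))     # -1/(50*pi): shrinks at 1/(50*pi) cm/min
```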
Hence (2) is the correct answer.
https://itectec.com/superuser/how-to-boot-to-uefi-shell/ | # Windows – How to “Boot to UEFI shell”
Tags: bios, boot, shell, uefi, windows-server-2012-r2
I am trying to follow some instructions to update some firmware:
x64 UEFI environment:
• Boot to UEFI shell
• Run update.nsh
I don't know how to do this. When I boot, do I have a choice to boot to a UEFI shell?
Do I need a separate bootable CD to go to UEFI shell or is this something like safe mode where I press a certain key to go to it?
My OS: Server 2012 R2
It depends on whether your UEFI has a built-in shell. If it does, there should be an option in its settings / boot menu for you to launch it. Some motherboards also provide an option to launch a shell from the EFI System Partition (ESP). You should consult the manual of your motherboard for the path it will look for (the instruction is often vague, though). Usually they are looking for a file named Shell.efi in the ESP root folder.
Another way is to launch it just like you launch any other EFI binary (e.g., a bootloader). Since it's not really possible to register an EFI binary with your UEFI or put the shell binary on your ESP from within Windows, the easiest way is probably to put it as \EFI\Boot\bootx64.efi (also put the update.nsh you need to run and the files it requires under \EFI\Boot\) on a FAT(32)-formatted USB drive (it shouldn't matter whether it's MBR or GPT as long as your UEFI is standard-conforming enough). Then reboot and boot the USB in UEFI mode from your UEFI boot menu.
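For illustration, a minimal sketch of staging such a USB drive from Python (the E: drive letter and the source file locations are assumptions; adjust them to your setup):

import shutil
from pathlib import Path

usb = Path("E:/")                      # assumed: the FAT32-formatted USB drive
boot_dir = usb / "EFI" / "Boot"
boot_dir.mkdir(parents=True, exist_ok=True)

# Rename the shell to the default fallback loader path the firmware boots.
shutil.copy("Shell.efi", boot_dir / "bootx64.efi")
# Ship the update script (and its payload files) alongside it.
shutil.copy("update.nsh", boot_dir / "update.nsh")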
http://ieeexplore.ieee.org/xpl/tocresult.jsp?reload=true&isnumber=5446457&punumber=41
# IEEE Transactions on Industrial Electronics
Publication Year: 2010, Page(s):C1 - 1501
• ### IEEE Transactions on Industrial Electronics publication information
Publication Year: 2010, Page(s): C2
• ### Guest Editorial
Publication Year: 2010, Page(s):1502 - 1504
• ### Effects of LO Phase and Amplitude Imbalances and Phase Noise on $M$-QAM Transceiver Performance
Publication Year: 2010, Page(s):1505 - 1517
Cited by: Papers (19)
This paper presents a rigorous analytical model for analyzing the effects of local oscillator output imperfections such as phase/amplitude imbalances and phase noise on M-ary quadrature amplitude modulation (M-QAM) transceiver performance. A closed-form expression of the error vector magnitude (EVM) and an analytic expression of the symbol error rate (SER) are derived considering a single-...
• ### The Design Method and Performance Analysis of RF Subsampling Frontend for SDR/CR Receivers
Publication Year: 2010, Page(s):1518 - 1525
Cited by: Papers (16)
RF subsampling can be used by radio receivers to directly down convert and digitize RF signals. The goal of software-defined radio (SDR) design is to place analog-to-digital converter (ADC) as near the antenna as possible. Based on this, an RF subsampling frontend (FE) for SDR is designed and verified by a hardware platform. The effects of timing jitter, ADC resolution, and folding noise dominatin...
• ### A Node-to-Node Location Verification Method
Publication Year: 2010, Page(s):1526 - 1537
Cited by: Papers (11)
In this paper, we study the problem of location claim verification of a mobile node in a wireless network. Existing verification methods rely primarily on cooperative approaches, which require the cooperation of several detecting nodes for the verification of a location claim from a target node. These methods all suffer from one or both of the drawbacks: 1) not able to cope with a sparse network s...
• ### Data-Aided Timing Synchronization for FM-DCSK UWB Communication Systems
Publication Year: 2010, Page(s):1538 - 1545
Cited by: Papers (29)
Frequency-modulated differential chaos shift keying (FM-DCSK) ultrawideband (UWB) communication systems convey information by transmitting ultrashort chaotic pulses (in the nanosecond scale). Since such pulses are ultrashort, timing offset may severely degrade the bit error rate (BER) performance. In this paper, a fast data-aided timing synchronization algorithm with low complexity is proposed for...
### A 52-mW 3.1–10.6-GHz Fully Integrated Correlator for IR-UWB Transceivers in 0.18-$\mu$m CMOS
Publication Year: 2010, Page(s):1546 - 1554
Cited by: Papers (7)
Correlators play key roles in impulse radio ultrawideband (IR-UWB) transceivers. Multiplier-based correlator performs correlation-type demodulation in addition to select desired UWB signals by correlating incoming pulses with templates. This paper reports design and implementation of a fully integrated low-power broadband multiplier-based correlator for a 3.1-10.6-GHz fullband IR-UWB receiver in 0...
• ### 1.8 pJ/Pulse Programmable Gaussian Pulse Generator for Full-Band Noncarrier Impulse-UWB Transceivers in 90-nm CMOS
Publication Year: 2010, Page(s):1555 - 1562
Cited by: Papers (13)
This paper presents a single-chip ultralow power programmable Gaussian pulse generator (PG) designed and implemented in the 90-nm CMOS for 3.1-10.6 GHz full-band impulse-radio ultrawideband (UWB) transmitters. Measurement shows that this novel simple two-inverter-based PG achieves the lowest reported power dissipation of merely 1.8 pJ/pulse with a 100-MHz pulse-repeating frequency at 1-V supply, e...
• ### A CMOS Transceiver for a Multistandard 13.56-MHz RFID Reader SoC
Publication Year: 2010, Page(s):1563 - 1572
Cited by: Papers (18) | Patents (2)
A CMOS transceiver for a multistandard 13.56-MHz radio-frequency identification reader system-on-a-chip (SoC) is designed and fabricated. The SoC consists of an RF/analog part for modulation/demodulation and a digital part for controlling the transceiver functionality. Prior to designing the integrated circuit, pre-experiments using discrete components and commercial tags are performed. With the r...
• ### Hardware Implementation of RFID Mutual Authentication Protocol
Publication Year: 2010, Page(s):1573 - 1582
Cited by: Papers (35)
Radio-frequency identification (RFID) is a wireless technology that utilizes radio communication to identify objects with a unique electrical identity. The widespread deployment of RFID technologies may generate new threats to security and user privacy. One of the main drawbacks of RFID technology is the weak authentication systems between a reader and a tag. In general, "weak" authentication sy...
• ### A Low-Cost Printed CP Patch Antenna for RFID Smart Bookshelf in Library
Publication Year: 2010, Page(s):1583 - 1589
Cited by: Papers (29)
This paper presents a small wideband circularly polarized patch antenna printed on the low-cost FR-4 material for radio-frequency-identification smart bookshelves in libraries. The antenna is composed of four top-loaded patches sequentially rotated with a phase difference of 90° and double shorted to the ground. It operates at a center frequency of 0.915 GHz. The impedance bandwidth (SWR < 2) ...
• ### Multiphase Pickups for Large Lateral Tolerance Contactless Power-Transfer Systems
Publication Year: 2010, Page(s):1590 - 1598
Cited by: Papers (111) | Patents (5)
The majority of commercial contactless power-transfer systems used in manufacturing applications can only tolerate limited movement of the power pickup relative to the track to which it is magnetically coupled. This paper describes a new multiphase (quadrature) pickup that significantly improves the tolerance of the power receiver to such relative movement, enabling expanded applications such as c...
• ### On the Stability of Full Adaptive Observer for Induction Motor in Regenerating Mode
Publication Year: 2010, Page(s):1599 - 1608
Cited by: Papers (24)
This paper, which deals with the stability of adaptive observers for induction motor in the regenerating mode, proposes a new approach that consists of describing the error system in state space representation. With this formulation, it is possible to establish a cartography of unstable eigenvalues in the torque/speed plane, thus simplifying the stability analysis. Moreover, a new stability crit...
• ### Effective Dead-Time Compensation Using a Simple Vectorial Disturbance Estimator in PMSM Drives
Publication Year: 2010, Page(s):1609 - 1614
Cited by: Papers (50)
This paper presents an effective online approach for dead-time compensation using a simple vectorial disturbance estimator in permanent-magnet synchronous motor (PMSM) drives. The proposed estimator can calculate the disturbance voltages, which are induced by dead time, by the use of simple vector operations, i.e., the inner and outer products of flux linkage increments and a unit back electromoti...
• ### Integrated Magnetic Self-Driven ZVS Nonisolated Full-Bridge Converter
Publication Year: 2010, Page(s):1615 - 1623
Cited by: Papers (15)
This paper proposes a high-efficiency high-power-density voltage regulator (VR). An integrated magnetic self-driven full-bridge topology is employed as the main circuit. The proposed VR runs at a 700-kHz switching frequency for a 1 Unit Height (1U) form factor. A novel synchronous rectifier drive method is used to achieve high efficiency. The direct current resistance (DCR) current-sensing method ...
• ### A Review of Switch-Mode Sustain Drivers With Resonant Networks for Plasma Display Panels
Publication Year: 2010, Page(s):1624 - 1634
Cited by: Papers (12)
In the last 30 years, industrial and academic research work has matured plasma display panels (PDPs) to the successful product level for commercial flat-screen television sets. Along with the development of panel manufacturing technology, recent advances in the development of electronic circuitry drivers have paved the way for achieving better performance, higher efficiency, and lower cost. A subs...
• ### Single-Side Sustaining Technique for Plasma Display Panel Using Dual-Resonant Method
Publication Year: 2010, Page(s):1635 - 1643
Cited by: Papers (3)
A new plasma display panel single-side sustaining driver with dual-resonant technique is proposed. Since this circuit enables one to keep the device voltage stresses the same as those of conventional circuit that generates alternating sustaining pulses, it is helpful to reduce driver cost in single-side sustaining driver that suffers from high-voltage stresses. To integrate the sustaining function...
• ### Zero-Voltage and Zero-Current-Switching PWM Combined Three-Level DC/DC Converter
Publication Year: 2010, Page(s):1644 - 1654
Cited by: Papers (77)
This paper proposes a zero-voltage and zero-current-switching (ZVZCS) PWM combined three-level (TL) dc/dc converter, which is a combination of a ZVZCS PWM TL converter with a ZVZCS PWM full-bridge converter. The proposed converter has the following advantages: all power switches suffer only half of the input voltage; the voltage across the output filter is very close to the output voltage, which c...
• ### Electric Dynamic Modeling of HID Lamps for Electronic Ballast Design
Publication Year: 2010, Page(s):1655 - 1662
Cited by: Papers (15)
This paper describes a nonlinear model of high-intensity discharge (HID) lamps based on electrical variables. The proposal, oriented to the engineering area, has a special application for the design of electronic ballast. Parameters are obtained from straightforward measurement of electrical variables as power, current, and voltage in the lamp. The lamp resistance is obtained as a function of elec...
• ### Magnetic Component Model for Planar Structures Based on Transmission Lines
Publication Year: 2010, Page(s):1663 - 1669
Cited by: Papers (13)
Magnetic component models are quite complex if they take into consideration the variation of the field distribution in a three-dimensional (3-D) space. However, if the field distribution can be assumed to be one-dimensional (1-D), the magnetic component models can be drastically simplified because it is feasible to obtain accurate analytical expressions based on the solution of the Maxwell equatio...
• ### Digital Average Current-Mode Control of PWM DC–DC Converters Without Current Sensors
Publication Year: 2010, Page(s):1670 - 1677
Cited by: Papers (58)
This paper introduces a digital average current-mode control technique for pulsewidth modulation dc-dc converters which only rely on voltage sampling. The proposed approach is to estimate inductor current using first-order discrete-time low-pass filter; therefore, the controller can calculate average inductor current in every switching cycle. As a novel technique of predictive average current cont...
• ### Soft-Switching Converter With HF Transformer for Grid-Connected Photovoltaic Systems
Publication Year: 2010, Page(s):1678 - 1686
Cited by: Papers (57)
In this paper, the design, realization, and performance evaluation of a single-phase 3-kW dc/ac power converter, using an active-bridge dc/dc converter and a full-bridge dc/ac, are introduced, presenting a novel solution on the industrial scenario for the considered application. Control algorithms, including the maximum power point tracking, paralleling to the grid, and converter switching signals...
• ### Multifunctional Intelligent Autonomous Parking Controllers for Carlike Mobile Robots
Publication Year: 2010, Page(s):1687 - 1700
Cited by: Papers (35)
An increasing number of carlike mobile robot (CLMR) studies have addressed the issues of autonomous parking and obstacle avoidance. An autonomous parking controller can provide convenience to a novice driver. However, if the controller is not designed adequately, it may endanger the car and the driver. Therefore, this paper presents a novel multifunctional intelligent autonomous parking controller...
• ### ZMP-Based Online Jumping Pattern Generation for a One-Legged Robot
Publication Year: 2010, Page(s):1701 - 1709
Cited by: Papers (18)
This paper is aimed at presenting a method to generate online jumping patterns, which can be applied to one-legged jumping robots and optionally to humanoid robots. Our proposed method is based on ensuring the overall dynamic balance through the complete jumping cycle. To be able to reach this goal, we discretized the zero moment point equation in polar coordinates so that we are able to include a...
## Aims & Scope
IEEE Transactions on Industrial Electronics encompasses the applications of electronics, controls and communications, instrumentation and computational intelligence for the enhancement of industrial and manufacturing systems and processes.
## Meet Our Editors
Editor-in-Chief
Leopoldo Garcia Franquelo
Escuela Superior de Ingenieros
https://proofwiki.org/wiki/Maximal_Ideal_WRT_Filter_Complement_is_Prime_in_Distributive_Lattice/Lemma_3 | # Maximal Ideal WRT Filter Complement is Prime in Distributive Lattice/Lemma 3
## Lemma for Maximal Ideal WRT Filter Complement is Prime in Distributive Lattice
Let $\struct {L, \vee, \wedge, \preceq}$ be a distributive lattice.
Let $F$ be a filter in $L$.
Let $M$ be an ideal in $L$ which is disjoint from $F$ such that:
no ideal in $L$ larger than $M$ is disjoint from $F$.
Let $a \in L$ be such that $a \notin M$.
Let $N = \set {x \in L: \exists m \in M: x \preceq m \vee a}$.
$M \subsetneq N$
## Proof
Let $m \in M$.
Then:
$m \preceq \paren {m \vee a}$
so $m \in N$.
Thus $M \subseteq N$.
We have:
$a \preceq \paren {m \vee a}$
so:
$a \in N$
but:
$a \notin M$
Thus:
$M \subsetneq N$
$\blacksquare$
https://biosignalsplux.com/learn/notebooks/Categories/Load/open_txt_rev.html | Load acquired data from .txt file
Tags: open, load, txt
A text file is one of the simplest means of storing information, and it is one of the formats output by OpenSignals.
In this Jupyter Notebook we explain how to load/transpose the data inside a .txt file into a Python list, a step that precedes all processing operations.
1 - Importation of the needed packages
In [1]:
# Package used for loading data from the input text file
from numpy import loadtxt
# biosignalsnotebooks python package
import biosignalsnotebooks as bsnb
2 - Access to electrophysiological signals list
2.1 - Enter biosignalsplux url
2.2 - Navigate through biosignalsplux main page menu and enter in "Signal Samples" page
2.3 - Interactive buttons for accessing each signal sample file
2.4 - File url copy (right-click of the mouse in the desired signal file icon)
In [2]:
copy_link = 'https://www.biosignalsplux.com/downloads/samples/sensor_samples/biosignalsplux_Electrodermal_Activity_EDA_Sample.txt'
3 - Download of the signal sample file
In [3]:
# File download.
bsnb.download(copy_link, out="download_file_name.txt")
4 - Transposition of data to a Python list
In [4]:
data = loadtxt("download_file_name.txt")
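Note that the OpenSignals header lines start with '#', which is numpy's default comment character, so loadtxt skips them automatically. Making that explicit, the equivalent call would be:

# '#' marks the header lines as comments to be skipped while parsing
data = loadtxt("download_file_name.txt", comments="#")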
5 - Identification of acquisition sampling rate in the file header ("sampling rate" key)
In [5]:
# Embedding of the signal file (header preview)
from IPython.display import IFrame
IFrame(src="//biosignalsplux.com/images/load/open_txt/biosignalsplux_Blood_Volume_Pulse_(BVP)_Sample.txt", width="100%", height="350")
Out[5]:
In [6]:
sampling_rate = 1000
6 - Generation of time axis for signal plotting
In [7]:
time = bsnb.generate_time(data, sampling_rate)
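Presumably, bsnb.generate_time builds one timestamp per sample, spaced by 1 / sampling_rate; a rough equivalent (our sketch, not the package's implementation) would be:

from numpy import arange

# One timestamp per row of data, in seconds: 0, 1/fs, 2/fs, ...
time = arange(len(data)) / sampling_rate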
7 - Final Output of the loaded data
In [8]:
print (data)
[[0.0000e+00 0.0000e+00 4.6888e+04]
[1.0000e+00 0.0000e+00 4.6958e+04]
[2.0000e+00 0.0000e+00 4.6872e+04]
...
[2.8170e+03 0.0000e+00 3.6728e+04]
[2.8180e+03 0.0000e+00 3.6773e+04]
[2.8190e+03 0.0000e+00 3.6737e+04]]
Each line of the list defines a sample acquired at a specific time instant, and each column can be the sample number (nSeq), digital input (DI), or a sample value (CH1), as described in the file header below.
In [9]:
# Embedding of the signal file (header preview)
from IPython.display import IFrame
IFrame(src="//biosignalsplux.com/images/load/open_txt/biosignalsplux_Blood_Volume_Pulse_(BVP)_Sample.txt", width="100%", height="350")
Out[9]:
The samples of the signal under analysis are stored at the third entry of each list element (index 2).
In [10]:
channel_column = 2
8 - Graphical representation of the signal (raw data)
In [11]:
bsnb.plot(time, data[:, channel_column])
This procedure can be done automatically by the load function of the biosignalsnotebooks package.
Text files are very popular and, as the name suggests, almost all types of content can be stored in them, provided they can be translated into a text format.
Numpy's loadtxt function is very simple and efficient, so it can be used even for text files not produced by OpenSignals.
We hope that you have enjoyed this guide. biosignalsnotebooks is an environment in continuous expansion, so don't stop your journey and learn more with the remaining Notebooks!
In [12]:
from biosignalsnotebooks.__notebook_support__ import css_style_apply
css_style_apply()
.................... CSS Style Applied to Jupyter Notebook .........................
Out[12]:
http://www.varsitytutors.com/common_core_8th_grade_math-help/understand-the-difference-between-rational-and-irrational-numbers-ccss-math-content-8-ns-a-1 | # Common Core: 8th Grade Math : Understand the Difference Between Rational and Irrational Numbers: CCSS.Math.Content.8.NS.A.1
## Example Questions
### Example Question #1 : Irrational Numbers
Which of the following expressions is irrational?
Explanation:
An irrational number is defined as any number that cannot be expressed as a simple fraction or does not have terminating or repeating decimals. Of the answer choices given, the only number that cannot be expressed as a simple fraction or with repeating or terminating decimals is .
### Example Question #2 : Irrational Numbers
Which of the following is an irrational number?
Explanation:
An irrational number is any number that can not be expressed as a ratio of integers, i.e. a fraction. Therefore, the only irrational number listed is .
### Example Question #1 : Understand The Difference Between Rational And Irrational Numbers: Ccss.Math.Content.8.Ns.A.1
Which of these expressions is not irrational?
Explanation:
The square root of an integer is either an irrational number or an integer. The latter is the case if and only if there is an integer which, when multiplied by itself, or squared, yields the number inside the symbol (the radicand) as the product. Of , only 81 is the square of an integer (9).
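This perfect-square test is easy to automate; a small Python sketch (the sample values other than 81 are illustrative, not the original answer choices):

from math import isqrt

def sqrt_is_integer(n):
    # sqrt(n) is rational (in fact an integer) exactly when n is a perfect square
    r = isqrt(n)
    return r * r == n

print([k for k in (24, 48, 81, 96) if sqrt_is_integer(k)])  # [81]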
### Example Question #5 : Irrational Numbers
Which of the following represents an irrational number?
All of the answers are irrational
Explanation:
Pi is the only irrational number listed. Irrational numbers are in the form of infinite non-repeating decimals.
### Example Question #5 : Irrational Numbers
Which of the following is not an irrational number?
Explanation:
A root of an integer is one of two things, an integer or an irrational number. By testing all five on a calculator, only comes up an exact integer - 5. This is the correct choice.
### Example Question #8 : Irrational Numbers
Which of the following is an irrational number?
Explanation:
An irrational number is any number that cannot be written as a fraction of whole numbers. The number pi and square roots of non-perfect squares are examples of irrational numbers.
can be written as the fraction . The term is a whole number. The square root of is , also a rational number. , however, is not a perfect square, and its square root, therefore, is irrational.
### Example Question #1 : The Number System
Of the following, which is a rational number?
Explanation:
A rational number is any number that can be expressed as a fraction/ratio, with both the numerator and denominator being integers. The one limitation to this definition is that the denominator cannot be equal to 0.
Using the above definition, we see , and (which is ) cannot be expressed as fractions. These are non-terminating numbers that are not repeating, meaning the decimal has no pattern and constantly changes. When a decimal is non-terminating and constantly changes, it cannot be expressed as a fraction.
is the correct answer because , which can be expressed as , fulfilling our above definition of a rational number.
### Example Question #2 : Understand The Difference Between Rational And Irrational Numbers: Ccss.Math.Content.8.Ns.A.1
Of the following, which is an irrational number?
Explanation:
The definition of an irrational number is a number that cannot be expressed as a simple fraction; in other words, a number that is not rational.
Using the above definition, we see that is already expressed as a simple fraction.
any number and
. All of these options can be expressed as simple fractions, making them all rational numbers, and the incorrect answers.
cannot be expressed as a simple fraction and is equal to a non-terminating, non-repeating (ever-changing) decimal, beginning with
This is an irrational number and our correct answer.
### Example Question #3 : Understand The Difference Between Rational And Irrational Numbers: Ccss.Math.Content.8.Ns.A.1
Which of the following is NOT an irrational number?
Explanation:
Rational numbers are those which can be written as a ratio of two integers, or simply, as a fraction.
The solution of is , which can be written as . Each of the other answers would have a solution with an infinite number of decimal points, and therefore cannot be written as a simple ratio. They are irrational numbers.
### Example Question #4 : Understand The Difference Between Rational And Irrational Numbers: Ccss.Math.Content.8.Ns.A.1
Which of the following numbers is considered to be an irrational number?
Explanation:
An irrational number cannot be represented as the quotient of two integers.
Irrational numbers do not terminate and are not repeating numbers.
can be reduced to , therefore it is an integer.
by definition is a quotient of two integers and thus it is not an irrational number.
can be rewritten as and by definition is a quotient of two integers and thus it is not an irrational number.
is a terminating decimal and therefore can be written as a fraction. Thus it is not an irrational number.
is the number for and does not terminate, therefore it is irrational.
https://direct.mit.edu/neco/article/32/8/1499/95626/Stochastic-Multichannel-Ranking-with-Brain?searchresult=1 | ## Abstract
A driver's cognitive state of mental fatigue significantly affects his or her driving performance and, more important, public safety. Previous studies have leveraged reaction time (RT) as the metric for mental fatigue and aim at estimating the exact value of RT using electroencephalogram (EEG) signals within a regression model. However, due to the easily corrupted and also nonsmooth properties of RTs during data collection, methods focusing on predicting the exact value of a noisy measurement such as RT generally suffer from poor generalization performance. Considering that human RT is the reflection of brain dynamics preference (BDP) rather than a single regression output of EEG signals, we propose a novel channel-reliability-aware ranking (CArank) model for the multichannel ranking problem. CArank learns from BDPs using EEG data robustly and aims at preserving the ordering corresponding to RTs. In particular, we introduce a transition matrix to characterize the reliability of each channel used in the EEG data, which helps in learning with BDPs only from informative EEG channels. To handle large-scale EEG signals, we propose a stochastic generalized expectation-maximization (SGEM) algorithm to update CArank in an online fashion. Comprehensive empirical analysis on EEG signals from 40 participants shows that our CArank achieves substantial improvements in reliability while simultaneously detecting noisy or less informative EEG channels.
## 1 Introduction
According to the Sleep Health Foundation report by Adams et al. (2017), mental fatigue is a major cause in 33% to 45% of all road accidents. In general, mental fatigue (Boksem & Tops, 2008) refers to the inability to maintain optimal cognitive performance in a task with a high demand for cognitive activity. Such inability in the context of driving can lead to accidents with severe consequences (Adams et al., 2017). Individuals may find themselves in a mentally fatigued state because of lack of sleep, continuous driving for an extended period, monotonous driving late at night or before dawn, or driving while under the influence of sleeping drugs or with sleep disorders (Ji, Zhu, & Lan, 2004; Ting, Hwang, Doong, & Jeng, 2008). (See Zhang, Yao, Wang, Monaghan, & Mcalpine, 2019, for recent advances and references in brain dynamic analysis.)
In response to these critical issues, several methods (Cook, O'Connor, Lange, & Steffener, 2007; Blankertz et al., 2009; Fazli et al., 2009; Wascher et al., 2014; Tian, Wang, Dong, Pei, & Chen, 2018; Kaji, Iizuka, & Sugiyama, 2019) have been proposed to estimate and predict mental fatigue based on electroencephalography (EEG) and reaction time (RT) (see Figure 1a). Some of these methods, however, performed considerably well for some participants but failed for others due to a lack of generalization. One of the challenges behind such poor generalization is determining how to use RT effectively. RT is easily affected by instrumental error, wandering attention, or other task-unrelated factors. A previous study (Wei et al., 2015) tried to overcome this problem by adopting different techniques to smooth RTs but still failed to make it work for all participants. Note that human RT is usually the result of preference (Izuma & Adolphs, 2013) in brain dynamics during the task rather than just a single value. Such preferences can be affected by changing levels of attention (Möckel et al., 2015), like a wandering mind (Lin et al., 2016) or a lower level of attention (Chuang et al., 2018). Therefore, the relationship between EEG signals and RTs, including extreme or abnormal RTs, should be modeled in a way that reflects human brain dynamics preferences (BDPs).
Figure 1:
(a) Regression model with EEG signals. (b) Proposed channel-reliability aware ranking (CArank) model with brain dynamics preferences.
Another important problem lies in the heterogeneous channels extracted from different brain regions, which are normally responsible for different functionalities. There was an attempt to choose different brain regions (Wascher et al., 2014) during the evaluation of mental fatigue, but these regions of the brain are not necessarily the same for all participants (Gramann, Müller, Schönebeck, & Debus, 2006). For example, Wascher et al. (2014) heuristically used frontal theta to represent different levels of mental fatigue for all participants. In such a case, the reliability of the learning model would inevitably degrade because of possibly noisy or less informative channels, on different brain regions, chosen by the method. Some previous work (de Naurois, Bourdin, Stratulat, Diaz, & Vercher, 2017) attempted to solve this issue by using artificial neural network models but still failed to provide convincing results. This previous work impels us to pursue a purely data-driven approach to predict mental fatigue while getting rid of the low versatility caused by various heuristic tricks.
To overcome these problems, we first formulate the task of monitoring mental fatigue as a multichannel ranking problem and solve it with our proposed channel-reliability-aware ranking (CArank) model. In particular, CArank can learn from brain dynamics preferences (BDPs) using EEG data robustly, while effectively preserving the exact ordering of RTs (see Figure 1b). This approach corrects the defects of previous models, whose performance suffered from noisy and extreme RTs. Furthermore, our model uses a transition matrix to identify the high-confidence sources among heterogeneous EEG channels, which contributes substantially to task performance. In order to handle large-scale EEG signals and obtain higher generalization, we propose a stochastic generalized expectation-maximization (SGEM) algorithm. More precisely, we make the following key contributions:
• We formulate the task of monitoring mental fatigue as a multichannel ranking problem and tackle it with the CArank model. CArank is a purely data-driven approach to detect mental fatigue using informative channels only.
• We propose a stochastic generalized expectation-maximization algorithm for CArank, which extends CArank to large-scale applications.
• We conduct empirical experiments on EEG signals from 40 participants to demonstrate the superior reliability of CArank in terms of mental fatigue monitoring.
This letter is organized as follows. Section 2 introduces the topic of mental fatigue monitoring and motivates the practice of using brain dynamics preferences. In section 3, we address the multichannel ranking problem and introduce our channel-reliability aware ranking to solve it. Section 4 describes a stochastic generalized expectation-maximization algorithm. Section 5 demonstrates the reliability of the proposed CArank with EEG signals from 40 participants. Section 6 envisions the future work, and section 7 concludes.
## 2 Background
In this section, we introduce some preliminary information about mental fatigue monitoring and then discuss our motivation for learning from brain dynamics preferences.
Reaction time is an intuitive indicator used to assess human mental fatigue. Therefore, a common practice for monitoring mental fatigue is to find a robust way of predicting a human's reaction time to an emergent situation from previously recorded EEG signals (Lal, Craig, Boord, Kirkup, & Nguyen, 2003; Kohlmorgen et al., 2007; Jap, Lal, Fischer, & Bekiaris, 2009).
### 2.1 Overfitting of the Regression Model
A natural way to forecast the RT with EEG signals is to formulate it as a regression task (see Figure 2), namely, finding a (non)linear mapping (e.g., neural networks, SVR) from the EEG signals $x$ to the corresponding RT. However, due to the existence of extreme values in RTs during data collection (Wei et al., 2015; Huang, Pal, Chuang, & Lin, 2015), the scale of the regression loss with regard to various RTs varies significantly. Therefore, the regression loss, without discriminating the peculiarity of the RTs, would be dominated by the few extreme RTs while omitting normal RTs. This then leads to the overfitting of the regression model on the training data, with poor generalization performance on the test data (see Figures 2 and 5 and Table 1).
Figure 2:
Overfitting of the two-layer regression model for mental fatigue monitoring. EEG signals from multiple channels are simply concatenated into a long feature vector, and the corresponding regression model is trained using this feature vector. The difference between the ground truth and the prediction is calculated with the root mean squared error. We collect the results only from the first participant for a showcase.
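To see the mechanism, consider a minimal synthetic sketch (our illustration, not the paper's EEG data) of how a handful of extreme RTs dominate a squared-error loss:

import numpy as np

rng = np.random.default_rng(0)
rt = rng.normal(0.6, 0.1, size=100)     # typical reaction times (seconds)
rt[:3] = [3.0, 4.5, 5.0]                # a few extreme/abnormal RTs
pred = np.full_like(rt, rt.mean())      # even a constant predictor shows the effect
sq_err = (rt - pred) ** 2
print(sq_err[:3].sum() / sq_err.sum())  # ~0.9: three samples carry most of the loss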
Table 1:
Test Accuracy (in %).

| Method | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | P12 | P13 | P14 | P15 | P16 | P17 | P18 | P19 | P20 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SVR | 71.74 | 78.92 | 85.79 | 69.76 | 84.17 | 66.61 | 76.38 | 80.41 | 71.10 | 58.52 | 77.12 | 87.01 | 73.92 | 83.79 | 69.10 | 73.65 | 63.77 | 62.49 | 72.64 | 68.66 |
| LR | 69.80 | 70.77 | 85.63 | 69.01 | 63.77 | 53.62 | 79.69 | 55.87 | 74.15 | 21.32 | 77.55 | 87.44 | 74.17 | 70.79 | 41.03 | 53.11 | 58.15 | 59.93 | 41.88 | 66.20 |
| Regression (C) | 71.63 | 79.21 | 80.22 | 72.39 | 83.65 | 68.38 | 60.31 | 54.99 | 77.98 | 59.01 | 82.72 | 89.80 | 79.56 | 85.45 | 68.60 | 65.88 | 54.30 | 50.58 | 68.65 | 61.80 |
| Regression (A) | 71.71 | 72.97 | 79.81 | 70.90 | 82.80 | 57.42 | 61.88 | 60.96 | 66.38 | 52.96 | 79.37 | 73.87 | 67.70 | 80.54 | 66.03 | 54.47 | 51.01 | 65.07 | 62.33 | 54.80 |
| Classification (C) | 76.85 | 82.48 | 82.40 | 74.77 | 83.12 | 65.69 | 76.12 | 70.84 | 83.02 | 63.74 | 76.41 | 85.08 | 77.74 | 88.03 | 69.09 | 71.80 | 58.44 | 77.31 | 80.85 | 63.56 |
| Classification (A) | 79.97 | 77.61 | 79.87 | 68.69 | 82.55 | 63.86 | 49.85 | 51.47 | 51.78 | 53.03 | 75.79 | 79.69 | 66.40 | 89.39 | 68.10 | 53.07 | 50.00 | 52.81 | 61.19 | 52.02 |
| CArank | 82.29 | 80.97 | 83.78 | 77.50 | 87.42 | 76.62 | 82.34 | 79.16 | 91.40 | 78.25 | 81.74 | 84.17 | 83.23 | 90.53 | 76.66 | 80.40 | 88.69 | 81.13 | 80.42 | 78.35 |

| Method | P21 | P22 | P23 | P24 | P25 | P26 | P27 | P28 | P29 | P30 | P31 | P32 | P33 | P34 | P35 | P36 | P37 | P38 | P39 | P40 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SVR | 72.71 | 73.43 | 78.98 | 67.00 | 76.72 | 72.56 | 75.94 | 85.95 | 81.63 | 82.19 | 67.78 | 87.73 | 71.21 | 76.69 | 80.61 | 88.76 | 77.47 | 74.22 | 63.52 | 46.24 |
| LR | 52.97 | 30.10 | 78.23 | 40.02 | 60.25 | 73.15 | 40.67 | 53.23 | 46.66 | 79.32 | 73.71 | 53.29 | 48.55 | 48.36 | 75.71 | 88.53 | 78.06 | 62.05 | 60.28 | 43.53 |
| Regression (C) | 69.84 | 50.58 | 80.73 | 56.85 | 78.72 | 67.76 | 65.06 | 84.75 | 79.59 | 82.59 | 63.41 | 66.46 | 56.78 | 61.81 | 66.70 | 87.21 | 81.98 | 57.71 | 84.41 | 67.48 |
| Regression (A) | 53.44 | 58.27 | 78.29 | 54.25 | 77.46 | 53.31 | 51.33 | 77.73 | 69.92 | 77.06 | 58.46 | 64.09 | 53.09 | 59.69 | 72.45 | 85.64 | 73.21 | 62.83 | 50.55 | 46.35 |
| Classification (C) | 68.22 | 79.82 | 84.36 | 68.10 | 84.28 | 69.60 | 77.09 | 86.46 | 82.11 | 86.85 | 74.22 | 85.05 | 60.49 | 71.58 | 73.03 | 90.40 | 83.51 | 75.62 | 80.37 | 69.07 |
| Classification (A) | 49.86 | 74.65 | 72.45 | 59.46 | 75.35 | 49.80 | 52.20 | 76.89 | 51.88 | 73.62 | 59.50 | 61.30 | 50.00 | 53.07 | 60.46 | 90.24 | 72.15 | 60.79 | 78.30 | 65.76 |
| CArank | 72.83 | 85.33 | 82.70 | 89.35 | 84.57 | 76.52 | 85.02 | 83.58 | 86.56 | 85.64 | 92.74 | 85.74 | 79.24 | 84.77 | 90.53 | 90.96 | 86.05 | 77.12 | 93.48 | 75.56 |
Notes: Higher is better. In the published table, shading indicates the best results.
This creates a dilemma: we need a reliable learning model to predict RT from the complex EEG signals (indeed, that is exactly our target), but the model should not excessively approximate the exact value of RT, especially the extreme values. The problem, then, is how to find an efficient way to learn from the noisy or nonsmooth RT when the exact value is not necessary.
As shown in Figure 2, extreme or abnormal RTs widely exist during data collection. The issue of overfitting arises in the regression model since the regression loss excessively forces the learning model to fit the extreme RTs yet underrates the regular RTs. Although various regularization methods (e.g., $L_2$ norm, $L_1$ norm, and Laplace priors) could alleviate the overfitting of the learning model (Hastie, Tibshirani, & Friedman, 2009; Zhang et al., 2015; Jin, Zhou, Gao, & Zhang, 2018), they cannot solve the overfitting issue if the regression loss is still adopted. The same is true for other heuristic approaches (e.g., early stopping) used for alleviating overfitting.
Wei et al. (2015) tried to overcome this problem by adopting different techniques to smooth RTs but still failed to make it work for all participants. Meanwhile, the performance varies significantly with different choices of the mapping function. The predefined smoothing techniques would excessively weigh down, or simply clip, the extreme or abnormal RTs in the MSE loss, which then fails to reveal the real relationship between the EEG signals and RTs, especially the extreme or abnormal RTs (Möckel, Beste, & Wascher, 2015; Lin et al., 2016; Chuang et al., 2018).
For the sake of comparison, we apply the $L_2$ norm regularization to all baselines in this letter.
### 2.2 Consistency of the Ordinal Regression Model
Instead of using regression, we propose to transform the problem into an ordinal regression problem. In particular, the RTs are defined in the totally ordered space $\mathbb{R}$. This space carries an order structure, which is preserved by the pairwise comparisons between the RTs. The pairwise comparisons indeed preserve the whole relative structure information between the RTs while ignoring their absolute numerical information. Therefore, predicting the orderings of the pairwise comparisons may be regarded as a relaxed alternative to the previous regression model (see Figure 3).
Figure 3:
Consistency of the two-layer ordinal regression model using brain dynamics preferences. EEG signals from multiple channels are concatenated into a long feature vector, and the corresponding ordinal regression model is trained using this feature vector. In-degree sequences for the ground truth and the prediction are calculated. The root-mean-squared error (RMSE) was also measured between the indegree sequences of the ground truth and the prediction. We collected the results only from the first participant for a showcase.
We showcase our motivation using a naive ordinal regression model for mental fatigue monitoring and present the results in Figure 3: even the naive ordinal regression model could capture some meaningful results compared to the regression model. In particular, the relative structure information between the RTs is somewhat preserved: the boundary between large RTs and small RTs is clear. Meanwhile, large RTs could serve as an indicator for monitoring mental fatigue.
#### 2.2.1 Comparison between Ordinal Regression and Regression
The difference between ordinal regression and regression lies in the objective they aim to minimize. Ordinal regression aims to preserve the whole ordering of RT, while regression aims to excessively approximate the exact value of RT. Therefore, ordinal regression is less sensitive to outliers, that is, the scale of RTs in mental fatigue monitoring.
#### 2.2.2 Reliability Issues Caused by Heterogeneous Channels
A naive ordinal regression method still suffers from overfitting, mainly because of the simple concatenation of the EEG signals. Since the EEG signals are from heterogeneous channels, if we simply concatenate the EEG signals without discriminating the reliability of each channel, the model's generalization would be degraded.
Remark 1
(Deficiencies of $L_1$ and $L_{2,1}$ Regularization for Eliminating Noisy Channels). In order to eliminate the noisy channels, the weights of the features should be set to zero for each channel as a whole. However, $L_1$ regularization can push only some, rather than all, of the weights of one channel to zero. $L_{2,1}$ regularization first takes the $L_2$ norm over the weights of each channel and then takes the $L_1$ norm of all the $L_2$ norms. The $L_{2,1}$ regularization could be used to eliminate the noisy channels. However, $L_{2,1}$ regularization suffers from the following deficiencies: (1) it is difficult to extend to a nonlinear model, such as deep neural networks, and (2) it relies heavily on tuning the balance factor for the $L_{2,1}$ regularization term.
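For concreteness, a minimal sketch of the $L_{2,1}$ penalty described in the remark (our notation; W stacks one weight block per channel):

import numpy as np

def l21_penalty(W):
    # W: shape (n_channels, features_per_channel);
    # L2 norm within each channel block, then an L1 norm (a sum) across channels.
    return np.linalg.norm(W, axis=1).sum()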
We next explore data-driven methods that can automatically weigh up reliable channels and down unreliable channels.
## 3 Model and Methodology
In this section, we formulate the mental fatigue monitoring task as a multichannel ranking problem. Furthermore, we extend the ordinal classification model for brain dynamics preferences and introduce a transition matrix to evaluate the channel reliability. Then, we propose the CArank model to tackle the multichannel ranking problem.
Note that we used the term preference intentionally to show that brain dynamics keep changing with regard to human behaviors, and it happens because the human brain prefers one decision over others (Ekman & Davidson, 1994; Izuma & Adolphs, 2013; Franks, 2019). Therefore, we prefer preference to classification. We then refer to the pairwise comparison between brain dynamics as the brain dynamics preference (BDP).
### 3.1 Multichannel Ranking
Our aim is to correctly preserve the whole orderings between the pairwise RT comparisons (see Figure 1b). In particular, the collection of the pairwise RT comparisons $D$, which we call preference propositions, can be constructed as follows,
$D = \{(T_i, T_j) \mid T_i, T_j \in T,\ i \neq j\},$
(3.1)
where $T$ is the set of reaction times. Note that the ground truth of each pairwise RT comparison is accessible since RTs are known. Since the connection between RT and BDP is based on human intuition, we call the ground truth of the pairwise RT comparison a preference proposition with regard to BDP.
For brevity, we represent the preference propositions as
$D = \{\rho_m : (T_{m,1}, T_{m,2})\}_{m=1}^{M},$
(3.2)
where $M$ denotes the number of preference propositions and $\rho_m (\in D)$ denotes the $m$th preference proposition. There are usually two types of preference propositions: (1) $\rho_m = 1/{-1}$, in which the ordering between the RTs is significant, that is, $T_{m,1} \geq T_{m,2}$ or $T_{m,1} \leq T_{m,2}$, and (2) $\rho_m = 0$, in which the RTs in each comparison are comparable, that is, $T_{m,1} \approx T_{m,2}$.
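A minimal sketch of constructing the two types of propositions from a list of RTs (the tolerance eps deciding the comparable/tie state is our assumption; the paper's criterion for $T_{m,1} \approx T_{m,2}$ is not specified here):

def build_propositions(rts, eps=0.05):
    D = []
    for i, t1 in enumerate(rts):
        for j, t2 in enumerate(rts):
            if i == j:
                continue
            # rho = 0 for comparable RTs; otherwise +1 / -1 for a significant ordering
            rho = 0 if abs(t1 - t2) <= eps else (1 if t1 > t2 else -1)
            D.append((rho, (t1, t2)))
    return D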
Then the BDP could be constructed for each proposition using the corresponding pairwise EEG signals recorded from each channel, respectively:
$\text{preference proposition } \rho_m : (T_{m,1}, T_{m,2}) \Longleftrightarrow \text{BDP } (x_{n,m}^1, x_{n,m}^2),$
(3.3)
where $n = 1, 2, \dots, N$. The BDP $(x_{n,m}^1, x_{n,m}^2)$ denotes the EEG signals recorded within the $n$th channel for each preference proposition $\rho_m$, $\forall m = 1, 2, \dots, M$.
In summary, our problem is formulated as predicting the preference propositions (the ordering of the pairwise RT comparisons) by aggregating the BDPs from multiple channels:
$f(\{x_{n,m}^1, x_{n,m}^2\}_{n=1}^N) \longrightarrow \rho_m, \quad \forall m = 1, 2, \dots, M.$
(3.4)
### 3.2 Beyond Ordinal Classification
For a BDP $(x^1, x^2)$, the popular Bradley-Terry model, which is based on logistic regression, can be formulated as follows:
$P(\rho \mid w, x^1, x^2) = \begin{cases} \sigma(w^T \Delta x) & \rho = 1, \\ \sigma(-w^T \Delta x) & \rho = -1, \end{cases}$
(3.5)
where $\sigma(z) = 1/(1 + e^{-z})$ is the sigmoid function and $\sigma(-z) = 1 - \sigma(z)$. Let $\Delta x$ denote the difference $(x^1 - x^2)$ between the BDP $(x^1, x^2)$.
However, a preference proposition $\rho$ has three states, $1, 0, -1$, denoting win ($T^1 > T^2$), tie ($T^1 \approx T^2$), and loss ($T^1 < T^2$), respectively. Since binary classification fails to model the state of a tie ($T^1 \approx T^2$), binary classification (e.g., equation 3.5) is very sensitive to subtle differences in the reaction time. It also means that other classification models, such as support vector machines, are infeasible for our problem, due to the lack of a normalized probability definition over three states. Meanwhile, the softmax function, a straightforward extension of binary classification, models the different states equally. It does not serve as a good candidate either, since it fails to capture the intrinsic connection between these two types of preference propositions.
Therefore, we define a normalized probability for the three states while considering the two types of preference propositions, first normalizing the probability over states $(1,-1)$ (exclusively to the significant preference proposition) to 1 and then generalizing the probability definition to state 0. This can be mathematically formulated as
$P(\rho \mid w, x^1, x^2) = \begin{cases} \sigma(w^T \Delta x)[1 - \kappa(w^T \Delta x)] & \rho = 1, \\ \kappa(w^T \Delta x) & \rho = 0, \\ \sigma(-w^T \Delta x)[1 - \kappa(w^T \Delta x)] & \rho = -1. \end{cases}$
(3.6)
Following Weng and Lin (2011), the probability of a tie is modeled as the geometric mean between a win and a loss:
$\kappa(w^T \Delta x) = \sqrt{\sigma(w^T \Delta x)\,\sigma(-w^T \Delta x)}.$
(3.7)
Note that we consider the linear mapping $w^T \Delta x$ here since EEG data are usually high-dimensional with a low sample size.
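A minimal sketch of equations 3.6 and 3.7 (our illustrative code, with $\kappa$ as the geometric mean of a win and a loss):

import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

def pref_prob(w, x1, x2, rho):
    z = w @ (x1 - x2)                      # w^T * Delta x
    kappa = np.sqrt(sigma(z) * sigma(-z))  # tie probability, equation 3.7
    if rho == 0:
        return kappa
    return sigma(rho * z) * (1.0 - kappa)  # rho in {+1, -1}; the three cases sum to 1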
Remark 2
(Ternary Classification versus Binary Classification). Ternary classification (see equation 3.6) is less sensitive to subtle differences in reaction time. In terms of binary classification, a subtle discrepancy around the classification boundary would lead to the steepest gradient. However, the tie state (i.e., $\rho = 0$), introduced in ternary classification, flattens the steepest gradient and enhances the model's robustness with regard to subtle differences in RT.
Remark 3
(Extension to Deep Models). For the sake of clarity, we elaborate our three-state ordinal classification with a linear formulation (see equation 3.6). In the case of a deep learning model, we can consider either (1) replacing the linear difference $w^T x^1 - w^T x^2$ with the difference of the neural network outputs $g(x^1) - g(x^2)$ or (2) replacing the raw feature $x$ in equation 3.6 with the output of the last layer of the encoder. To ensure end-to-end training, we chose the first approach in our experiment.
### 3.3 Channel Reliability
Because different regions of the human brain have different functions, the relative contributions of different channels to human RT may vary a lot. The state of each channel can be classified as informative or noisy according to its contribution with regard to human RT. Note that a channel is called noisy if the algorithms cannot extract useful brain information from the EEG signals of this channel (Alharbi, 2018; Lin et al., 2018). Therefore, if we directly model the EEG preferences recorded in each channel without any distinction among the channels regarding channel reliability (i.e., informative versus noisy), the model's reliability would inevitably degrade.
In the following, a transition matrix $\Pi_n$ is introduced to characterize the reliability of each channel $n$ with regard to the learning task. Let $\rho$ denote the preference proposition and $\rho^{(n)}$ denote the prediction from the $n$th channel. $\rho$ and $\rho^{(n)}$ are both defined on a finite state space $S = \{1, 0, -1\}$. Then we have
$\Pi_n = P(\rho \mid \rho^{(n)}) = \begin{pmatrix} \pi_{11}^n & \pi_{12}^n & \pi_{13}^n \\ \pi_{21}^n & \pi_{22}^n & \pi_{23}^n \\ \pi_{31}^n & \pi_{32}^n & \pi_{33}^n \end{pmatrix},$
(3.8)
where $P_{i,j}(\rho \mid \rho^{(n)}) = P(\rho = S_j \mid \rho^{(n)} = S_i)$. According to the definition of the transition matrix, $\Pi_n$ should satisfy three constraints: (1) each entry of $\Pi_n$ should be constrained in $[0,1]$; (2) each row of $\Pi_n$ should sum to 1; and (3) each column of $\Pi_n$ should sum to 1.
However, it is usually costly and redundant to estimate $\Pi_n$ (see equation 3.8) directly. In the following, we consider imposing more constraints on equation 3.8, so as to simplify the inference while enhancing interpretability. First, the transition between states $(1, -1)$ is constrained to be symmetric, since states $(1, -1)$ are exclusive to the preference propositions where the orderings between the RTs are significant, that is, $P(\rho = 1 \mid \rho^{(n)} = -1) = P(\rho = -1 \mid \rho^{(n)} = 1)$. Second, since the equality of two real values is hard to measure when conducting prediction, the transition from the significant RT pairwise comparisons to comparable ones is not considered, that is, $P(\rho = 0 \mid \rho^{(n)} \in \{1, -1\}) = 0$. Therefore, a simplified transition matrix can be represented as follows:
$$\Pi_n = P(\rho \mid \rho^{(n)}) = \begin{pmatrix} \pi_n & 0 & 1 - \pi_n \\ 0 & 1 & 0 \\ 1 - \pi_n & 0 & \pi_n \end{pmatrix}.$$
(3.9)
The parameter $\pi_n$ in the transition matrix $\Pi_n$ (equation 3.9) indicates the reliability of the $n$th channel, $\forall n = 1, 2, \ldots, N$. It also divides the channels into three states:
1. Positive channels with $\pi_n$ close to 1: the ranking model (equation 3.6) can extract enough information from the $n$th channel and exactly predict the state of the preference proposition.
2. Noisy channels with $\pi_n$ close to 0.5: the ranking model cannot extract any useful information from the $n$th channel.
3. Negative channels with $\pi_n$ close to 0: the ranking model can extract enough information from the $n$th channel, but the predicted states are exactly opposite to the proposition states.
The identified positive and negative channels are all considered as informative EEG channels, which helps in learning reliable models for the corresponding task.
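A minimal sketch of this construction follows; the 0.15/0.85 cutoffs in `channel_state` anticipate the noisy-channel criterion used later in section 5.3, and the function names are illustrative:

```python
import numpy as np

def transition_matrix(pi_n):
    """Simplified transition matrix Pi_n over the states (1, 0, -1) (equation 3.9)."""
    return np.array([[pi_n,       0.0, 1.0 - pi_n],
                     [0.0,        1.0, 0.0       ],
                     [1.0 - pi_n, 0.0, pi_n      ]])

def channel_state(pi_n, margin=0.15):
    # The 0.15 / 0.85 cutoffs follow the noisy-channel criterion of section 5.3.
    if pi_n >= 1.0 - margin:
        return "positive"
    if pi_n <= margin:
        return "negative"
    return "noisy"
```

Note that each row and each column of the matrix sums to 1, so the simplified $\Pi_n$ satisfies all three constraints above.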
### 3.4 Channel-Reliability Aware Ranking
With the incorporation of the transition matrix $\Pi_n$ (equation 3.9) on top of the three-state learning-to-rank model introduced above (equation 3.6), the likelihood function for each preference proposition $\rho$ can be represented as
$$P(\rho \mid w, \Pi_n, x_n^1, x_n^2) = \sum_{\rho^{(n)} \in S} P(\rho \mid \rho^{(n)})\, P(\rho^{(n)} \mid w, x_n^1, x_n^2) = \begin{cases} \big[\pi_n \sigma(w^T \Delta x_n) + (1 - \pi_n)\,\sigma(-w^T \Delta x_n)\big]\big[1 - \kappa(w^T \Delta x_n)\big], & \rho = 1, \\ \kappa(w^T \Delta x_n), & \rho = 0, \\ \big[(1 - \pi_n)\,\sigma(w^T \Delta x_n) + \pi_n \sigma(-w^T \Delta x_n)\big]\big[1 - \kappa(w^T \Delta x_n)\big], & \rho = -1, \end{cases}$$
(3.10)
where the subscript $m$, indexing the preference propositions, is omitted for simplicity.
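The sketch below evaluates equation 3.10, reusing the `sigmoid` and `tie_prob` helpers from the sketch after equation 3.7; it only mixes the two complementary sigmoid terms with weights $\pi_n$ and $1-\pi_n$:

```python
def channel_aware_probs(w, pi_n, x1, x2):
    """P(rho | w, Pi_n, x1, x2): equation 3.6 marginalized over the channel
    prediction with the transition matrix of equation 3.9 (equation 3.10)."""
    z = w @ (x1 - x2)
    s_pos, s_neg = sigmoid(z), sigmoid(-z)
    k = tie_prob(z)
    p_win = (pi_n * s_pos + (1.0 - pi_n) * s_neg) * (1.0 - k)    # rho = 1
    p_tie = k                                                     # rho = 0
    p_loss = ((1.0 - pi_n) * s_pos + pi_n * s_neg) * (1.0 - k)   # rho = -1
    return p_win, p_tie, p_loss
```

Setting `pi_n = 0.5` makes `p_win` and `p_loss` equal, which is exactly the constant-likelihood behavior on noisy channels discussed in section 3.5.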
Let $D$ denote the collection of preference propositions and $X$ the recorded EEG signals from $N$ different channels. We further extend equation 3.10 to a Bayesian formulation. A gaussian prior is introduced for $w$, that is, $w \sim \mathcal{N}(\mu, \Sigma)$. Since the transition matrix $\Pi_n$ depends only on the parameter $\pi_n$, we focus on estimating $\pi_n$, $\forall n = 1, 2, \ldots, N$, in the following. Let $\pi$ denote $\{\pi_n\}_{n=1}^{N}$, and introduce a beta prior for each $\pi_n$, that is, $\pi \sim B(\alpha, \beta) = \prod_{n=1}^{N} B(\alpha_n, \beta_n)$. Then our CArank model (equation 3.11) for the multichannel ranking problem (equation 3.4) can be represented as
$$P(D, w, \pi \mid X) = P_0(\pi)\, P_0(w)\, P(D \mid w, \pi, X) = B(\pi \mid \alpha, \beta)\, \mathcal{N}(w \mid \mu, \Sigma) \prod_{m=1}^{M} \prod_{n=1}^{N} P(\rho_m \mid w, \pi_n, \Delta x_{n,m}).$$
(3.11)
Let $M$ denote the number of preference propositions, $|D| = M$. The variable $n$ iterates over channels and $m$ over preference propositions. Due to the symmetry of the state probability (equation 3.6) and of the transition matrix (equation 3.9) with regard to states 1 and $-1$, the resulting marginal likelihood (equation 3.10) and the corresponding Bayesian formulation (equation 3.11) remain symmetric with regard to states 1 and $-1$.
Now our aim is to estimate the model parameters ($w$ and $π$) by maximizing equation 3.11. In principle, any solution strategies for MAP estimation can be considered to solve this problem. (See section 4 for optimization details.)
### 3.5 Reliability Analysis and Channel State Estimation
CArank (see equation 3.11) indeed trains a mixture of two complementary classifiers, which share the same parameter $w$. It is different from classical mixture models since it clusters at the channel level instead of the sample level.
In particular, for the positive channels with $\pi_n$ close to 1, CArank relies on the first classifier to update the shared parameter $w$. For the negative channels with $\pi_n$ close to 0, equation 3.11 automatically switches to the opposite classifier, which extracts the correct information from the negative channels and updates the shared parameter $w$ accordingly. Furthermore, CArank is robust to the noisy channels with $\pi_n$ approximately equal to 0.5, because equation 3.11 gives up extracting information from them by assigning a constant likelihood (i.e., 0.5) to each BDP. The estimated $\pi_n$ can thus be leveraged as an indicator to detect noisy channels with $\pi_n \approx 0.5$, $\forall n = 1, 2, \ldots, N$. (See Figure 6 for more details.)
Figure 4: Sustained-attention driving task. (A) Different participants are independent during the data collection process. (B) The EEG signals from different sensors are recorded independently from the scalp without influencing other sensors (Homan, Herman, & Purdy, 1987; Teplan, 2002). (C) Different trials are conducted independently during the data collection process. (D) The collected reaction time is slightly corrupted by inherent (basically irremovable) sources of noise, but the ranking relationships are preserved to some extent.
Figure 5: In-degree sequence for CArank and other baselines (closer is better). The root-mean-squared error (RMSE) was also measured according to equation 5.3.
Figure 6: Reliability of different channels for 40 participants estimated by CArank. Each column denotes the states of 33 channels for each participant. The channels with estimated reliability $0.15 \leq \pi_n \leq 0.85$, marked in red, are considered noisy channels.
### 3.6 Superiority of CArank over Previous Methods
CArank is superior in two ways: (1) using ordinal regression instead of regression enables it to be less sensitive to the scale of RTs, and (2) the data-driven noisy channel detection ensures performing mental fatigue monitoring using informative channels only.
In terms of the overfitting caused by extreme values, Wei et al. (2015) adopted different techniques to smooth RTs but failed to make them work for all participants. Moreover, predefined smoothing techniques excessively down-weight the extreme or abnormal RTs in the MSE loss and thereby fail to reveal the real relationship between the EEG signals and the RTs, especially the extreme or abnormal ones. In terms of the lower reliability caused by heterogeneous channels, Wascher et al. (2014) heuristically used frontal theta to represent different levels of mental fatigue, but the relevant regions of the brain are not necessarily the same for all participants (Gramann et al., 2006).
Different from existing work, which heavily relies on various heuristic tricks, CArank is the first purely data-driven approach to predict mental fatigue and therefore offers high versatility. Specifically, it first formulates the mental fatigue monitoring task as a multichannel ranking problem. Next, it evaluates the channel reliability of each EEG channel via a transition matrix. CArank therefore performs reliable mental fatigue prediction using informative channels only.
## 4 Stochastic Generalized Expectation-Maximization
In this section, we describe a generalized expectation-maximization (GEM) algorithm (Dempster, Laird, & Rubin, 1977) to solve the proposed CArank (equation 3.11). Since the feasible region of $\pi_n$ is restricted to $[0, 1]$, plain gradient-based optimization methods would make our solution inaccurate and inefficient. The GEM algorithm is an efficient iterative procedure for computing the MAP solution in the presence of latent variables ($\rho_m^{(n)}$ in equation 3.11). GEM avoids directly differentiating the expectation over the latent variables and instead optimizes a surrogate lower bound. Therefore GEM, a silver bullet for MAP estimation with latent variables, can significantly simplify the optimization over the parameter $\pi_n$ in equation 3.11.
### 4.1 GEM for CArank
For each preference proposition $\rho_m$, we introduce an auxiliary variable $\delta_m^{(n)} \in \{1, 0\}$ for the $n$th channel, representing the consistency between the preference proposition $\rho_m$ and the prediction $\rho_m^{(n)}$ given by the $n$th channel. Specifically, $\delta_m^{(n)} = 1$ denotes that the prediction $\rho_m^{(n)}$ given by the first classifier is consistent with the preference proposition $\rho_m$, and $\delta_m^{(n)} = 0$ denotes that the prediction $\rho_m^{(n)}$ estimated by the second classifier is consistent with $\rho_m$. We can therefore find an equivalent formulation of equation 3.10 for each preference proposition $\rho_m$ involving the auxiliary variables $\Xi_m = \{\delta_m^{(n)}\}_{n=1}^{N}$:
$$P(\rho_m, \Xi_m \mid \pi, w, X) = \prod_{n=1}^{N} P(\rho_m, \delta_m^{(n)} \mid \pi_n, w, \Delta x_{n,m}) = \begin{cases} \prod_{n=1}^{N} \big[\pi_n \sigma(w^T \Delta x_{n,m})\big]^{\delta_m^{(n)}} \big[(1 - \pi_n)\,\sigma(-w^T \Delta x_{n,m})\big]^{1 - \delta_m^{(n)}} \big[1 - \kappa(w^T \Delta x_{n,m})\big], & \rho_m = 1, \\ \prod_{n=1}^{N} \kappa(w^T \Delta x_{n,m}), & \rho_m = 0, \\ \prod_{n=1}^{N} \big[(1 - \pi_n)\,\sigma(w^T \Delta x_{n,m})\big]^{\delta_m^{(n)}} \big[\pi_n \sigma(-w^T \Delta x_{n,m})\big]^{1 - \delta_m^{(n)}} \big[1 - \kappa(w^T \Delta x_{n,m})\big], & \rho_m = -1. \end{cases}$$
(4.1)
This shows that we can deal with the joint distribution directly, which leads to significant simplifications for optimization. The complete log likelihood of CArank, equation 3.11, can be written as
$$\log P(D, \Xi, w, \pi \mid X) = \log P_0(\pi) + \log P_0(w) + \sum_{m=1}^{M} \sum_{n=1}^{N} \log P(\rho_m, \delta_m^{(n)} \mid w, \pi_n, \Delta x_{n,m}).$$
(4.2)
In the expectation step, we first calculate the expected value of the auxiliary variable $\delta_m^{(n)}$ with regard to its posterior distribution $P(\delta_m^{(n)} \mid \pi, w, \rho_m, x_{n,m})$, $\forall n = 1, 2, \ldots, N$, $\forall m = 1, 2, \ldots, M$:
$$\mathbb{E}[\delta_m^{(n)}] = \frac{P(\rho_m, \delta_m^{(n)} = 1 \mid w, \pi_n, \Delta x_{n,m})}{P(\rho_m \mid w, \pi_n, \Delta x_{n,m})} = \begin{cases} \left[ 1 + \dfrac{(1 - \pi_n)\,\sigma(-w^T \Delta x_{n,m})}{\pi_n\, \sigma(w^T \Delta x_{n,m})} \right]^{-1}, & \rho_m = 1, \\ 1, & \rho_m = 0, \\ \left[ 1 + \dfrac{\pi_n\, \sigma(-w^T \Delta x_{n,m})}{(1 - \pi_n)\,\sigma(w^T \Delta x_{n,m})} \right]^{-1}, & \rho_m = -1, \end{cases}$$
(4.3)
where $\mathbb{E}[\delta_m^{(n)}]$ denotes the degree of consistency between the prediction $\rho_m^{(n)}$ and the preference proposition $\rho_m$. Then the expectation of equation 3.11 with regard to the posterior distribution $P(\delta_m^{(n)} \mid \pi, w, \rho_m, x_{n,m})$, $\forall n = 1, 2, \ldots, N$, $\forall m = 1, 2, \ldots, M$, can be represented as
$$\begin{aligned} L(w, \pi) = {} & \mathbb{E}[\log P(D, \Xi, w, \pi \mid X)] \\ = {} & \sum_{n=1}^{N} \big[(\alpha_n - 1)\log \pi_n + (\beta_n - 1)\log(1 - \pi_n)\big] - \frac{1}{2}(w - \mu)^T \Sigma^{-1} (w - \mu) \\ & + \sum_{m=1}^{M} \sum_{n=1}^{N} \Big[ I(\rho_m = 0) \log \kappa(w^T \Delta x_{n,m}) + I(\rho_m \neq 0) \log\big[1 - \kappa(w^T \Delta x_{n,m})\big] \\ & \qquad + I(\rho_m = 1)\big[\mathbb{E}[\delta_m^{(n)}] \log \pi_n \sigma(w^T \Delta x_{n,m}) + (1 - \mathbb{E}[\delta_m^{(n)}]) \log (1 - \pi_n)\sigma(-w^T \Delta x_{n,m})\big] \\ & \qquad + I(\rho_m = -1)\big[\mathbb{E}[\delta_m^{(n)}] \log (1 - \pi_n)\sigma(w^T \Delta x_{n,m}) + (1 - \mathbb{E}[\delta_m^{(n)}]) \log \pi_n \sigma(-w^T \Delta x_{n,m})\big] \Big], \end{aligned}$$
(4.4)
where $I(*)$ is the indicator function that equals one if the condition is true and zero otherwise.
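As an illustration of this E-step, the per-channel responsibility of equation 4.3 can be computed as below (reusing the `sigmoid` helper sketched earlier; names are illustrative):

```python
def e_step_delta(w, pi_n, dx, rho):
    """Posterior responsibility E[delta_m^(n)] of equation 4.3 for one
    preference proposition on one channel; dx is Delta x_{n,m}."""
    z = w @ dx
    if rho == 1:
        return 1.0 / (1.0 + (1.0 - pi_n) * sigmoid(-z) / (pi_n * sigmoid(z)))
    if rho == -1:
        return 1.0 / (1.0 + pi_n * sigmoid(-z) / ((1.0 - pi_n) * sigmoid(z)))
    return 1.0  # rho = 0: the tie term does not involve delta
```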
In the generalized maximization step, we increase the objective function (equation 4.4) with regard to the model parameters $\pi$ and $w$, respectively. In terms of $\pi$, we set the gradient of equation 4.4 with regard to $\pi_n$ to zero and obtain the following estimate for $\pi_n$:
$$\pi_n^{\text{new}} = \frac{\sum_{m=1}^{M} I(\rho_m = 1)\,\mathbb{E}[\delta_m^{(n)}] + I(\rho_m = -1)\,(1 - \mathbb{E}[\delta_m^{(n)}]) + \alpha_n - 1}{\sum_{m=1}^{M} I(\rho_m = 1) + I(\rho_m = -1) + \alpha_n + \beta_n - 2},$$
(4.5)
where $n=1,2,…,N$.
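A minimal sketch of this closed-form update, with illustrative function and argument names:

```python
def m_step_pi(deltas, rhos, alpha_n, beta_n):
    """Closed-form update of pi_n (equation 4.5); deltas[m] = E[delta_m^(n)]."""
    num, den = alpha_n - 1.0, alpha_n + beta_n - 2.0
    for d, rho in zip(deltas, rhos):
        if rho == 1:
            num += d          # I(rho_m = 1) * E[delta]
            den += 1.0
        elif rho == -1:
            num += 1.0 - d    # I(rho_m = -1) * (1 - E[delta])
            den += 1.0
    return num / den
```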
In terms of $w$, due to the complexity of the sigmoid function, we cannot have a closed-form solution for $w$ and need to use gradient-based methods to optimize equation 4.4 with regard to $w$. In particular, the gradient function $g(w)$ can be represented as follows:
$$g(w) = -\Sigma^{-1}(w - \mu) + \sum_{m=1}^{M} \sum_{n=1}^{N} \left[ \left( I(\rho_m = 0) + \frac{I(\rho_m \neq 0)}{1 - [\kappa(w^T \Delta x_{n,m})]^{-1}} \right) \frac{1 - 2\sigma(w^T \Delta x_{n,m})}{2} + I(\rho_m \neq 0)\big(\mathbb{E}[\delta_m^{(n)}] - \sigma(w^T \Delta x_{n,m})\big) \right] \Delta x_{n,m}.$$
(4.6)
Regarding the linear rank mapping, we adopt the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm (Byrd, Lu, Nocedal, & Zhu, 1995) to optimize $w$. $w^{\text{new}}$ can be obtained with L-BFGS using $L(w)$ and $g(w)$:
$$w^{\text{new}} = \text{L-BFGS}\big(L(w), g(w), D\big).$$
(4.7)
The GEM algorithm (see algorithm 1) then iterates the E-step and the generalized M-step until convergence is achieved.
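For illustration, this M-step for $w$ can be delegated to SciPy's L-BFGS routine; the SciPy call here is a stand-in for the MATLAB implementation mentioned in note 7, and the sign is flipped because SciPy minimizes while equation 4.4 is maximized:

```python
from scipy.optimize import minimize

def m_step_w(w0, neg_L, neg_grad, data):
    """One generalized M-step for w (equation 4.7). neg_L(w, data) and
    neg_grad(w, data) should evaluate -L(w) and -g(w) from equations 4.4
    and 4.6; the sign is flipped because SciPy minimizes."""
    res = minimize(neg_L, w0, args=(data,), jac=neg_grad, method="L-BFGS-B")
    return res.x
```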
Remark 4
(Extension to Deep Models). The L-BFGS optimization method used in equation 4.7 aims to find the optimum $w$. It is easy to find its alternatives in deep learning literature, such as vanilla stochastic gradient descent (SGD) and its various variants (Kasai, 2017), if we replace the raw EEG feature $x$ with neural embedding.
Remark 5
(Computational Efficiency of Stochastic GEM for CArank). According to algorithm 1, the optimization of CArank involves $T$ iterations between the E-step (with regard to $\mathbb{E}[\delta_m^{(n)}]$) and the M-step (with regard to $\pi_n$ and $w$). Note that the computational cost of calculating $\mathbb{E}[\delta_m^{(n)}]$ and $\pi_n$ is marginal compared with that of optimizing $w$. Thus, the computational cost of CArank at each iteration is dominated by the subclassification problem, that is, optimizing $w$. Accordingly, the computational cost of GEM for CArank is $T$ times that of optimizing a regular classification problem. Note that only a few iterations ($T < 10$) are required for GEM to converge. This analysis also applies to stochastic GEM, where we update the model parameters with minibatch samples.
Note that this computational cost applies to the training stage only; the test-stage costs of all methods are similar. CArank enjoys the lowest storage cost at test time, since we can safely discard the EEG signals from noisy channels after training. This does not sacrifice performance, since CArank refrains from extracting information from noisy channels for decision making.
### 4.2 Stochastic GEM for CArank
The GEM approach introduced in section 4.1 is inefficient for large-scale data sets, because during each generalized maximization step we must iteratively calculate the gradient with regard to the parameters $\pi$ and $w$ over all samples. Motivated by the stochastic approximation literature (Roche, 2011), we introduce a stochastic generalized expectation-maximization (SGEM) approach, which resorts to stochastic minibatch optimization to learn the parameters. To be specific, SGEM approximates the batch-EM updates of $\pi$ and $w$ with a single sample or a minibatch of samples. Since minibatch samples cannot be a perfect approximation to the whole data set, we interpolate between the new and former estimators with a decreasing step size $\eta_t$ (see note 4), as in Liang and Klein (2009).
#### 4.2.1 Sampling Step
Before the $t$th iteration, we randomly sample a minibatch $D_t$ from $D$. The number of preference propositions in $D_t$, denoted by $M_t$, is much smaller than the total data set size $M$.
#### 4.2.2 Expectation Step
The expectation step remains the same, except that the posterior expectation of the auxiliary variable $\delta_m^{(n)}$ is computed over the minibatch $D_t$.
#### 4.2.3 Generalized Maximization Step
In the generalized maximization step, we increase the objective function, calculated on the minibatch $D_t$, with regard to the model parameters $\pi$ and $w$. In terms of the parameter $\pi_n$, since its marginal distribution belongs to the exponential family, we perform the stochastic update in the space of sufficient statistics (Cappé & Moulines, 2009). Let $\tilde{\varphi}_n$ denote the noisy estimate of the sufficient statistic for $\pi_n$:
$$\tilde{\varphi}_n = \frac{M}{M_t} \sum_{m \in D_t} I(\rho_m = 1)\,\mathbb{E}[\delta_m^{(n)}] + I(\rho_m = -1)\,(1 - \mathbb{E}[\delta_m^{(n)}]),$$
(4.8a)
$$\varphi_n^t = (1 - \eta_t)\,\varphi_n^{t-1} + \eta_t\, \tilde{\varphi}_n,$$
(4.8b)
$$\pi_n^{\text{new}} = \frac{\varphi_n^t + \alpha_n - 1}{\sum_{m \in D_t} I(\rho_m = 1) + I(\rho_m = -1) + \alpha_n + \beta_n - 2}, \qquad n = 1, 2, \ldots, N.$$
(4.8c)
In terms of the parameter $w$, the above practice is infeasible due to its nonexponential marginal distribution. Inspired by the stochastic gradient EM algorithms of Cappé and Moulines (2009), we perform the stochastic update in the original parameter space. First, a locally optimal regression weight $w^t$ is obtained via iterative optimization over the minibatch $D_t$ using L-BFGS. Then we interpolate between this local optimum and the former estimate to form a global approximation of the parameter $w$:
$$w^t = \text{L-BFGS}\big(L(w), g(w), D_t\big),$$
(4.9a)
$$w^{\text{new}} = (1 - \eta_t)\, w^{\text{old}} + \eta_t\, w^t.$$
(4.9b)
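A compact sketch of the two interpolated updates (equations 4.8b and 4.9b), using the step-size schedule from note 4; `tau0 = 0.7` is merely an illustrative value in $(0.5, 1)$:

```python
def sgem_interpolate(phi_prev, phi_tilde, w_old, w_batch, t, tau0=0.7):
    """Interpolated SGEM updates (equations 4.8b and 4.9b) with the decreasing
    step size of note 4; tau0 = 0.7 is an illustrative value in (0.5, 1)."""
    eta_t = (t + 2.0) ** (-tau0)
    phi_t = (1.0 - eta_t) * phi_prev + eta_t * phi_tilde   # sufficient statistic for pi_n
    w_new = (1.0 - eta_t) * w_old + eta_t * w_batch        # parameter-space interpolation
    return phi_t, w_new
```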
Remark 6
(Convergence Analysis). The convergence of the proposed stochastic GEM algorithm is analogous to the discussion given by Cappé and Moulines (2009) for their stochastic gradient EM algorithms. The existence of such links is hardly surprising: in view of the discussion in section 3 of Cappé and Moulines (2009), the online update rule (equation 4.9b) can also be seen as a stochastic gradient recursion, namely, $w^{\text{new}} = w^{\text{old}} + \eta_t (w^t - w^{\text{old}})$.
## 5 Empirical Analysis
In this section, we demonstrate the reliability of the proposed CArank, equation 3.11, with EEG signals from 40 participants.
### 5.1 Experimental Setup

We used the 33-channel EEG data recorded in Huang et al. (2015) from 40 adult participants while performing a long sustained-attention task (see note 5). These data contain one intrinsic non-EEG channel, the 33rd channel, which carries information about only one axis in the direction of deviation. The experiment was conducted using a virtual-reality dynamic driving simulator (see Figures 4D and 4E). The task involves driving on a four-lane highway in which lane-departure events randomly deviate the car toward the side of the road from its original position; each participant was instructed to steer back to the original position as quickly as possible. A complete trial in this study (see Figure 4A) includes a 10 s baseline, deviation onset, response onset, and response offset (see Figures 4B and 4C). The next trial begins within 5 s to 10 s after the current trial finishes. Each participant completed $T$ trials within 1.5 h. For each trial $i$, the EEG signals $\{x_{n,i}\}_{n=1}^{N}$ from $N$ different channels were recorded simultaneously, and the corresponding reaction time $RT_i$ was collected afterward. If a participant fell asleep during the experiment, there was no feedback to wake him or her up. The NuAmps amplifier (Compumedics Limited, Australia) was used to collect EEG data with a maximum sampling rate of 1000 Hz, 200 Hz bandwidth (DC), and 22-bit resolution.
In this letter, the 10 s baseline (see Figure 4B) is adopted as the feature vector, which is assumed to be long enough to detect any significant changes in brain activity (Zhang, 2000). We then explore the relationship between the 10 s baseline $x \in \mathbb{R}^k$ and the preference proposition $\rho_m$ under the four assumptions illustrated in Figure 4 (independence across participants, sensors, and trials, and ranking-preserving noise in the collected RTs).
#### 5.1.1 Data Preprocessing
Brain dynamics preferences for each participant were generated as follows: the trials of each participant were randomly divided into two parts, 50% for training and 50% for testing, and the EEG preferences were constructed according to the pairwise comparisons between the RTs. To be specific, two types of RT comparisons can be constructed: (1) significant RT pairwise comparisons $(T_{m,1}, T_{m,2})$, where $T_{m,1} \gg T_{m,2}$ or $T_{m,2} \gg T_{m,1}$, and (2) comparable RT pairwise comparisons $(T_{m,1}, T_{m,2})$, where $T_{m,1} \approx T_{m,2}$. Considering the time delay among channels in the time domain, the Fourier transform was applied to the EEG signals to convert the time series into the frequency domain. The fast Fourier transform (FFT) was applied using the Welch method (Welch, 1967) with a window size of 128 (i.e., spectral decomposition over 0.5 s) and a pad ratio of 2 without any overlap, which yields twice as many output features as the sampling rate. Further, to avoid computational overhead, EEG power within 0.5 Hz to 30 Hz was selected, which is considered the most relevant to the RTs (Huang et al., 2015). Other feature transformations can also be adopted for feature extraction if necessary. (See Hammon and de Sa, 2007, for examples of other features typically used for EEG.)
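A minimal sketch of this preprocessing step using SciPy's implementation of the Welch method follows; the sampling rate depends on the recording and is therefore left as an argument rather than hard-coded:

```python
import numpy as np
from scipy.signal import welch

def eeg_band_power(x, fs, fmin=0.5, fmax=30.0):
    """Welch power spectrum of one channel, keeping 0.5-30 Hz as in the text:
    window of 128 samples, no overlap, pad ratio 2 (nfft = 256). The sampling
    rate fs depends on the recording and is left as an argument here."""
    freqs, pxx = welch(x, fs=fs, nperseg=128, noverlap=0, nfft=256)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band], pxx[band]
```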
#### 5.1.2 Baselines
First, we considered two popular linear methods: support vector regression (SVR) (Chang & Lin, 2011) and linear regression (LR), with the features from the multiple channels simply concatenated into one long feature vector. Then we compared CArank with widely adopted nonlinear methods, regression and classification, under the multiple-channel concatenation formulation and the multiple-channel aggregation formulation, respectively. In particular, we considered two regression models (Lin et al., 2014; Hajinoroozi, Mao, Jung, Lin, & Huang, 2016): (1) regression (C), in which the EEG signals from multiple channels are simply concatenated into a long feature vector and the corresponding regression model is trained on this vector, and (2) regression (A), in which the EEG signals from multiple channels are considered independently and the regression results are aggregated using majority voting afterward. Two ordinal classification models (Zarei, 2017; Zeng et al., 2018) were considered analogously: (1) classification (C), where the EEG signals from multiple channels are simply concatenated into a long feature vector and the corresponding classification model is trained on this vector, and (2) classification (A), where the EEG signals from multiple channels are considered independently and the classification results are aggregated using majority voting afterward.
#### 5.1.3 Metrics
First, we aggregate the predictions from different channels using a simple voting scheme,
$$\hat{\rho}_m = \operatorname{sign}\!\left( \sum_{n=1}^{N} \rho_m^{(n)} \big[ I(\pi_n > \kappa) - I(\pi_n < 1 - \kappa) \big] \right),$$
where $\rho_m^{(n)}$ denotes the predicted state (1 for a win, $-1$ for a loss) of the pairwise RT comparison $(T_{m,1}, T_{m,2})$ by the $n$th channel, using the brain dynamics preference $(x_{n,m}^1, x_{n,m}^2)$. $\hat{\rho}_m$ is the final estimated order for $(T_{m,1}, T_{m,2})$, obtained by aggregating the predictions $\rho_m^{(n)}$ over all channels. $I(\ast)$ is an indicator that returns 1 if the argument is valid and 0 otherwise.
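In code, this reliability-gated vote is only a weighted sum followed by a sign; a minimal sketch (names are illustrative):

```python
def aggregate_votes(preds, pi, thresh=0.85):
    """Reliability-gated majority vote over channels. preds[n] in {1, -1} is
    channel n's prediction; channels with 1 - thresh < pi[n] < thresh are
    treated as noisy and receive zero weight."""
    score = sum(p * (int(q > thresh) - int(q < 1.0 - thresh))
                for p, q in zip(preds, pi))
    return (score > 0) - (score < 0)   # sign; 0 on an exact tie
```

Note that negative channels (with $\pi_n < 1 - \kappa$) contribute with their prediction flipped, which is exactly how CArank exploits them during training.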
Then we introduce two metrics to measure the performance of the CArank model from different perspectives. First, we adapted the Wilcoxon-Mann-Whitney statistic (Yan, Dodier, Mozer, & Wolniewicz, 2003) to evaluate the accuracy (in %, higher is better) over all pairs:
$$\text{Acc} = \frac{1}{\bar{M}} \sum_{m=1}^{M} I(\rho_m = \hat{\rho}_m), \qquad \bar{M} = \sum_{m=1}^{M} I(\rho_m \neq 0).$$
(5.1)
Further, we investigate the reliability of CArank in terms of preserving the global ordering with regard to RTs. Note that a totally ordered set can be equivalently represented by a fully directed graph, which can in turn be encoded by its degree sequence. We consider only the in-degree sequence, because the in-degree and out-degree of a vertex are uniquely determined once the overall degree of each vertex is fixed. The in-degree of vertex $v_i$ can be calculated as
$$\widehat{\text{Indeg}}(v_i) = \sum_{m \in N_1(v_i)} I(\hat{\rho}_m = 1) + \sum_{m \in N_2(v_i)} I(\hat{\rho}_m = -1) + \sum_{m \in N_1(v_i) \cup N_2(v_i)} 0.5 \times I(\hat{\rho}_m = 0),$$
(5.2)
where $N_1(v_i)$ and $N_2(v_i)$ denote the index sets of the pairwise comparisons in which the RT of trial $i$ (vertex $v_i$) appears in the first and second position, respectively. We then collected the in-degree sequences (Becirovic, 2017) of the directed graph constructed from the predicted RTs. The discrepancy between the predicted in-degree sequences and the ground truth is measured using the root-mean-squared error (smaller is better), namely,
$$\text{RMSE} = \sqrt{\frac{1}{T} \sum_{i=1}^{T} \big[\text{Indeg}(v_i) - \widehat{\text{Indeg}}(v_i)\big]^2},$$
(5.3)
where $T$ denotes the number of trials for each participant, $\text{Indeg}(v_i)$ is the ground-truth in-degree of vertex $v_i$, and $\widehat{\text{Indeg}}(v_i)$ is the predicted in-degree of vertex $v_i$.
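Both metrics are straightforward to compute; a minimal NumPy sketch, restricted to significant pairs as described below:

```python
import numpy as np

def pairwise_accuracy(rho, rho_hat):
    """Equation 5.1: accuracy over the significant pairs only."""
    rho, rho_hat = np.asarray(rho), np.asarray(rho_hat)
    sig = rho != 0
    return np.mean(rho[sig] == rho_hat[sig])

def indegree_rmse(indeg_true, indeg_pred):
    """Equation 5.3: RMSE between ground-truth and predicted in-degrees."""
    d = np.asarray(indeg_true) - np.asarray(indeg_pred)
    return np.sqrt(np.mean(d ** 2))
```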
Note that we trust only the predictions from informative channels with reliability $\pi_n > \kappa$ or $\pi_n < 1 - \kappa$; $\kappa$ is set to 0.85 for all participants in our experiment. For SVR, LR, and regression (C)/classification (C), the task reduces to a single regression/classification problem, since the EEG signals from multiple channels are simply concatenated into one long vector. For regression (A)/classification (A), considering the high-dimensional features with a low sample size, we train one nonlinear model shared by all channels and aggregate the per-channel results into the final predictions via majority voting. Since SVR, LR, regression (C/A), and classification (C/A) have no mechanism to evaluate the channel state, they trust all channels by default. Furthermore, we compute both metrics only on the preference propositions in which the ordering between the RT pair is significant, since evaluation is difficult when the RTs are comparable.
#### 5.1.4 Parameter Initialization
We implemented SVR using LIBSVM (see note 6) with the parameters set to -s 3 -t 0. The other methods are implemented in PyTorch (Paszke et al., 2017). We train a one-layer neural network for LR. For a fair comparison, we implemented a two-layer neural network for all nonlinear methods; the network dimensions are set to $d$-100-1, where $d$ is the input feature dimension, which varies among baselines. All layers are densely (fully) connected. In terms of the channel reliability $\pi_n$, we aimed to eliminate the effects of noisy channels during training and therefore initialized $\pi_n$ to 0.5, $\forall n = 1, 2, \ldots, N$. The L2 norm is used, which amounts to adopting a standard gaussian prior for $w$: $w \sim \mathcal{N}(0, 1)$. In terms of the hyperparameters $(\alpha_n, \beta_n)$, as we intended to eliminate the effects of noisy channels, we adopted a strong noninformative prior for $\pi_n$: $\alpha_n = \beta_n = 100$, $\forall n = 1, 2, \ldots, N$, following Bishop (2006). The Adam method is used to optimize the weight $w$ (see note 7). We set the maximum iteration number for CArank to $MaxIter = 7$ to ensure that the algorithm converged for each participant. The minibatch size is set to 256, and the learning rate is 0.001. For LR and regression (C/A), the common mean squared error (MSE) is adopted as the loss function. For classification (C/A), the negative log likelihood (see equation 4.4) is adopted as the loss function, except that $\pi_n$ is fixed to 1, $\forall n = 1, 2, \ldots, N$.
### 5.2 Empirical Results of CArank on Brain Dynamics Preferences
In this section, we compare the performance of our CArank and other baselines based on the two metrics, equations 5.1 and 5.3.
#### 5.2.1 Comparison Based on Wilcoxon-Mann-Whitney Statistics
The Wilcoxon-Mann-Whitney statistics of all methods on the test BDPs are presented in Table 1. For SVR, LR, and regression (C), the Wilcoxon-Mann-Whitney statistic of the predicted RTs is calculated with regard to the ground truth on the test BDPs. For regression (A), we first collect the predicted RTs on the test BDPs by aggregating the prediction from each channel using majority voting and then calculate the Wilcoxon-Mann-Whitney statistic following equation 5.1.
From Table 1, we offer the following observations:
1. CArank $>$ other baselines. CArank exhibits consistent improvements over the other baselines. In particular, it achieves the highest test accuracy on 30 participants and comparable results on the remaining participants. This is consistent with our motivation: classification serves as a relaxed alternative to regression, effectively circumvents the overfitting caused by nonsmooth or extreme RTs, and preserves the ordering with regard to RTs. Meanwhile, our channel-reliability-aware formulation automatically eliminates the effects of EEG signals from noisy channels during training, compared with simple concatenation.
2. Classification $>$ SVR $>$ regression. The test accuracy of the classification-based methods is higher than that of their regression-based counterparts for most participants: classification (C) outperforms SVR and regression (C) on 24 and 33 participants, respectively, and classification (A) outperforms regression (A) on 26 participants. This observation is consistent with our statement that regression-based models overfit easily, especially when extreme values (RTs in our problem) exist.
3. Concatenation $>$ aggregation. It is interesting that the test accuracy based on multiple-channel aggregation is significantly inferior to that of the counterparts based on simple feature concatenation. Specifically, regression (C) outperforms regression (A) on 33 participants, while classification (C) outperforms classification (A) on 38 participants. This is striking but reasonable: since a shared regression/classification model is trained in the multiple-channel aggregation formulation, the generalization performance inevitably degenerates when learning with noisy channels. Meanwhile, noisy channels exist universally; at least one noisy channel is detected for each participant according to Figure 6.
4. SVR $>$ nonlinear regression $>$ linear regression. Note that linear SVR shows performance superior to nonlinear regression (C) and LR on 26 and 32 participants, respectively. Since the inputs of SVR, LR, and regression (C) are the same, the only difference lies in the choice of the loss function. SVR adopts the hinge loss, which is robust to outliers away from the boundary (Basak, Pal, & Patranabis, 2007). This is consistent with our analysis of the deficiency of the MSE loss used in the regression model (see section 2.1). Meanwhile, the performance of SVR is not stable and can be worse on some participants (e.g., P10, P18, P31, P39, P40). Therefore, the hinge loss is also not the best choice compared with the classification setting, where CArank universally achieves accuracy above 75% for the corresponding participants.
#### 5.2.2 Comparison Based on In-Degree Preservation
To further investigate the reliability of CArank in terms of preserving the global ordering corresponding to RTs, we first collected the in-degree sequences according to equation 5.2 using the predicted RTs and then measured the in-degree discrepancy between the calculated in-degree sequences and the ground truth using the root-mean-squared error, equation 5.3. The RMSEs for all participants are shown in Table 2.
Table 2: Test RMSE (smaller is better).

| Test RMSE | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | P12 | P13 | P14 | P15 | P16 | P17 | P18 | P19 | P20 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SVR | 12.76 | 17.87 | 9.98 | 21.40 | 23.99 | 23.59 | 17.21 | 23.05 | 40.75 | 19.72 | 14.35 | 7.28 | 18.78 | 15.48 | 27.98 | 10.67 | 14.85 | 43.93 | 41.59 | 34.90 |
| LR | 13.26 | 22.81 | 10.14 | 21.69 | 54.85 | 32.89 | 13.56 | 46.83 | 38.37 | 34.34 | 14.01 | 7.34 | 18.65 | 25.34 | 47.00 | 18.76 | 18.26 | 45.59 | 73.37 | 38.72 |
| Regression (C) | 13.11 | 17.40 | 13.06 | 17.92 | 25.98 | 22.14 | 22.46 | 42.16 | 30.63 | 18.16 | 12.04 | 5.40 | 13.88 | 14.03 | 27.87 | 12.47 | 17.93 | 45.85 | 43.96 | 36.64 |
| Regression (A) | 11.92 | 19.12 | 13.46 | 18.27 | 24.65 | 26.14 | 21.17 | 38.24 | 37.44 | 20.53 | 11.82 | 11.56 | 19.15 | 16.71 | 26.91 | 14.82 | 17.30 | 35.59 | 46.23 | 41.59 |
| Classification (C) | 10.53 | 15.35 | 12.51 | 16.78 | 27.36 | 22.85 | 16.32 | 33.48 | 25.18 | 17.45 | 15.21 | 8.64 | 15.58 | 12.63 | 25.24 | 10.54 | 15.98 | 27.56 | 31.11 | 35.76 |
| Classification (A) | 9.20 | 18.28 | 13.54 | 18.35 | 27.15 | 22.83 | 25.98 | 44.48 | 49.43 | 20.36 | 14.61 | 9.65 | 18.83 | 12.23 | 25.20 | 15.19 | 17.42 | 42.07 | 48.42 | 44.38 |
| CArank | 8.66 | 16.97 | 12.02 | 16.27 | 20.71 | 19.97 | 13.98 | 25.71 | 12.52 | 13.70 | 12.73 | 9.41 | 12.43 | 10.06 | 23.95 | 8.99 | 5.74 | 25.00 | 31.39 | 28.00 |

| Test RMSE | P21 | P22 | P23 | P24 | P25 | P26 | P27 | P28 | P29 | P30 | P31 | P32 | P33 | P34 | P35 | P36 | P37 | P38 | P39 | P40 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SVR | 33.58 | 16.29 | 32.49 | 33.89 | 40.51 | 35.73 | 32.63 | 24.47 | 29.83 | 34.87 | 23.58 | 10.37 | 18.67 | 19.49 | 13.07 | 8.35 | 15.17 | 38.28 | 14.50 | 28.71 |
| LR | 56.49 | 38.90 | 31.20 | 58.26 | 60.63 | 34.56 | 64.32 | 61.78 | 70.42 | 36.94 | 20.29 | 32.28 | 29.31 | 36.47 | 14.54 | 7.96 | 14.62 | 55.18 | 15.84 | 30.40 |
| Regression (C) | 38.13 | 25.26 | 26.88 | 40.08 | 38.27 | 36.48 | 38.22 | 26.56 | 36.39 | 31.98 | 26.09 | 22.99 | 24.67 | 25.53 | 13.85 | 9.94 | 13.74 | 52.71 | 6.66 | 18.75 |
| Regression (A) | 48.77 | 22.10 | 27.19 | 41.04 | 37.01 | 46.19 | 46.01 | 31.61 | 41.87 | 36.54 | 26.77 | 23.33 | 23.51 | 25.03 | 12.49 | 9.23 | 17.07 | 46.63 | 17.52 | 24.70 |
| Classification (C) | 40.79 | 13.76 | 23.97 | 31.98 | 30.39 | 36.36 | 28.85 | 24.44 | 33.09 | 26.21 | 19.45 | 13.54 | 22.04 | 20.48 | 13.41 | 7.22 | 13.46 | 39.68 | 9.65 | 17.33 |
| Classification (A) | 51.50 | 15.99 | 30.35 | 37.33 | 37.94 | 47.82 | 44.97 | 37.25 | 57.80 | 42.96 | 26.74 | 23.69 | 24.75 | 26.88 | 19.49 | 6.59 | 16.26 | 46.13 | 7.61 | 16.97 |
| CArank | 37.77 | 11.77 | 25.49 | 16.44 | 29.67 | 30.72 | 19.38 | 26.32 | 26.00 | 28.49 | 8.00 | 11.65 | 12.94 | 12.06 | 5.34 | 7.03 | 11.17 | 36.77 | 3.83 | 15.14 |

Note: The shaded numbers indicate the best results.
From Table 2, we can draw similar conclusions. (1) CArank consistently achieves lower RMSE than the other baselines; in particular, it achieves the lowest test RMSE on 27 of 40 participants. (2) Apart from CArank, classification (C) shows better performance than the remaining baselines. This is reasonable, since classification is robust to extreme RTs, while the concatenation approach is less affected by noisy channels than simple aggregation. (3) The differences among the other baseline methods become ambiguous, because RMSE assigns a higher penalty to estimates with larger errors.
#### 5.2.3 Visualization of Predicted In-Degrees
To further explore the superiority of CArank, we visualized Table 2 using the in-degree sequences. For the sake of intuitive interpretation, we showcase participants P9, P13, P22, P24, and P31, whose performance is most representative, in Figure 5. For the remaining participants, CArank also achieves superior performance with the lowest RMSE (see Table 2).
From Figure 5, we make five observations. First, overall, the in-degree sequences predicted by CArank align closely with the ground truth with slight fluctuations (small RMSE), while the in-degree sequences predicted by the other baselines fluctuate significantly and fail to follow the trend of the ground truth (large RMSE). Second, the points located in the northeast denote the trials with high RTs (also called extreme RTs); there, the in-degree sequences predicted by CArank show slighter fluctuations than those of the other baselines, indicating that CArank can accurately detect the mental fatigue associated with higher RTs. The other baselines either show large fluctuations (e.g., P9, P13, P24), leading to a high false-negative rate, or completely fail to follow the trend, leading to a high error rate. Third, the points located in the southwest denote the trials with small RTs; the in-degree sequences predicted by the other baselines show large fluctuations there (e.g., P22), that is, a high false-positive rate. Fourth, it is worth noting that the in-degree sequences predicted by regression (C/A) usually fluctuate heavily for low in-degree trials (small RTs) and high in-degree trials (large RTs), meaning that regression (C/A) overestimates small RTs and underestimates large RTs. This is consistent with our claim that regression-based models are not suitable for tasks with a nonsmooth response variable (RT). Fifth, simple classification using multichannel aggregation, that is, classification (A), also shows heavy fluctuations, since it lacks an effective mechanism to aggregate the predictions from multiple channels. Classification (C) performs better but is still prone to overfitting, since it cannot eliminate the effects of noisy channels during training.
### 5.3 Noisy Channel Detection
We also investigated the reliability of CArank from the perspective of noisy channel detection. According to our analysis, the parameter $\pi_n$ in the transition matrix $\Pi_n$ indicates channel reliability. Hereafter, we leverage $\pi_n$ as a channel-reliability indicator to detect noisy channels. Figure 6 lists the noisy channels (marked in red) detected with $0.15 \leq \pi_n \leq 0.85$, $\forall n = 1, 2, \ldots, N$.
Figure 6 shows, first, that noisy channels exist universally among the EEG signals: at least one noisy channel is detected for each participant. For example, the 33rd channel is recognized as noisy by CArank for almost all participants, which is reasonable since it is generally acknowledged to be irrelevant to any task (Lin et al., 2014). Second, for each participant most channels are reliable, which ensures that we can always find enough support for training CArank. Third, the detected noisy channels vary from participant to participant and do not transfer across participants. The noise can arise from the intrinsic noninformative EEG channel (e.g., the 33rd channel for all participants); channels used for lateral mastoid references (e.g., the 23rd and 29th channels for the majority of participants) (Chatrian, Lettich, & Nelson, 1985); and improper experimentation or artifacts (for P13, P39, and P40) (Lin et al., 2018).
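The detection step itself is a simple thresholding of the estimated reliabilities; a one-line sketch:

```python
import numpy as np

def detect_noisy_channels(pi, low=0.15, high=0.85):
    """Indices of channels whose estimated reliability pi_n lies in [low, high]."""
    pi = np.asarray(pi)
    return np.where((pi >= low) & (pi <= high))[0]
```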
## 6 Limitations and Future Work
In this work, the cooperation mechanism among channels is simplified as a weighted majority voting system, and different trials are treated independently. We intend to formulate it with more complex mechanisms, such as a Markov decision process (MDP), to conduct learning and decision making simultaneously. Previous work (Chen, Jiao, & Lin, 2016; Chen, Lin, & Zhou, 2015) has studied the decision-making process among crowd (noisy) workers, which is promising for investigating the cooperation mechanism among noisy channels in our setting. Efforts are underway to apply this approach in future work.
Furthermore, brain dynamics are nonstationary and characterized by significant trial-by-trial variability (Yarkoni, Barch, Gray, Conturo, & Braver, 2009). Due to this variability, CArank would suffer from repeated training and updating costs with respect to all new data. We therefore consider extending CArank to a real-time mental fatigue monitoring system by calibrating CArank online. Inspired by the work of Weng and Lin (2011) and Jaini et al. (2017), Bayesian moment matching offers a promising way to sequentially update a nonconjugate likelihood function (e.g., CArank) with analytic update rules.
## 7 Conclusion
This work proposes a CArank model to assess the state of mental fatigue. The efficacy of the model was demonstrated using EEG data collected in a sustained driving task from 40 participants. This model has been combined with a stochastic-generalized expectation-maximization (SGEM) algorithm to provide an efficient update in the large-scale setting. CArank uses a unique methodology with a relaxed alternative, ordinal classification, to circumvent overfitting to the extreme values of RTs. It has been demonstrated that the overall performance of CArank can be significantly improved with the introduction of a transition matrix, which enables the technique to evaluate the reliability of informative EEG channels while detecting noisy EEG channels. Empirical results show that CArank delivers significant improvements over simple classification and regression methods in terms of global ranking preservation.
## Notes
1
When applied to BDP, the subtle difference between the RTs may be caused not by the intrinsic difference between BDP but the unknown noise.
2
In the following, we omitted the subscripts for simplicity.
3
A promising approach to generalizing the transition matrix $\Pi_n$ (equation 3.9) is to introduce the concept of a confidence region to measure the equal cases (Pregibon, 1981).
4
Here, the step size is set to $\eta_t = (t + 2)^{-\tau_0}$, where $t$ is the number of iterations and $0.5 < \tau_0 < 1$. The smaller $\tau_0$ is, the larger the update $\eta_t$ is and the more quickly we forget (decay) our old parameters. This can lead to swift progress but also generates instability.
5
According to Huang et al. (2015), the couplings between pairs of the MCC, ACC, lSMC, rSMC, PCC, and ESC regions increase at intermediate levels of attention, revealing that an enhancement of cortico-cortical interaction is necessary to maintain task performance and prevent mental fatigue. Further, higher connectivity is associated with optimal performance, while very few connected nodes are associated with poor performance. See Huang et al. (2015) for more information.
6
https://www.csie.ntu.edu.tw/~cjlin/libsvm/.
7
In terms of the L-BFGS implementation, a Matlab code can be downloaded from Granzow (2017).
## Acknowledgments
I.W.T. is supported by ARC under grant DP180100106 and DP200101328. M.S. was supported by the International Research Center for Neurointelligence (WPI-IRCN) at the University of Tokyo Institutes for Advanced Study.
## References
Adams, R. J., Appleton, S. L., Taylor, A. W., Gill, T. K., Lang, C., McEvoy, R. D., & Antic, N. A. (2017). Sleep health of Australian adults in 2016: Results of the 2016 Sleep Health Foundation national survey. Sleep Health: Journal of the National Sleep Foundation, 3(1), 35–42.
Alharbi, N. (2018). A novel approach for noise removal and distinction of EEG recordings. Biomedical Signal Processing and Control, 39, 23–33.
Basak, D., Pal, S., & Patranabis, D. C. (2007). Support vector regression. Neural Information Processing – Letters and Reviews, 11(10).
Becirovic, E. (2017). On social choice in social networks. Master's thesis, Linköping University. http://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1117841&dswid=1470.
Bishop, C. M. (2006). Pattern recognition and machine learning. Berlin: Springer.
Blankertz, B., Tangermann, M., Vidaurre, C., Dickhaus, T., Sannelli, C., Popescu, F., … Müller, K.-R. (2009). Detecting mental states by machine learning techniques: The Berlin brain–computer interface. In B. Graimann (Ed.), Brain-computer interfaces (pp. 113–135). Berlin: Springer.
Boksem, M. A., & Tops, M. (2008). Mental fatigue: Costs and benefits. Brain Research Reviews, 59(1), 125–139.
Byrd, R. H., Lu, P., Nocedal, J., & Zhu, C. (1995). A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(5), 1190–1208.
Cappé, O., & Moulines, E. (2009). On-line expectation–maximization algorithm for latent data models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(3), 593–613.
Chang, C.-C., & Lin, C.-J. (2011). LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(3), 1–27.
Chatrian, G., Lettich, E., & Nelson, P. (1985). Ten percent electrode system for topographic studies of spontaneous and evoked EEG activities. American Journal of EEG Technology, 25(2), 83–92.
Chen, X., Jiao, K., & Lin, Q. (2016). Bayesian decision process for cost-efficient dynamic ranking via crowdsourcing. Journal of Machine Learning Research, 17(217), 1–40.
Chen, X., Lin, Q., & Zhou, D. (2015). Statistical decision making for optimal budget allocation in crowd labeling. Journal of Machine Learning Research, 16(1), 1–46.
Chuang, C.-H., Cao, Z., King, J.-T., Wu, B.-S., Wang, Y.-K., & Lin, C.-T. (2018). Brain electrodynamic and hemodynamic signatures against fatigue during driving. Frontiers in Neuroscience, 12, 181.
Cook, D. B., O'Connor, P. J., Lange, G., & Steffener, J. (2007). Functional neuroimaging correlates of mental fatigue induced by cognition among chronic fatigue syndrome patients and controls. NeuroImage, 36(1), 108–122.
de Naurois, C. J., Bourdin, C., Stratulat, A., Diaz, E., & Vercher, J.-L. (2017). Detection and prediction of driver drowsiness using artificial neural network models. Accident Analysis and Prevention, 126, 95–104.
Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39, 1–38.
Ekman, P. E., & Davidson, R. J. (1994). The nature of emotion: Fundamental questions. New York: Oxford University Press.
Fazli, S., Popescu, F., Danóczy, M., Blankertz, B., Müller, K.-R., & Grozea, C. (2009). Subject-independent mental state classification in single trials. Neural Networks, 22(9), 1305–1312.
Franks, D. D. (2019). Neurosociology: Fundamentals and current findings. Berlin: Springer.
Gramann, K., Müller, H., Schönebeck, B., & Debus, G. (2006). The neural basis of ego- and allocentric reference frames in spatial navigation: Evidence from spatiotemporal coupled current density reconstruction. Brain Research, 1118(1), 116–129.
Granzow, B. (2017). A Matlab implementation of L-BFGS-B. https://github.com/bgranzow/L-BFGS-B.
Hajinoroozi, M., Mao, Z., Jung, T.-P., Lin, C.-T., & Huang, Y. (2016). EEG-based prediction of driver's cognitive performance by deep convolutional neural network. Signal Processing: Image Communication, 47, 549–555.
Hammon, P. S., & de Sa, V. R. (2007). Preprocessing and meta-classification for brain-computer interfaces. IEEE Transactions on Biomedical Engineering, 54(3), 518–525.
Hastie, T., Tibshirani, R., & Friedman, J. (2009). The elements of statistical learning: Data mining, inference, and prediction. New York: Springer.
Homan, R. W., Herman, J., & Purdy, P. (1987). Cerebral location of international 10–20 system electrode placement. Electroencephalography and Clinical Neurophysiology, 66(4), 376–382.
Huang, C.-S., Pal, N. R., Chuang, C.-H., & Lin, C.-T. (2015). Identifying changes in EEG information transfer during drowsy driving by transfer entropy. Frontiers in Human Neuroscience, 9, 570.
Izuma, K., & Adolphs, R. (2013). Social manipulation of preference in the human brain. Neuron, 78(3), 563–573.
Jaini, P., Chen, Z., Carbajal, P., Law, E., Middleton, L., Regan, K., … Poupart, P. (2017). Online Bayesian transfer learning for sequential data modeling. In International Conference on Learning Representations. https://openreview.net/forum?id=HygBZnRctX.
Jap, B. T., Lal, S., Fischer, P., & Bekiaris, E. (2009). Using EEG spectral components to assess algorithms for detecting fatigue. Expert Systems with Applications, 36(2), 2352–2359.
Ji, Q., Zhu, Z., & Lan, P. (2004). Real-time nonintrusive monitoring and prediction of driver fatigue. IEEE Transactions on Vehicular Technology, 53(4), 1052–1068.
Jin, Z., Zhou, G., Gao, D., & Zhang, Y. (2018). EEG classification using sparse Bayesian extreme learning machine for brain–computer interface. Neural Computing and Applications, 1–9. https://doi.org/10.1007/s00521-018-3735-3.
Kaji, H., Iizuka, H., & Sugiyama, M. (2019). ECG-based concentration recognition with multi-task regression. IEEE Transactions on Biomedical Engineering, 66(1), 101–110.
Kasai, H. (2017). SGDLibrary: A MATLAB library for stochastic gradient descent algorithms. arXiv:1710.10951.
Kohlmorgen, J., Dornhege, G., Braun, M., Blankertz, B., Curio, G., Hagemann, K., … Kincses, W. (2007). Improving human performance in a real operating environment through real-time mental workload detection. In G. Dornhege, J. del R. Millán, T. Hinterberger, D. McFarland, & K.-R. Müller (Eds.), Toward brain-computer interfacing (pp. 409–422). Cambridge, MA: MIT Press.
Lal, S. K., Craig, A., Boord, P., Kirkup, L., & Nguyen, H. (2003). Development of an algorithm for an EEG-based driver fatigue countermeasure. Journal of Safety Research, 34(3), 321–328.
Liang, P., & Klein, D. (2009). Online EM for unsupervised models. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics (pp. 611–619). Stroudsburg, PA: Association for Computational Linguistics.
Lin, C.-T., Chuang, C.-H., Huang, C.-S., Tsai, S.-F., Lu, S.-W., Chen, Y.-H., & Ko, L.-W. (2014). Wireless and wearable EEG system for evaluating driver vigilance. IEEE Transactions on Biomedical Circuits and Systems, 8(2), 165–176.
Lin, C.-T., Chuang, C.-H., Kerick, S., Mullen, T., Jung, T.-P., Ko, L.-W., … McDowell, K. (2016). Mind-wandering tends to occur under low perceptual demands during driving. Scientific Reports, 6, 21353.
Lin, C.-T., Huang, C.-S., Yang, W.-Y., Singh, A. K., Chuang, C.-H., & Wang, Y.-K. (2018). Real-time EEG signal enhancement using canonical correlation analysis and gaussian mixture clustering. Journal of Healthcare Engineering.
Möckel, T., Beste, C., & Wascher, E. (2015). The effects of time on task in response selection: An ERP study of mental fatigue. Scientific Reports, 5, 10113.
Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., … Lerer, A. (2017). Automatic differentiation in PyTorch. https://openreview.net/forum?id=BJJsrmfCZ.
Pregibon, D. (1981). Logistic regression diagnostics. Annals of Statistics, 9(4), 705–724.
Roche, A. (2011). EM algorithm and variants: An informal tutorial. arXiv:1105.1476.
Teplan, M. (2002). Fundamentals of EEG measurement. Measurement Science Review, 2(2), 1–11.
Tian, S., Wang, Y., Dong, G., Pei, W., & Chen, H. (2018). Mental fatigue estimation using EEG in a vigilance task and resting states. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (pp. 1980–1983). Piscataway, NJ: IEEE.
Ting, P.-H., Hwang, J.-R., Doong, J.-L., & Jeng, M.-C. (2008). Driver fatigue and highway driving: A simulator study. Physiology and Behavior, 94(3), 448–453.
Wascher, E., Rasch, B., Sänger, J., Hoffmann, S., Schneider, D., Rinkenauer, G., … Gutberlet, I. (2014). Frontal theta activity reflects distinct aspects of mental fatigue. Biological Psychology, 96, 57–65.
Wei, C.-S., Lin, Y.-P., Wang, Y.-T., Jung, T.-P., Bigdely-Shamlo, N., & Lin, C.-T. (2015). Selective transfer learning for EEG-based drowsiness detection. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (pp. 3229–3232). Piscataway, NJ: IEEE.
Welch, P. (1967). The use of fast Fourier transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms. IEEE Transactions on Audio and Electroacoustics, 15(2), 70–73.
Weng, R. C., & Lin, C.-J. (2011). A Bayesian approximation method for online ranking. Journal of Machine Learning Research, 12, 267–300.
Yan, L., Dodier, R. H., Mozer, M., & Wolniewicz, R. H. (2003). Optimizing classifier performance via an approximation to the Wilcoxon-Mann-Whitney statistic. In Proceedings of the 20th International Conference on Machine Learning (pp. 848–855). Palo Alto, CA: AAAI.
Yarkoni, T., Barch, D. M., Gray, J. R., Conturo, T. E., & Braver, T. S. (2009). BOLD correlates of trial-by-trial reaction time variability in gray and white matter: A multi-study fMRI analysis. PLoS One, 4(1), e4257.
Zarei, R. (2017). Developing enhanced classification methods for ECG and EEG signals. PhD diss., Victoria University.
Zeng, H., Yang, C., Dai, G., Qin, F., Zhang, J., & Kong, W. (2018). EEG classification of driver mental states by deep learning. Cognitive Neurodynamics, 12(6), 597–606.
Zhang, G. P. (2000). Neural networks for classification: A survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 30(4), 451–462.
Zhang, X., Yao, L., Wang, X., Monaghan, J., & McAlpine, D. (2019). A survey on deep learning based brain computer interface: Recent advances and new frontiers. arXiv:1905.04149.
Zhang, Y., Zhou, G., Jin, J., Zhao, Q., Wang, X., & Cichocki, A. (2015). Sparse Bayesian classification of EEG for brain–computer interface. IEEE Transactions on Neural Networks and Learning Systems, 27(11), 2256–2267.
http://math.stackexchange.com/questions/107378/integral-and-density-cumulative-distribution-function | # Integral and Density/Cumulative Distribution Function
I am working on this question:
If we think of the electron as a particle, the function $P(r):=1-(2r^2+2r+1)e^{-2r}$ is the cumulative distribution function of the distance $r$ of the electron in a hydrogen atom from the center of the atom (The distance is measured in Bohr radii). For example, $P(1)=1-5e^{-2}\approx 0.32$ means that the electron is within 1 Bohr radius from the center of the atom 32% of the time.
(a) Find a formula for the density function of this distribution. Sketch the density function and the cumulative distribution function.
(b) Find the median distance and the mean distance. Near what value of r is an electron most likely to be found?
Is the density function the derivative of the cumulative distribution function?
$$P'(r)=4r^{2}e^{-2r}$$
To find the mean distance I believe I use the formula:
$$\mu =\int_{-\infty }^{\infty}rP'(r)\cdot dr$$
For the median I am looking for the number $m$ such that:
$$\int_{m }^{\infty}P'(r)\cdot dr=\frac{1}{2}$$
I am thinking to find where the value of r that an electron is most likely to be found involves the max of $P'(r)$.
What you have written is sensible, but since you have the cumulative distribution function you can find the median directly by solving (numerically or looking at your graph) $P(r)=\frac{1}{2}.$
Likewise, the mean distance is also the integral of $1-P(r)$ on $r\geqslant0$. – Did Feb 9 '12 at 7:26 | 2016-02-11 15:44:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8050896525382996, "perplexity": 85.76986149763995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701162035.80/warc/CC-MAIN-20160205193922-00147-ip-10-236-182-209.ec2.internal.warc.gz"} |
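For what it's worth, the density $4r^2 e^{-2r}$ has its maximum at $r = 1$ (set its derivative to zero), the mean works out to $3/2$, and the median, the root of $P(r) = \tfrac{1}{2}$, is about $1.34$. A quick numerical check with SciPy, as a sketch:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

P = lambda r: 1 - (2*r**2 + 2*r + 1) * np.exp(-2*r)   # CDF
p = lambda r: 4 * r**2 * np.exp(-2*r)                 # density P'(r)

mean, _ = quad(lambda r: r * p(r), 0, np.inf)   # = 3/2 exactly
median = brentq(lambda r: P(r) - 0.5, 0, 10)    # ~ 1.34
# mode: p'(r) = (8r - 8r^2) e^{-2r} = 0 gives r = 1
print(mean, median)
```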
https://mathoverflow.net/questions/149358/are-there-trees-for-sigma2-1-textub | Are there trees for $(\Sigma^2_1)^{\text{uB}}$?
If there is a proper class of Woodin cardinals, then Woodin showed (using stationary towers) that $(\Sigma^2_1)^{\text{uB}}$ statements are generically absolute, where $\text{uB}$ denotes the pointclass of universally Baire sets of reals. This generic absoluteness result has a more local version: if $\lambda$ is a limit of Woodin cardinals, then $(\Sigma^2_1)^{\text{uB}_\lambda}$ statements are generically absolute for posets of size less than $\lambda$, where $\text{uB}_\lambda$ denotes the pointclass of $\lambda$-universally Baire sets of reals (or what some people would call $\mathord{<}\lambda$-universally Baire sets of reals.)
The "local" generic absoluteness for $(\Sigma^2_1)^{\text{uB}_\lambda}$ can be explained in terms of trees (although the proof uses stationary towers instead.) More precisely, let $\varphi(v)$ be a formula in the language of set theory expanded by a unary predicate symbol. For every limit $\lambda$ of Woodin cardinals there is a tree $T_{\varphi,\lambda}$ such that in every generic extension $V[g]$ by a poset of size less than $\lambda$ we have
$$V[g] \models p[T_{\varphi,\lambda}] = \{x \in \mathbb{R} : \exists A \in \text{uB}_\lambda\,(\text{HC}; \mathord{\in},A) \models \varphi[x]\}.$$
This tree is obtained from the scale property for the pointclass $\Sigma^2_1$ of the derived model of $V$ at $\lambda$. Given these trees $T_{\varphi,\lambda}$, the "local" generic absoluteness follows by a standard argument using the absoluteness of well-foundedness.
My question is, can the "global" generic absoluteness for $(\Sigma^2_1)^{\text{uB}}$ also be explained in terms of trees, assuming that there is a proper class of Woodin cardinals? More precisely, is there a single proper-class-sized tree $T_\varphi$ such that in every generic extension $V[g]$ we have
$$V[g] \models p[T_{\varphi}] = \{x \in \mathbb{R} : \exists A \in \text{uB}\,(\text{HC}; \mathord{\in},A) \models \varphi[x]\}?$$
I can think of two possible approaches, both with apparently serious problems.
1. Consider the "derived model at $\text{Ord}$." Problem: this doesn't really exist.
2. Define $T_\varphi$ as the amalgamation of the trees $T_{\varphi,\lambda}$ for various $\lambda$, e.g. all limits of Woodin cardinals, or all limit of Woodin cardinals above some point. Problem: I don't see any way to show that the projection of such an amalgamated tree in some generic extension $V[g]$ is not too large.
• Is there a way to define "bigger and bigger" derived models for larger and larger $\lambda$ limits of Woodin cardinals and look at the corresponding trees? Along the way the derived models would have to cohere in some specific way so as to ensure that the successive projections of the trees agree. Basically more and more statements would have to be verified. The final tree could be a lim inf of the construction. This is just a quick guess which might turn out to be naive. In the 5th line there is a small typo: you meant to write "where $UB_{\lambda}$ denotes". Nice question by the way. Nov 19 '13 at 23:51
• @CarloVonSchnitzel Thanks. Fixing any particular generic extension $V[g]$, the trees $T_{\varphi,\lambda}$ for sufficiently large $\lambda$ all have the correct projection. (This is because in $V[g]$ we have $\text{uB}_\lambda = \text{uB}$ for all sufficiently large $\lambda$. Unfortunately the meaning of "sufficiently large" depends on $g$.) So if there were a tree whose projection in any generic extension was the limit of the projections of the trees $T_{\varphi,\lambda}$ as $\lambda \to \text{Ord}$ then the answer to my question would be "yes". Nov 20 '13 at 0:49
• ...but I don't know of any general construction of a tree whose projection is the limit (or lim sup or lim inf, if we don't want to assume that the limit exists) of the projections of a given uncountable sequence of trees, however. Nov 20 '13 at 0:51
• Also, note that for any given $\lambda$ the tree $T_{\varphi,\lambda}$ is a set, so in sufficiently large generic extensions $V[g]$ it is countable and its projection is analytic and therefore too simple to be the desired $(\Sigma^2_1)^{\text{uB}_\lambda}$ set. But one approach would be trying to show that this analytic set (the projection) is always contained in the desired $(\Sigma^2_1)^{\text{uB}_\lambda}$ set. I have no idea whether this is true. (The analogous containment is true if you consider various sizes of Shoenfield tree for $\Sigma^1_2$, so maybe there is hope.) Nov 20 '13 at 0:57
The answer is yes. Hugh Woodin showed me the following argument, which I post here with his permission.
Let $\varphi(v)$ be a formula in the language of set theory expanded by a unary predicate symbol. Given a pair of ordinals $(\alpha, \beta)$, working in $V^{\text{Col}(\omega,\alpha)}$ we let $B$ be a universally Baire set of reals having Wadge rank $\beta$ in the model $L(B,\mathbb{R})$, which satisfies $\mathsf{AD}^+$. Note that this model depends only on $\beta$ and not on $B$, and also that every set of reals in $L(B,\mathbb{R})$ is universally Baire because $B^\sharp$ exists and is universally Baire. Let $T_{\alpha,\beta}$ be the tree of a $(\Sigma^2_1)^{L(B,\mathbb{R})}$-scale on the set $$\{x \in \mathbb{R} : \exists C \in L(B,\mathbb{R})\, (\text{HC}; \in, C) \models \varphi[x]\}.$$ By the homogeneity of $\text{Col}(\omega,\alpha)$ this tree is independent of the choice of generic filter and we have $T_{\alpha,\beta} \in V$. Let $T$ be the amalgamation of all the trees $T_{\alpha,\beta}$, so that $T$ is a tree on $\omega \times \text{Ord}$ and $p[T] = \bigcup_{\alpha,\beta \in \text{Ord}} p[T_{\alpha,\beta}]$ in every generic extension of $V$.
We claim that $$V^{\text{Col}(\omega,\alpha)} \models p[T] = \{x \in \mathbb{R} : \exists C \in \text{uB}\, (\text{HC}; \in, C) \models \varphi[x]\},$$ for every ordinal $\alpha$. The right-to-left inclusion follows immediately from the definition of the trees $T_{\alpha,\beta}$, so it remains to prove the left-to-right inclusion. Let $G \subset \text{Col}(\omega,\alpha)$ be a $V$-generic filter and let $x \in p[T]^{V[G]}$, say $x \in p[T_{\alpha',\beta'}]$ for ordinals $\alpha'$ and $\beta'$. We want to show
\begin{equation*}\tag{$*$} \exists C \in \text{uB}^{V[G]}\, (\text{HC}^{V[G]}; \in, C) \models \varphi[x]. \end{equation*}
If $\alpha' = \alpha$, this is easy. There are two remaining cases to consider:
1. $\alpha' > \alpha$.
2. $\alpha' < \alpha$.
In case (1), we have ($*$) by $(\Sigma^1_2)^{\text{uB}}$ generic absoluteness for $\text{Col}(\omega,\alpha')$. In case (2), we use the fact that if $B \in V[G \restriction \alpha']$ is a universally Baire set as in the definition of the tree $T_{\alpha', \beta'}$, then $B^\sharp$ exists and is universally Baire, so there is an elementary embedding $$j : L(B, \mathbb{R}^{V[G \restriction \alpha]}) \to L(B^{V[G]}, \mathbb{R}^{V[G]}),$$ and we have $j(T_{\alpha', \beta'}) = T_{\alpha, \beta}$ where $\beta$ is the Wadge rank of $B^{V[G]}$. Considering the pointwise image of a branch witnessing $x \in p[T_{\alpha',\beta'}]$, we have $x \in p[T_{\alpha,\beta}]$ . Therefore ($*$) is witnessed by a set of reals $C \in L(B^{V[G]}, \mathbb{R}^{V[G]})$. | 2021-11-29 00:23:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9527413845062256, "perplexity": 149.75736813245558}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358673.74/warc/CC-MAIN-20211128224316-20211129014316-00369.warc.gz"} |
http://mathhelpforum.com/calculus/35082-proving.html | 1. ## Proving...
Q: Using the definitions of $\mathrm{cosh} x$ and $\mathrm{sinh} x$ in terms of $e^x$ and $e^{-x}$, prove that $\mathrm{cosh} 2x = 2 \mathrm{cosh}^2 x - 1$.
2. Originally Posted by Air
Q: Using the definitions of $\mathrm{cosh} x$ and $\mathrm{sinh} x$ in terms of $e^x$ and $e^{-x}$, prove that $\mathrm{cosh} 2x = 2 \mathrm{cosh}^2 x - 1$.
Prove: $\mathrm{cosh} 2x = 2 \mathrm{cosh}^2 x - 1$.
$\cosh (2x) = \frac{e^{2x} + e^{-2x}}{2} = \frac{(e^x + e^{-x})^2 - 2}{2} = \frac{(e^x + e^{-x})^2}{2} - \frac{2}{2} = 2 \left(\frac{e^x + e^{-x}}{2}\right)^2 - 1$ .......
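(Editorial aside, not part of the thread: the identity is easy to machine-check from the exponential definitions. A minimal SymPy sketch:)

```python
# Symbolic check of cosh(2x) = 2 cosh^2(x) - 1 from the e^x definitions.
from sympy import symbols, cosh, exp, expand

x = symbols('x', real=True)
lhs = cosh(2*x).rewrite(exp)            # (e^{2x} + e^{-2x}) / 2
rhs = 2*((exp(x) + exp(-x))/2)**2 - 1   # 2 cosh^2(x) - 1, written out

print(expand(lhs - rhs))                # prints 0
```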
3. Originally Posted by mr fantastic
Prove: $\mathrm{cosh} 2x = 2 \mathrm{cosh}^2 x - 1$.
$\cosh (2x) = \frac{e^{2x} + e^{-2x}}{2} = \frac{(e^x + e^{-x})^2 \mathbf{- 2}}{2} = \frac{(e^x + e^{-x})^2}{2} - \frac{2}{2} = 2 \left(\frac{e^x + e^{-x}}{2}\right)^2 - 1$ .......
How did you get $-2$?
4. Originally Posted by Air
How did you get $-2$?
$(e^{x} + e^{-x})^2 = e^{2x} + 2 (e^x)(e^{-x}) + e^{-2x} = e^{2x} + 2 + e^{-2x}$.
Therefore $e^{2x} + e^{-2x} = (e^{x} + e^{-x})^2 - 2$. | 2018-02-24 21:02:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9828229546546936, "perplexity": 642.4370221959252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815934.81/warc/CC-MAIN-20180224191934-20180224211934-00437.warc.gz"} |
https://mathematica.stackexchange.com/questions/179476/how-to-rationalize-a-fraction-in-mathematica | # How to “rationalize” a fraction in Mathematica
When I try to rationalize the following number $$1\over{2^{1/4}+4^{1/4}+8^{1/4}}$$
FullSimplify[1/( 2^(1/4)+4^(1/4)+8^(1/4) )]
I get the same expression, and not my hand-calculation result which is
$${(\sqrt{4+3\sqrt{2}}-\sqrt 2) (3\sqrt 2 -2)}\over 14$$ What command should I use, if there is one?
Edit: "rationalize" meaning as in ordinary algebra where roots are moved from denominator to numerator, and not as writing a decimal as a fraction
You can use ToRadicals and RootReduce instead:
Simplify @ ToRadicals @ RootReduce[1/(2^(1/4)+4^(1/4)+8^(1/4))] //TeXForm
$\frac{1}{14} \left(-6+2 \sqrt{2}+\sqrt{2 \left(8+9 \sqrt{2}\right)}\right)$
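(Aside, not from the thread: whatever symbolic route one takes, the denested form is easy to verify numerically. A plain-Python check of the result above:)

```python
# Numeric check that the denested form equals the original expression.
from math import sqrt

original = 1 / (2**0.25 + 4**0.25 + 8**0.25)
denested = (-6 + 2*sqrt(2) + sqrt(2*(8 + 9*sqrt(2)))) / 14

print(original, denested)                # both ~0.23335...
print(abs(original - denested) < 1e-12)  # True
```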
• Very nice! Though we should make it clear to the OP that, while the solutions offered here work for his problem, none of them are general. Consider, for instance: test = {1/(Sqrt[2] + Sqrt[3] + Sqrt[5]), (3 + Sqrt[11])/(4 + Sqrt[11]), (2 + Sqrt[3])/(1 + Sqrt[5 + Sqrt[11]]), Sqrt[(1 + Sqrt[2])/(1 + Sqrt[3])], Sqrt[3 + 2 Sqrt[2]]/(1 + Sqrt[2])}; Given that this is not an obscure problem, yet MMA nevertheless doesn't have a built-in function for this, I gather it must be difficult to create a solution that is sufficiently general for Wolfram to offer it. – theorist Aug 4 '18 at 1:47
• ....Though, having said that, there is such a function in Maple: maplesoft.com/support/help/maple/view.aspx?path=rationalize – theorist Aug 8 '18 at 0:27
• @theorist Thanks for the link to Maple. It actually gives an answer for variables a, b, c, but oddly enough the answer does not look symmetric! – Maesumi Aug 9 '18 at 21:05
In this case ToNumberField gives a denested form:
ToRadicals[ToNumberField[1/(2^(1/4) + 4^(1/4) + 8^(1/4))]] // Together // TeXForm
$\frac{1}{14} \left(-6+4 \sqrt[4]{2}+2 \sqrt{2}+2^{3/4}\right)$ | 2019-06-27 09:24:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.704035758972168, "perplexity": 2945.7846593100785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628001014.85/warc/CC-MAIN-20190627075525-20190627101525-00001.warc.gz"} |
https://aimsciences.org/article/doi/10.3934/cpaa.2004.3.75 | # American Institute of Mathematical Sciences
March 2004, 3(1): 75-84. doi: 10.3934/cpaa.2004.3.75
## Nonlinear functionals in oscillation theory of matrix differential systems
1 School of Mathematics and Statistics, Carleton University, Ottawa, Ontario, Canada, K1S 5B6, Canada
Received November 2002 Revised July 2003 Published January 2004
General oscillation criteria for second order two-term linear differential systems and, as a consequence, a more general class of Hamiltonian systems with symmetric coefficients are established using nonlinear functionals on a suitable matrix space. This extends and unifies most known results dealing with oscillation criteria using the particular maximum eigenvalue functional.
Citation: Angelo B. Mingarelli. Nonlinear functionals in oscillation theory of matrix differential systems. Communications on Pure & Applied Analysis, 2004, 3 (1) : 75-84. doi: 10.3934/cpaa.2004.3.75
2018 Impact Factor: 0.925 | 2019-08-22 11:13:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4980359375476837, "perplexity": 5244.048496610465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317113.27/warc/CC-MAIN-20190822110215-20190822132215-00508.warc.gz"} |
http://openstudy.com/updates/5089ebeae4b077c2ef2e0f16 | ## ChuckNora16 Group Title The graph of a function f(x) is as shown. Select the graph of the inverse function. 2 years ago 2 years ago
1. ChuckNora16
2. ChuckNora16
3. ChuckNora16
I don't know where to start.
4. JoãoVitorMC
[drawing]
5. ChuckNora16
Is that the inverse? (sorry I took a minute to reply)
6. satellite73
take your graph, and reflect it about the line $$y=x$$
7. ChuckNora16
Is that diagonal? Like instead of reflecting across the x or y axis directly?
8. satellite73
draw tool is not working for me (not much here is, actually) so I will try to attach a picture
9. satellite73
but yes, right across the line $$y=x$$

10. ChuckNora16

Okay. Thank you for helping me by the way. I really appreciate it.

11. ChuckNora16

So would it look like this graph?

12. satellite73

here is a graph that looks something like your function http://www.wolframalpha.com/input/?i=1%2Fx%2B5+domain+-5..5

13. ChuckNora16

To get the inverse of the graph, would I flip 1/x to just x + 5?

14. satellite73

here is the graph of the inverse http://www.wolframalpha.com/input/?i=1%2Fx%2B5+domain+-5..5

15. satellite73

oh no!!

16. satellite73

inverse does not mean "reciprocal" that is for numbers inverse means inverse function

17. ChuckNora16

OH! Sorry lol

18. satellite73

so if $$f(x)=\frac{1}{x}+5$$ that means a) take the reciprocal b) add 5 inverse is a) subtract 5 b) take the reciprocal i.e. $$f^{-1}(x)=\frac{1}{x-5}$$
19. satellite73
looks like you picked the right one
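(Editorial aside, not part of the thread: the algebraic inverse satellite73 wrote down can be checked symbolically. A minimal SymPy sketch, variable names mine:)

```python
# Solve y = 1/x + 5 for x to recover the inverse function.
from sympy import symbols, solve, Eq

x, y = symbols('x y')
inverse = solve(Eq(y, 1/x + 5), x)
print(inverse)  # [1/(y - 5)] (up to how SymPy arranges it): f^{-1}(y) = 1/(y - 5)
```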
20. ChuckNora16
Ooh, okay. Well thank you!
21. satellite73
yw | 2014-10-30 12:20:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.804835855960846, "perplexity": 2985.193822160837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637897717.20/warc/CC-MAIN-20141030025817-00005-ip-10-16-133-185.ec2.internal.warc.gz"} |
http://mathhelpforum.com/advanced-applied-math/18226-electrical-systems-paper-mainly-trig.html | # Math Help - Electrical Systems Paper (Mainly Trig)
1. ## Electrical Systems Paper (Mainly Trig)
Hi guys, thanks for the help given with the last paper I was looking for help with; I think I just may have passed that one. However, next comes the hard paper. This one is Electrical Systems, and there is nothing "Electrical" about it in my view: the entire paper is math theory and the majority of it is trigonometry (with which I am incredibly uncomfortable).
I will start from the beginning with Q2 and Q3 of the paper (Q1 was simple). Any help on what I need to look for and how to manipulate the questions for the answer would be much appreciated, thanks.
Q2:
A circuit breaker is designed to stop fault current flowing in a circuit at a point in time where the current in the circuit is 0.
If the fault current in the circuit is: $i(t) = 50cos(314t - 60^\circ)$ Amperes

i) At what time would the first current zero in the circuit occur?

ii) Given the circuit breaker will take 40 ms to operate, at what time would the next current zero occur after that?
For the first part I am assuming I should make i(t) = 0, so:
$50cos(314t - 60^\circ) = 0$
Now, since I am unsure how one should correctly manipulate the cos trig function in this case, I would do the following ... is this correct?:
$314t-60^\circ = 50cos(0)$
or should I first have divided both sides by 50 to cancel the 50 out?
2. Originally Posted by Alias_NeO
...
Q2:
A circuit breaker is designed to stop fault current flowing in a circuit at a point in time where the current in the circuit is 0.
If the fault current in the circuit is: $i(t) = 50cos(314t - 60^\circ)$ Amperes

i) At what time would the first current zero in the circuit occur?

ii) Given the circuit breaker will take 40 ms to operate, at what time would the next current zero occur after that?
For the first part I am assuming I should make i(t) = 0, so:
$50cos(314t - 60^\circ) = 0$
Now, since I am unsure how one should correctly manipulate the cos trig function in this case, I would do the following ... is this correct?:
...
Hello,
you start 100% correctly but you should go on maybe like this. (First a personal remark: I assume that the 314 is a degree value too, and that is quite an unusual notation)
$50cos(314t - 60^\circ) = 0$ Divide by 50
$cos(314t - 60^\circ) = 0$. Now you should know that the first (positive) zero of the cosine function is at α = 90°. Use the arccos function (or maybe you say the cos^(-1) function, or the inverse function of the cosine):
$314^\circ \cdot t -60^\circ = 90^\circ~\Longrightarrow~ 314^\circ \cdot t = 150^\circ~\Longrightarrow~t\approx 0.4777\ s$
to ii)
The next positive zero occurs at 270°. Your equation becomes now:
$314^\circ \cdot t -60^\circ = 270^\circ~\Longrightarrow~ 314^\circ \cdot t = 330^\circ~\Longrightarrow~t\approx 1.0560\ s$
The first operation took place at 0.4777 s. The operation time is 40 ms = 0.04 s. The next operation takes place at 1.0560 s. The elapsed time is:
1.0560 s - (0.4777 s + 0.04 s) = 0.5383 s
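(Editorial aside, not from the thread: in power engineering the 314 would usually be read as an angular frequency in rad/s, since 2π · 50 Hz ≈ 314 rad/s, in which case the zeros land in the millisecond range. A small Python check under that assumption:)

```python
# Same calculation, assuming 314 is an angular frequency in rad/s.
import math

omega = 314.0              # rad/s (~50 Hz mains)
phi = math.radians(60)     # the 60 degree phase lag, in radians

# zeros of cos(omega*t - phi) occur at omega*t - phi = pi/2 + k*pi
def zero(k):
    return (math.pi/2 + k*math.pi + phi) / omega

t0 = zero(0)
print(f"first zero: {t0*1e3:.2f} ms")             # ~8.34 ms

# breaker needs 40 ms: first zero at or after t0 + 0.040 s
k = 0
while zero(k) < t0 + 0.040:
    k += 1
print(f"next usable zero: {zero(k)*1e3:.2f} ms")  # ~48.36 ms
```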
3. ## Thanks
Hey, thanks for the reply. Unfortunately I sat the paper yesterday before I got the reply, and failed it miserably.

I think it was unfair because he (the lecturer, who is hated by students and staff alike) gave us no study material, no past papers or anything. Then, to make it worse, he totally changed the structure of this exam, which hasn't been done by any of the lecturers in a long time.
Not to worry, what's done is done. | 2015-03-27 02:20:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6831619739532471, "perplexity": 745.9703702290802}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131293580.17/warc/CC-MAIN-20150323172133-00218-ip-10-168-14-71.ec2.internal.warc.gz"} |
http://gmatclub.com/forum/m11-72303.html | # m11#9
m11#9 [#permalink] 01 Nov 2008, 07:23
Which of the following sets must have the same standard deviation as set {a, b, c}?
A. {ab, b^2, cb}
B. {2a, b + a, c + b}
C. {0, b + a, c - a}
D. {ab, bc, ac}
E. {ab + c, a(1 + b), b(1+a)}
(C) 2008 GMAT Club - m11#9
Source: GMAT Club Tests - hardest GMAT questions
Re: m11#9 [#permalink] 01 Nov 2008, 10:51
The standard deviation of a set does not change if a constant is added to all the members.
Thus, standard deviation of (a,b,c) will be the same as of (a+ab, b+ab, c+ab).
And, option E is the same as (a+ab, b+ab, c+ab).
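(Editorial aside, not part of the thread: scthakur's rule is easy to see numerically. A tiny Python illustration, with sample values of my choosing:)

```python
# Adding one constant to every member leaves the standard deviation unchanged.
import statistics

a, b, c = 2.0, 7.0, 11.0               # arbitrary sample values
shift = a * b                          # the constant ab added in option E

original = [a, b, c]
shifted = [v + shift for v in original]

print(statistics.pstdev(original))     # population SD of {a, b, c}
print(statistics.pstdev(shifted))      # identical: the spread is unchanged
```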
Re: m11#9 [#permalink] 01 Nov 2008, 11:53
scthakur wrote:
The standard deviation of a set does not change if a constant is added to all the members.
Thus, standard deviation of (a,b,c) will be the same as of (a+ab, b+ab, c+ab).
And, option E is the same as (a+ab, b+ab, c+ab).
Beautiful approach by scthakur. That's the best approach to this question. +1.

The SD of a, b and c and of (a+x), (b+x) and (c+x) is the same.

Trying to find exactly what the SD of a, b and c is, and the same for each of the options in the question, does not help solve this question. What helps is understanding the question.
Re: m11#9 [#permalink] 22 Dec 2010, 06:02
My approach was actually using real numbers, such as a=2, b=3 and c=4. Though a bit lengthy, it worked, since I applied the rule (the less spread out my answers were, the closer my answer was), making it E. Thanks, now I know another rule: the standard deviation of a set does not change if a constant is added to all the members.
Re: m11#9 [#permalink] 29 Dec 2010, 05:38
E, addition or subtraction of Same constant term does not change standard deviation of the numbers
Re: m11#9 [#permalink] 27 Dec 2011, 21:07
Standard deviation is the spread of numbers. The question is asking which spread of letters equals that of a, b, c.
I picked numbers 2, 4, 6 for a, b, c.
Plugged in to find another set that has the same SD of 2. E is the only one that worked.
Re: m11#9 [#permalink] 26 Dec 2012, 05:12
amitdgr wrote:
Which of the following sets has the same standard deviation as set (a, b, c)?
(C) 2008 GMAT Club - m11#9
* $$(ab, b^2, cb)$$
* $$(2a, b + a, c + b)$$
* $$(0, b + a, c - a)$$
* $$(ab, bc, ac)$$
* $$(ab + c, a(1 + b), b(1+a))$$
OA: E
Source: GMAT Club Tests - hardest GMAT questions
http://gmatclub.com/tests/m11#expl9
I think the explanation is missing/incomplete. Please help me through this problem.
Which of the following sets must have the same standard deviation as set {a, b, c}?
A. {ab, b^2, cb}
B. {2a, b + a, c + b}
C. {0, b + a, c - a}
D. {ab, bc, ac}
E. {ab + c, a(1 + b), b(1+a)}
If we add or subtract a constant to each term in a set the standard deviation will not change.
Notice that set {(ab + c, a(1 + b), b(1+a)}={c+ab, a+ab, b+ab}, so this set is obtained by adding some number ab to each term of set {a, b, c}, which means that those sets must have the same standard deviation.
Re: m11#9 [#permalink] 19 Jan 2013, 06:41
Plugging numbers in is not so time-consuming, even though it is prone to errors.
I put a=1, b=2, c=3 with a S.D of +/- 1
A = 2,4,6
B = 2,3,5
C = 0,3,2
D = 2,6,3
E = 5,3,4
E is the only set that has its numbers spread one integer apart.
Re: m11#9 [#permalink] 28 Oct 2013, 06:22
Awesome and crisp approach by scthakur and Bunuel..Great work
Re: m11#9 [#permalink] 20 Dec 2013, 18:33
I took a similar, although longer, approach to solving this problem as the person above me. Immediately understanding that this problem was evaluating the spread, I relied on the use of "plugging" numbers in for a,b,c (1,2,3) and then looked for a similar spread amongst the answer choices.
Having read and followed the Manhattan Advanced Quant books, I first started with E and realized that this is the right answer --> matches my "target".

Would have been even quicker if I had realized that "ab" is consistent, a constant, throughout the 3 terms; and adding a constant to the terms does not alter the spread. Thanks for the clarification on this one, guys!
Powered by phpBB © phpBB Group and phpBB SEO Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®. | 2015-10-07 00:21:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5257681608200073, "perplexity": 7756.896168849842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736679756.40/warc/CC-MAIN-20151001215759-00050-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://lqp2.org/node/1129 | # The Casimir effect from the point of view of algebraic quantum field theory
Claudio Dappiaggi, Gabriele Nosari, Nicola Pinamonti
December 03, 2014
We consider a region of Minkowski spacetime bounded either by one or by two parallel, infinitely extended plates orthogonal to a spatial direction and a real Klein-Gordon field satisfying Dirichlet boundary conditions. We quantize these two systems within the algebraic approach to quantum field theory using the so-called functional formalism. As a first step we construct a suitable unital ${}^*$-algebra of observables whose generating functionals are characterized by a labeling space which is at the same time optimal and separating. Subsequently we give a definition for these systems of Hadamard states and we investigate explicit examples. In the case of a single plate, it turns out that one can build algebraic states via a pull-back of those on the whole Minkowski spacetime, moreover inheriting from them the Hadamard property. When we consider instead two plates, algebraic states can be put in correspondence with those on flat spacetime via the so-called method of images, which we translate to the algebraic setting. For a massless scalar field we show that this procedure works perfectly for a large class of quasi-free states including the Poincaré vacuum and KMS states. Eventually we use our results in both systems to introduce the notion of Wick polynomials, showing that a global extended algebra does not exist. Furthermore we construct explicitly the two-point function and the regularized energy density, showing, moreover, that the outcome is consistent with the standard results of the Casimir effect. | 2019-02-19 11:26:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7612619996070862, "perplexity": 272.9140498542829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247489933.47/warc/CC-MAIN-20190219101953-20190219123953-00146.warc.gz"} |
https://www.physicsforums.com/threads/munkres-text-question.519984/ | # Munkres text question.
1. Aug 8, 2011
### Fisicks
I can't seem to find an answer for 20.7 anywhere. Unfortunately, I do not have the skills to LaTeX the problem out, so I can only hope someone looks in the book.

My solution is that the supremum of the set of $a_i$'s must be finite, and the infimum must be greater than zero; the $b_i$'s have no restraints.
2. Aug 8, 2011
### micromass
Staff Emeritus
Hi Fisicks!
What is that an answer to? To the continuity of h, or to h being a homeomorphism?
For h to be continuous, you are correct: we only need to demand that $\sup{a_i}<+\infty$.
But for h to be a homeomorphism, it is also correct: we demand that $\sup{a_i}<+\infty$ and $\inf{a_i}>0$.
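(Side note, not part of the original thread. Assuming 20.7 is the usual "diagonal" map $h(\mathbf{x}) = (a_1x_1 + b_1,\, a_2x_2 + b_2,\, \ldots)$ on $\mathbb{R}^\omega$ with the uniform metric, and writing $d(x,y) = \sup_i |x_i - y_i|$ for short, glossing over the $\min(\cdot,1)$ truncation, the two conditions come from the Lipschitz estimates
$$d\big(h(x),h(y)\big) \le \Big(\sup_i a_i\Big)\, d(x,y), \qquad d\big(h^{-1}(x),h^{-1}(y)\big) \le \Big(\inf_i a_i\Big)^{-1} d(x,y),$$
where $h^{-1}(y) = \big((y_i - b_i)/a_i\big)_i$. So $\sup_i a_i < +\infty$ makes $h$ continuous, and $\inf_i a_i > 0$ additionally makes $h^{-1}$ continuous.)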
Note, the map in this exercise is often called a "diagonal operator". So you can search it by that name | 2017-11-20 07:52:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9622913599014282, "perplexity": 1013.4427304289933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805923.26/warc/CC-MAIN-20171120071401-20171120091401-00365.warc.gz"} |
https://en.wikipedia.org/wiki/Quasiconvex_function | # Quasiconvex function
A quasiconvex function that is not convex.
A function that is not quasiconvex: the set of points in the domain of the function for which the function values are below the dashed red line is the union of the two red intervals, which is not a convex set.
The probability density function of the normal distribution is quasiconcave but not concave.
In mathematics, a quasiconvex function is a real-valued function defined on an interval or on a convex subset of a real vector space such that the inverse image of any set of the form $(-\infty,a)$ is a convex set. Informally, along any stretch of the curve the highest point is one of the endpoints. The negative of a quasiconvex function is said to be quasiconcave.
All convex functions are also quasiconvex, but not all quasiconvex functions are convex, so quasiconvexity is a generalization of convexity. Quasiconvexity and quasiconcavity extend to functions with multiple arguments the notion of unimodality of functions with a single real argument.
## Definition and properties
A function $f:S \to \mathbb{R}$ defined on a convex subset S of a real vector space is quasiconvex if for all $x, y \in S$ and $\lambda \in [0,1]$ we have
$f(\lambda x + (1 - \lambda)y)\leq\max\big\{f(x),f(y)\big\}.$
In words, if f is such that it is always true that a point directly between two other points does not give a higher value of the function than both of the other points do, then f is quasiconvex. Note that the points x and y, and the point directly between them, can be points on a line or more generally points in n-dimensional space.
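A minimal numerical sketch of this definition (my own code, illustrative only: it samples the inequality at random triples rather than proving anything):

```python
# Sample the quasiconvexity inequality f(t x + (1-t) y) <= max(f(x), f(y)).
import math
import random

def seems_quasiconvex(f, lo, hi, trials=10_000):
    for _ in range(trials):
        x, y = random.uniform(lo, hi), random.uniform(lo, hi)
        t = random.uniform(0.0, 1.0)
        if f(t*x + (1 - t)*y) > max(f(x), f(y)) + 1e-12:
            return False               # found a violating triple
    return True                        # no violation found (not a proof)

print(seems_quasiconvex(math.floor, -5, 5))      # True: floor is quasiconvex
print(seems_quasiconvex(lambda v: -v*v, -5, 5))  # False: -v^2 peaks in the interior
```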
A quasilinear function is both quasiconvex and quasiconcave.
The graph of a function that is both concave and quasi-convex on the nonnegative real numbers.
An alternative way (see introduction) of defining a quasi-convex function $f(x)$ is to require that each sublevel set $S_\alpha(f) = \{x \mid f(x) \leq \alpha\}$ is a convex set.
If furthermore
$f(\lambda x + (1 - \lambda)y)<\max\big\{f(x),f(y)\big\}$
for all $x \neq y$ and $\lambda \in (0,1)$, then $f$ is strictly quasiconvex. That is, strict quasiconvexity requires that a point directly between two other points must give a lower value of the function than one of the other points does.
A quasiconcave function is a function whose negative is quasiconvex, and a strictly quasiconcave function is a function whose negative is strictly quasiconvex. Equivalently a function $f$ is quasiconcave if
$f(\lambda x + (1 - \lambda)y)\geq\min\big\{f(x),f(y)\big\}.$
and strictly quasiconcave if
$f(\lambda x + (1 - \lambda)y)>\min\big\{f(x),f(y)\big\}$
A (strictly) quasiconvex function has (strictly) convex lower contour sets, while a (strictly) quasiconcave function has (strictly) convex upper contour sets.
A function that is both quasiconvex and quasiconcave is quasilinear.
A particular case of quasi-concavity, if $S \subset \mathbb{R}$, is unimodality, in which there is a locally maximal value.
## Applications
Quasiconvex functions have applications in mathematical analysis, in mathematical optimization, and in game theory and economics.
### Mathematical optimization
In nonlinear optimization, quasiconvex programming studies iterative methods that converge to a minimum (if one exists) for quasiconvex functions. Quasiconvex programming is a generalization of convex programming.[1] Quasiconvex programming is used in the solution of "surrogate" dual problems, whose biduals provide quasiconvex closures of the primal problem, which therefore provide tighter bounds than do the convex closures provided by Lagrangian dual problems.[2] In theory, quasiconvex programming and convex programming problems can be solved in reasonable amount of time, where the number of iterations grows like a polynomial in the dimension of the problem (and in the reciprocal of the approximation error tolerated);[3] however, such theoretically "efficient" methods use "divergent-series" stepsize rules, which were first developed for classical subgradient methods. Classical subgradient methods using divergent-series rules are much slower than modern methods of convex minimization, such as subgradient projection methods, bundle methods of descent, and nonsmooth filter methods.
### Economics and partial differential equations: Minimax theorems
In microeconomics, quasiconcave utility functions imply that consumers have convex preferences. Quasiconvex functions are important also in game theory, industrial organization, and general equilibrium theory, particularly for applications of Sion's minimax theorem. Generalizing a minimax theorem of John von Neumann, Sion's theorem is also used in the theory of partial differential equations.
## Preservation of quasiconvexity
### Operations preserving quasiconvexity
• non-negative weighted maximum of quasiconvex functions (i.e. $f = \max \left\lbrace w_1 f_1 , \ldots , w_n f_n \right\rbrace$ with $w_i$ non-negative)
• composition with a non-decreasing function (i.e. $g : \mathbb{R}^{n} \rightarrow \mathbb{R}$ quasiconvex, $h : \mathbb{R} \rightarrow \mathbb{R}$ non-decreasing, then $f = h \circ g$ is quasiconvex)
• minimization (i.e. $f(x,y)$ quasiconvex, $C$ convex set, then $h(x) = \inf_{y \in C} f(x,y)$ is quasiconvex)
### Operations not preserving quasiconvexity
• The sum of quasiconvex functions defined on the same domain need not be quasiconvex: in other words, if $f(x), g(x)$ are quasiconvex, then $(f+g)(x) = f(x) + g(x)$ need not be quasiconvex (a concrete counterexample is sketched just after this list).
• The sum of quasiconvex functions defined on different domains (i.e. if $f(x), g(y)$ are quasiconvex, $h(x,y) = f(x) + g(y)$) need not be quasiconvex. Such functions are called "additively decomposed" in economics and "separable" in mathematical optimization.
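A concrete instance of the failure noted in the first point (my example, not from the article): $f(x) = x^3$ and $g(x) = -x$ are monotonic, hence quasiconvex, but their sum is not.

```python
# x^3 and -x are monotonic (so quasiconvex), but x^3 - x is not:
# its sublevel set {x : x^3 - x <= 0} = (-inf, -1] U [0, 1] is not convex.
s = lambda x: x**3 - x

x, y, t = -1.0, 0.0, 0.5
mid = t*x + (1 - t)*y                 # -0.5 lies between x and y
print(s(mid), max(s(x), s(y)))        # 0.375 > 0.0: the inequality fails
```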
## Examples
• Every convex function is quasiconvex.
• A concave function can be quasiconvex. For example, $x \mapsto \log(x)$ is concave, and it is quasiconvex.
• Any monotonic function is both quasiconvex and quasiconcave. More generally, a function which decreases up to a point and increases from that point on is quasiconvex (compare unimodality).
• The floor function $x\mapsto \lfloor x\rfloor$ is an example of a quasiconvex function that is neither convex nor continuous.
• If $x \mapsto f(x)$ and $y \mapsto g(y)$ are positive convex decreasing functions, then $(x,y) \mapsto f(x)g(y)$ is quasiconvex.
## References
1. ^ Di Guglielmo (1977, pp. 287–288): Di Guglielmo, F. (1977). "Nonconvex duality in multiobjective optimization". Mathematics of Operations Research 2 (3): 285–291. doi:10.1287/moor.2.3.285. JSTOR 3689518. MR 484418.
2. ^ Di Guglielmo, F. (1981). "Estimates of the duality gap for discrete and quasiconvex optimization problems". In Schaible, Siegfried; Ziemba, William T. Generalized concavity in optimization and economics: Proceedings of the NATO Advanced Study Institute held at the University of British Columbia, Vancouver, B.C., August 4–15, 1980. New York: Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers]. pp. 281–298. ISBN 0-12-621120-5. MR 652702.
3. ^ Kiwiel, Krzysztof C. (2001). "Convergence and efficiency of subgradient methods for quasiconvex minimization". Mathematical Programming (Series A) 90 (1) (Berlin, Heidelberg: Springer). pp. 1–25. doi:10.1007/PL00011414. ISSN 0025-5610. MR 1819784. Kiwiel acknowledges that Yuri Nesterov first established that quasiconvex minimization problems can be solved efficiently.
• Avriel, M., Diewert, W.E., Schaible, S. and Zang, I., Generalized Concavity, Plenum Press, 1988.
• Crouzeix, J.-P. (2008). "Quasi-concavity". In Durlauf, Steven N.; Blume, Lawrence E. The New Palgrave Dictionary of Economics (Second ed.). Palgrave Macmillan. doi:10.1057/9780230226203.1375.
• Singer, Ivan Abstract convex analysis. Canadian Mathematical Society Series of Monographs and Advanced Texts. A Wiley-Interscience Publication. John Wiley & Sons, Inc., New York, 1997. xxii+491 pp. ISBN 0-471-16015-6 | 2015-12-02 02:30:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 32, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8475003242492676, "perplexity": 852.4155513308864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398525032.0/warc/CC-MAIN-20151124205525-00001-ip-10-71-132-137.ec2.internal.warc.gz"} |
http://forum.allaboutcircuits.com/threads/thevenin-equivelant-circuit.77564/ | # Thevenin equivelant circuit
Discussion in 'Homework Help' started by Erica, Nov 29, 2012.
1. ### Erica
See attached steps I used for obtaining an open-circuit voltage Vab. The voltage I got is zero.
Did I make any mistakes on this?
2. ### The Electrician
No. Have you got a calculator that can do complex arithmetic?
3. ### WBahn
One of the beautiful things about circuit analysis is that there is almost always a way to check your work.
Once you got this result and question whether it could possibly be right (which is a reasonable reaction, so good for you), you can set it up a bit differently to directly answer that question. If the voltages at 'a' and 'b' are the same, that means that the two voltage dividers have to have the same ratio, so the question becomes:
$$\frac{j10\ \Omega}{5\ \Omega+j10\ \Omega}\;\overset{?}{=}\;\frac{6\ \Omega}{6\ \Omega-j3\ \Omega}$$
Notice how I track the units above. Get in the habit of doing that. I originally kept the units to the very end, but decided that it made things a bit clearer if I let them cancel out. Note that they DID cancel, I didn't just decide to drop them.
Simplifying the fractions above
$$\frac{j2}{1+j2}\;\overset{?}{=}\;\frac{2}{2-j1}$$
Dividing both sides by 2
$$\frac{j1}{1+j2}\;\overset{?}{=}\;\frac{1}{2-j1}$$
Multiply the RHS by j/j
$$\frac{j1}{1+j2}\;\overset{?}{=}\;\frac{j1}{1+j2}$$
So, yes, the Thevenin voltage will be identically zero.
This is actually a nice problem. I would not have expected it was possible from a casual glance. Only after seeing the result and looking at the problem with that in mind does it become obvious that this is, in fact, the case. There are a couple of ways of seeing this. If you were asked on a quiz to describe, in words, why this is possible, what would you say?
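(Editorial aside, not from the thread: the same divider check in Python's complex arithmetic, using the impedance values quoted above:)

```python
# Both voltage dividers have the same ratio, so V_ab = 0.
za_top, za_bot = 10j, 5 + 10j      # j10 over (5 + j10)
zb_top, zb_bot = 6, 6 - 3j         # 6 over (6 - j3)

ratio_a = za_top / za_bot
ratio_b = zb_top / zb_bot

print(ratio_a, ratio_b)            # each 0.8 + 0.4j
print(abs(ratio_a - ratio_b))      # essentially 0 (up to rounding): V_ab = 0
```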
4. ### Erica
Thanks for your help. This was a problem in a practice exam. The problem also asked to determine a load (Zload) to be connected between terminals a and b to get the maximum power output in Zload.
There would be no power output to Zload if the terminal voltage is zero. It appears this problem is questionable.
5. ### WBahn
While you make a very valid point (and one that I recommend you point out in your write-up), the Zload that produces maximum power transfer to the load is dependent only on the effective impedance of the source and not on the thevenin voltage. So, for the purposes of finding that value of Zload, assume that the thevenin voltage is nonzero. | 2016-10-28 18:33:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 4, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6632996201515198, "perplexity": 848.967254663267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988725451.13/warc/CC-MAIN-20161020183845-00229-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://economics.stackexchange.com/questions/28959/why-is-marginal-cost-price-better-than-marginal-cost-price-for-maximizing-pr | # Why is Marginal Cost = Price better than Marginal Cost > Price for maximizing profit? [closed]
Isn't MC>P a better aim? Given the revenue you earn from each unit is more than cost in producing each unit?
• I think you mean MC < P, don't you? i.e. marginal cost less than price? (not greater than) – EnergyNumbers Apr 24 '19 at 5:31
• Also, aim in terms of what...? My aim is to have MC = $0, P = $10 billion and Q = 1, but perhaps that is not possible given the model? – Giskard Apr 24 '19 at 5:45
• The MC is not the cost of producing each unit. Rather, it is the change in total cost when you produce one more unit. – David Apr 24 '19 at 8:28
If $$MC < P$$, then the producer would want to produce one more unit of the good.
If $$MC>P$$, then she would want to produce one less unit of the good.
If $$MC=P$$, then she is maximizing her profits.
Your intuition is correct (assuming that you mean $$MC < P$$ rather than $$MC > P$$ as you wrote). For a given level of production, profits are indeed higher when $$MC < P$$ than when $$MC = P$$, as you'd expect from an increase in the price holding everything else constant.
So why do we say that a firm in a competitive market maximizes profit by setting $$P = MC$$? Because we're not looking at an increase in the price holding everything else constant, we're looking at a change in the quantity, holding price constant.
It may help to think first about the traditional model of monopoly, where a monopolist will maximize profit by choosing a quantity $$Q$$ that sets $$MC = MR$$, i.e. marginal cost = marginal revenue. Then, because they're a monopolist, they can sell that quantity $$Q$$ at a price $$P > MR$$ (and so we get $$P > MR = MC$$ or $$P > MC$$). This is because the monopolist has some control over their price: they can choose to go high price/low quantity or low price/high quantity. In the process of selling an additional unit, their revenue goes up ($$MR$$) by less than the price ($$MR < P$$), since they're moving towards the "low price/high quantity" end of the scale - they get paid the price $$P$$ by selling an additional unit, but they had to lower their price too and so make less on the other units they sell, making $$MR < P$$.
That takes us back to the competitive market. In the competitive market, each firm has no control over the price and so there's no price/quantity tradeoff - whatever quantity they pick, the price stays the same. So, in a competitive market, $$P = MR$$ by definition. So when they maximize their profit by choosing a quantity that sets $$MR = MC$$, since $$P = MR$$ we get $$P = MR = MC$$ or $$P = MC$$.
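A toy numeric illustration (mine, not the answer's): a price-taker with rising marginal cost earns the most exactly where MC meets P.

```python
# Price-taking firm with cost C(q) = q^2, so MC(q) = 2q.
# At P = 10, profit P*q - C(q) peaks where MC = P, i.e. q = 5.
P = 10.0
cost = lambda q: q**2
profit = lambda q: P*q - cost(q)

qs = [q / 10 for q in range(0, 101)]   # scan quantities in [0, 10]
best = max(qs, key=profit)
print(best, profit(best))              # 5.0, 25.0
```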
Intuitively, the reason the firm in the competitive market doesn't shoot for $$P > MC$$ is that the price is real darn stubborn. You can't coax it to go higher by picking a lower quantity (like you could if you were a monopoly). So it's not "I'd like to pick a price higher than my $$MC$$" because you can't pick a price. Instead, it's "I may as well continue producing more and more until my $$MC$$ meets the price". So, for a firm in a competitive market, profit is maximized at $$P = MC$$. | 2020-02-27 07:15:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 27, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5352038145065308, "perplexity": 4207.403958893659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146665.7/warc/CC-MAIN-20200227063824-20200227093824-00528.warc.gz"} |
https://www.groundai.com/project/modelling-top-partner-vector-resonance-phenomenology/ | Contents
hep-ph/***
Modelling top partner-vector resonance phenomenology
Juan Yepes and Alfonso Zerwekh
Department of Physics and Centro Científico-Tecnológico de Valparaíso
Universidad Técnica Federico Santa María, Valparaíso, Chile
We have analysed the observable consequences of the interactions of spin-1 resonances coupled to the invariant fermionic currents that arise in a Composite Higgs set-up. The phenomenology entailed by this approach is thoroughly analysed via heavy resonance production and the decay modes that are explored along a viable resonance mass range. Additionally, the production of double- and single-partner final states has been scanned along the partner mass scale. QCD drives such production, together with SM gauge and Higgs exchange, plus the intermediation of charged and neutral resonances. Non-zero modifications are induced as long as extra fermion-resonance effects are accounted for. Finally, the recent LHC searches for vector-like quark production in pp-collisions at 13 TeV have been imposed to exclude regions of the parameter spaces underlying our framework. Specifically, we explore the allowed regions by bounding the decays of a heavy vector-like quark into the bW-channel according to the latest experimental limits. Generically, the impact of the extra fermion-resonance couplings substantially reduces the permitted regions, leading us to roughly estimate the sensitivity of the parametric dependence in light of exotic matter interactions.
## 1 Introduction
Despite the Higgs discovery at the LHC [1, 2], the long-standing Hierarchy Problem remains to be solved. Healing such UV sensitivity of the Higgs mass demands new dynamics beyond the Standard Model (BSM), characterized by an energy close to the electroweak (EW) scale. The stabilization of the EW regime may be achieved by postulating the existence of new particles armed with the same gauge quantum numbers as the top quarks. Exact cancellations among the virtual contributions of the new particles and those from the top quarks will restore the UV insensitivity of the Higgs mass. New physics (NP) states exhibiting this property are generically named top partners. In some BSM frameworks such partners might be scalar quarks, as in the well known supersymmetry, or vector-like fermions [3, 4] as in composite Higgs models (CHMs) [5, 6, 7, 8, 9, 10, 11, 12, 13]. Vector-like quarks are hypothetical spin-1/2 particles whose left- and right-handed components transform in the same way under the SM symmetries. They are the simplest example of coloured fermions still allowed by experimental data, extensively analysed in the literature [14, 15, 16, 17, 18, 19]. Complementarily, these models have been consistently equipped with exotic spin-0 and spin-1 resonances at the TeV scale, whose impact on the pseudo Nambu-Goldstone boson (PNGB) scattering, and then on the high-energy vector boson scattering, has been thoroughly studied [20].
The aim of this work is to explore the low energy implications of the interplay among three matter sectors: elementary, composite partners and spin-1 resonances in a CHM. We have parametrised such interactions through couplings of the vector resonance $\rho$, here assumed to consist of spin-1 triplets of $SU(2)_L$ and $SU(2)_R$, to a set of invariant fermionic currents and tensors presented in this analysis. Such invariants cover all the structures built upon the SM elementary sector together with the top partners permitted by the unbroken $SO(4)$, concretely, a fourplet $\Psi_4$ and a singlet $\Psi_1$, naturally sourced by the decomposition rule $\mathbf{5} = \mathbf{4} \oplus \mathbf{1}$ under the unbroken group and encoded through
$$\Psi_4=\frac{1}{\sqrt{2}}\begin{pmatrix} i B - i X_{5/3} \\ B + X_{5/3} \\ i T + i X_{2/3} \\ -T + X_{2/3} \end{pmatrix},\qquad \Psi_1=\widetilde{T}. \tag{1.1}$$
The fourplet is decomposable into two doublets $(T,B)$ and $(X_{5/3},X_{2/3})$ of hypercharge $1/6$ and $7/6$ respectively. The former has the same quantum numbers as the SM quark doublet, whilst the latter contains a state of exotic charge $5/3$, $X_{5/3}$, plus another top-like quark $X_{2/3}$. The singlet representation entails only one exotic top-like state, denoted in here as $\widetilde{T}$. On the other hand, the elementary sector will be shaped according to the partial compositeness mechanism instead, via the Goldstone symmetry breaking Lagrangian
$$\mathcal{L}_{mix}=\sum_q y\,\bar{q}\,\mathcal{O}_q. \tag{1.2}$$
The strong sector operator $\mathcal{O}_q$ transforms in one of the $SO(5)$-representations, determining thus two choices for the elementary sector embeddings: either a fundamental $\mathbf{5}$ or a $\mathbf{14}$ representation. In the former scenario, both fermion chiralities own elementary representatives coupled to the strong sector through $\mathbf{5}$-plets
$$q^5_L=\frac{1}{\sqrt{2}}\left(i d_L,\; d_L,\; i u_L,\; -u_L,\; 0\right)^T,\qquad u^5_R=\left(0,\,0,\,0,\,0,\,u_R\right)^T, \tag{1.3}$$
whereas in the latter the right-handed quark enters as a totally composite state, arising itself from the strong operator at low energies, with the fields¹ (¹In both cases the representations $\mathbf{5}$ and $\mathbf{14}$ have the same $U(1)_X$-charge $2/3$, allowing to reproduce the correct electric charge of the top. The doublet has an isospin assignment providing thus a protection from large deformations of the $Z b_L \bar{b}_L$-couplings [21, 22].)
$$q^{14}_L=\frac{1}{\sqrt{2}}\begin{pmatrix} 0&0&0&0&i d_L\\ 0&0&0&0&d_L\\ 0&0&0&0&i u_L\\ 0&0&0&0&-u_L\\ i d_L&d_L&i u_L&-u_L&0 \end{pmatrix},\qquad u^1_R. \tag{1.4}$$
All in all, the previous matter content will frame four models, each of them generically described at the Lagrangian level through
$$\mathcal{L}=\mathcal{L}_{elem}+\mathcal{L}_{comp}+\mathcal{L}_{mix}. \tag{1.5}$$
This picture will be coupled a posteriori to the vector resonance $\rho$, whose description as triplet representations $\rho_L$ and $\rho_R$ of $SU(2)_L$ and $SU(2)_R$ respectively will follow the well known vector formalism [23]. All these Lagrangians will be thoroughly analysed along the text. The coupling to a hypothetical scalar field is postponed to a future analysis [24].
Top quark physics at CHMs has been extensively studied [16, 25, 26], with general flavour physics analyses [27, 28, 29] considered in the context of top partner sectors [30], whilst spin-0 and spin-1 resonances have been considered in CHMs [20] with updated analyses [31, 32]. Our discussion will be based on the previous studies [30, 25], extended up to a simple approach for effective top partner-vector resonance interplay proposed in [33]. The phenomenology entailed by the latter approach is thoroughly analysed in here, where the heavy resonance production and its decay modes are explored along a viable range for the resonance mass and for a given parameter choice in our model. Likewise, the production of double and single-partner final states has been scanned along the partner mass scale in this work, and it turns out to be controlled by a set of model-dependent couplings here provided. QCD drives the double production, as well as SM gauge, Higgs, and $\rho$-mediated processes. The $\rho$-mediated processes also appear for the single production in the case of charged final states. QCD pair production is completely model-independent, although non-zero parametric-dependent modifications are induced as soon as extra fermion-resonance effects are accounted for. Non-zero contributions arise for all the scenarios, but are considerably bigger at the fourplet models. The combined effect of the fermion-resonance rotation as well as the number of additional fermionic currents determines such behaviour for these models.
Finally, the recent LHC searches for vector-like quark production in pp-collisions at 13 TeV [34] have been imposed to exclude regions of the parameter spaces underlying our framework. Specifically, we explore the allowed regions by bounding the corresponding decays according to the latest experimental limits. Generically, the impact of the extra fermion-resonance couplings treated here will substantially reduce the permitted regions, leading us to roughly estimate the sensitivity of the parametric dependence in light of new exotic matter interactions.
This manuscript is organised as follows: introduction of the PNGBs for the assumed CHM, the vector resonance sector and its generic interplay with the elementary-composite sector in Section 2. Heavy resonance production and decays in Section 3. Top partner production mechanisms are introduced in Section 4 and discussed in detail in 4.1-4.2. The latest LHC searches on vector-like quark production are translated into parameter spaces associated to our models in Section 5. The impact of the additional fermion-resonance interactions is thoroughly studied along the text. The concluding summary is presented in Section 6.
## 2 Assumptions and set-up
One matter sector of our framework consists of the composite sector, entailing a composite Higgs boson and other composite resonances. The CCWZ formalism [35] postulates the Higgs as a PNGB of the minimal global symmetry $SO(5)$ [12], spontaneously broken to $SO(4)$ by the strong sector at the scale $f$. Four massless PNGBs are generated, yielding thus a Higgs doublet². (²Hence the Higgs is exactly massless unless the strong sector is coupled to some source of an explicit $SO(5)$-breaking.) An additional $U(1)_X$ factor is introduced in order to reproduce the proper SM hypercharge $Y = T^3_R + X$. The PNGBs enter through the Goldstone matrix
$$U=\exp\!\left(i\,\frac{\sqrt{2}}{f}\,\Pi^{\hat{a}}\,T^{\hat{a}}\right), \tag{2.1}$$
where $T^{\hat{a}}$ are the coset $SO(5)/SO(4)$-generators, whilst $\Pi^{\hat{a}}$ and $f$ are the PNGB fields and the decay constant respectively. Henceforth $T^{\hat{a}}$ will stand for the coset generators, while $T^{a}$ for the unbroken ones, all of them defined in Appendix A.
Additionally, the elementary sector contains copies of all the SM field sector except for the Higgs, transforming under the SM gauge symmetry group. This sector is not $SO(5)$ invariant, therefore the one-loop effective potential triggered by the elementary-composite interactions allows the Higgs to pick a mass, fixing thus its vacuum expectation value (VEV) in an EW-breaking direction. The unbroken $SO(4)$ contains the SM symmetry whose breaking will be triggered via a non-zero Higgs VEV $v = 246$ GeV, measuring together with the breaking scale $f$ the degree of tuning of the scalar potential through the ratio [12]
$$\xi=\frac{v^2}{f^2}. \tag{2.2}$$
Generically, the value of $f$ must be large to suppress NP effects, but not too far from $v$ to maintain a tolerable tuning. Since $\xi$ controls low energy SM departures, it cannot be too large. Electroweak precision tests suggest an upper bound on $\xi$, corresponding to a lower bound on $f$ of several hundred GeV. More stringent constraints on $\xi$ have been reported previously, following the current 95% combined limit from direct production of either the charged or the neutral $\rho$ at the LHC [36]. Those limits allow small $\xi$, or even smaller values, for a vector resonance mass in the TeV range. Such small values might be directly tested through single Higgs production at the LHC, reaching larger precision via double Higgs processes at CLIC, and should be compared with indirect bounds from EW precision data. In fact, by including only the tree-level contributions to $S$ from the $\rho$ exchange [20] and the 1-loop IR effect from the modified Higgs couplings, it is possible to exclude a sizeable region at 95% CL, with the bound tightening in the infinite $\rho$-mass case. Having no other contributions to the oblique parameters, resonance masses in the TeV range are already excluded even for very small $\xi$ (see [36] for more details). Nonetheless, slight modifications to the EW $T$ parameter shift the 95% exclusion boundary in such a manner that lower resonance masses are still viable. On the other hand, the $T$ parameter is sensitive both to the cutoff and to the composite resonance scale, thus the inclusion of light fermionic resonances may relax those stringent constraints in minimal models with fermionic fourplet resonances [30]. More exotic scenarios, like the nineplet case, would lead to more stringent values in agreement with the CL bounds from the $T$ parameter [30]. For the present work we will test moderate values of $\xi$, as they are compatible with the latter EWPT bounds, with the vector resonance direct production bounds at the LHC, as well as with the expected single Higgs production at the LHC and the double Higgs production at CLIC. In addition, those values are inside the domain of validity of the scenario, and they will be assumed henceforth.
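For orientation, the relation (2.2) can be inverted for sample values of $\xi$ (simple arithmetic with $v = 246$ GeV, added here for illustration; the specific $\xi$ benchmarks below are ours, not the paper's):
$$f=\frac{v}{\sqrt{\xi}}\quad\Rightarrow\quad f\big|_{\xi=0.1}\simeq 778\ \text{GeV},\qquad f\big|_{\xi=0.2}\simeq 550\ \text{GeV}.$$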
In this work we will cover all the possible couplings emerging from the interplay among the top partner sector and the composite operators sourced by the strong regime. The underlying invariance will prescribe the interplaying interactions via the generic Lagrangian
$$\mathcal{L}_{int}=\mathcal{L}_{M}+\sum_{\chi=L,R}\left(\mathcal{L}_{\rho_\chi}+\mathcal{L}_{M+\rho_\chi}+\ldots\right) \tag{2.3}$$
with $M$ labelling each one of the models arising from the assumed fermionic matter content
$$M=M_{\Psi+q}=\left\{\textbf{M4+5},\ \textbf{M4+14},\ \textbf{M1+5},\ \textbf{M1+14}\right\}. \tag{2.4}$$
$\mathcal{L}_M$ is generically encoded by (1.5), whilst $\mathcal{L}_{\rho_\chi}$ is
$$\mathcal{L}_{\rho_\chi}=-\frac{1}{4\,g_{\rho_\chi}^2}\,\rho^{\mu\nu}_{\chi}\rho_{\mu\nu\,\chi}+\frac{m_{\rho_\chi}^2}{2\,g_{\rho_\chi}^2}\left(\rho_{\mu\,\chi}-e_{\mu\,\chi}\right)^2 \tag{2.5}$$
with a compact notation for the field strengths, and with the internal sum over the unbroken generator indices (defined in A) assumed. The third Lagrangian in (2.3) encodes fermion currents coupled to the spin-1 resonances, completely provided for the first time in [33] and generically defined as
$$\mathcal{L}_{M+\rho_\chi}=\frac{1}{\sqrt{2}}\,\alpha_\chi^{\,i}\,J^{\mu\,i}_{\chi}\left(\rho_{\mu\,\chi}-e_{\mu\,\chi}\right)+h.c., \tag{2.6}$$
with an implicit summation over the index $i$, spanning over all the possible currents and tensors that can be built upon the elementary, top partner and elementary-top partner sectors. Generic coefficients $\alpha^{\,i}_\chi$ have been introduced, correspondingly weighting each one of the fermion currents defined later on. The dots in (2.3) might account for higher dimensional operators (GB-scale suppressed), e.g. 2nd rank tensors made out of fermions and coupled to the resonance strength field, yielding contributions to the electric dipole moments at low energies (see [33] for more details). Such operators have been disregarded in here.
### 2.1 M4+5 and M1+5 coupled to ρ
The leading order Lagrangian corresponding to the $\mathbf{5}$-elementary fermions is given by the kinetic terms
$$\mathcal{L}_{elem}=i\,\bar{q}_L\slashed{D}\,q_L+i\,\bar{u}_R\slashed{D}\,u_R, \tag{2.7}$$
whereas both of the top partners $\Psi_4$ and $\Psi_1$ are introduced in (1.5) through the parametrization of [25] as
$$\mathcal{L}_{comp}=i\,\bar{\Psi}_4\slashed{\nabla}\,\Psi_4-M_4\,\bar{\Psi}_4\Psi_4+\left(\Psi_4\leftrightarrow\Psi_1\right)+\frac{f^2}{4}\,d^2_\mu+\left(i\,c_{41}\,(\bar{\Psi}_4)_i\,\gamma^\mu d^{\,i}_\mu\,\Psi_1+h.c.\right) \tag{2.8}$$
with $\slashed{\nabla}$ standing for $\gamma^\mu\nabla_\mu$. The Goldstone boson kinetic terms are contained in the $d^2$-term, while the coefficient $c_{41}$ controls the strength of the interplaying fourplet-singlet partner term, and it is expected to be of order one by power counting [37]. The covariant derivatives through (2.7)-(2.8), together with the $d_\mu$ and $e_\mu$-symbols, are defined in A. Finally, the mass terms mixing the elementary and top partner sectors are described via
$$\begin{aligned}\mathcal{L}_{mix}=\ & y_L f\,(\bar{q}^{\,5}_L U)_i\,(\Psi_{4R})^i+y_R f\,(\bar{u}^{\,5}_R U)_i\,(\Psi_{4L})^i+h.c.\\ &+\tilde{y}_L f\,(\bar{q}^{\,5}_L U)_5\,\Psi_{1R}+\tilde{y}_R f\,(\bar{u}^{\,5}_R U)_5\,\Psi_{1L}+h.c.\end{aligned} \tag{2.9}$$
Suitable insertions of the Goldstone matrix $U$ have been made in order to guarantee the non-linear invariance. The small mixings $y_{L,R}$ and $\tilde{y}_{L,R}$ trigger the Goldstone symmetry breaking, providing thus a properly low Higgs mass. The latter Lagrangian entails partially composite SM quarks, and it gives rise to quark mass terms as well as trilinear couplings contributing to the single production of top partners. Their mass spectrum, couplings, implied phenomenology, production mechanisms and relevant decay channels at LHC searches are thoroughly analysed in [25] for the case of a totally composite top quark.
Altogether, the leading order composite and mixing Lagrangians contain seven parameters $\{M_4,\,M_1,\,c_{41},\,y_L,\,y_R,\,\tilde{y}_L,\,\tilde{y}_R\}$, aside from the Goldstone decay constant $f$. Six of them are arranged to reproduce the correct top mass plus the extra partner masses. Their expressions are reported in Appendix B.2.
The set of fermion currents constructable for both of these models, firstly provided in [33], is listed in Table 1 (left column). It is worth commenting that no currents built upon elementary right-handed quarks are allowed for these models, as the corresponding current turns out to be vanishing by definition. Check [33] for more details on this and related issues concerning heavy vector resonances, their equations of motion, as well as analogous results for the top partner fields.
### 2.2 M4+14 and M1+14 coupled to ρ
The elementary kinetic Lagrangian corresponding to this model is straightforwardly written
$$\mathcal{L}_{elem}=i\,\bar{q}_L\slashed{D}\,q_L, \tag{2.10}$$
whereas the composite counterpart is reshuffled as
$$\mathcal{L}_{comp}\;\to\;\mathcal{L}_{comp}+i\,\bar{u}_R\slashed{D}\,u_R+\left(i\,c_{41}\,(\bar{\Psi}_4)_i\gamma^\mu d^{\,i}_\mu\Psi_1+i\,c_{4u}\,(\bar{\Psi}_4)_i\gamma^\mu d^{\,i}_\mu u_R+h.c.\right), \tag{2.11}$$
where $\mathcal{L}_{comp}$ corresponds to the strong sector Lagrangian of (2.8), augmented by those terms mixing the fourplet with the singlet and with the totally composite $u_R$ through the coefficients $c_{41}$ and $c_{4u}$ respectively. The elementary and top partner sectors are mixed via
$$\mathcal{L}_{mix}=y_L f\,(U^t\,\bar{q}^{\,14}_L\,U)_{i5}\,(\Psi_{4R})^i+\tilde{y}_L f\,(U^t\,\bar{q}^{\,14}_L\,U)_{55}\,\Psi_{1R}+y_R f\,(U^t\,\bar{q}^{\,14}_L\,U)_{55}\,u^1_R+h.c. \tag{2.12}$$
This case also involves seven parameters $\{M_4,\,M_1,\,c_{41},\,c_{4u},\,y_L,\,\tilde{y}_L,\,y_R\}$, five of them arranged to reproduce the correct top mass, plus four extra partner masses, as a degeneracy is implied and also manifested in the previous two models. Notice that a direct mixing coupling has been removed by a field redefinition. Table 1 lists the associated fermion currents (right column).
As we mentioned before, the assumption of spin-1 resonances brings in a mass scale $m_{\rho_\chi}$ below the cut-off of the theory at $\Lambda \sim 4\pi f$, entailing thus the coupling $g_{\rho_\chi}$. Likewise, the top partner mass scales $M_{4(1)}$, assumed here such that $M_{4(1)} < m_{\rho_\chi}$, also bring in the couplings $g_{4(1)}$. Hereinafter the linking relations
$$g_{\rho_\chi}\equiv\frac{m_{\rho_\chi}}{f},\qquad g_{4(1)}\equiv\frac{M_{4(1)}}{f} \tag{2.13}$$
will be used throughout. As is commonly argued in the literature, Goldstone scales in the range from roughly 500 GeV up to the TeV region, with couplings of order a few, are the most favoured by concrete models (see [25] and references therein).
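As a quick numerical illustration of the linking relations (2.13) (the input values below are invented, not benchmarks from the paper):

f = 800.0                  # Goldstone decay constant, GeV (illustrative)
m_rho = 2000.0             # heavy vector mass, GeV
M4, M1 = 1000.0, 1200.0    # fourplet / singlet partner mass scales, GeV

g_rho = m_rho / f          # eq. (2.13): resonance coupling -> 2.5
g4, g1 = M4 / f, M1 / f    # partner couplings -> 1.25, 1.5
print(g_rho, g4, g1)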
## 3 Heavy spin-1 production and decays
Concerning the vector resonance production, the role of spin-0 and spin-1 resonances in the PNGB scattering was studied in [20]. Their experimental searches [38] were explored in [36, 39, 40]. Recently, the impact of heavy triplet resonances at the LHC in dilepton, diboson and gauge-Higgs final states has been analysed (see [31, 41, 42] and references therein), constraining the vector resonance mass in the range 2.1-3 TeV. The latest searches [43] for heavy resonances decaying into a vector boson and a Higgs boson, in final states with charged leptons, neutrinos and quarks, have excluded resonance masses below 2 TeV at 95% confidence level. In order to explore the feasibility and potentiality of our scenarios, a broader mass range will be explored in here³. (³Previous analyses of the relevant spin-1 decay channels in a CHM, with top partners in the fundamental representation, were done in [44]. We will follow a similar treatment here, but analyse more deeply the departures induced by our extra fermion-resonance couplings along several spin-1 decay channels.) At the Lagrangian level, the vector resonance production is induced by the effective charged-neutral interactions
$$\mathcal{L}_{ud\rho^\pm_i}=-\frac{1}{\sqrt{2}}\,\bar{u}\,\slashed{\rho}^{\,+}_i\left(g^{u_L d_L}_{\rho^+_i}P_L+g^{u_R d_R}_{\rho^+_i}P_R\right)d+h.c., \tag{3.1}$$
$$\mathcal{L}_{ff\rho^0_i}=\sum_{f=u,d}\bar{f}\,\slashed{\rho}^{\,0}_i\left(g^{f_L f_L}_{\rho^0_i}P_L+g^{f_R f_R}_{\rho^0_i}P_R\right)f \tag{3.2}$$
for $i = L, R$. The different involved couplings depend directly on the weighting coefficients of (2.6) as well as on fermion-vector diagonalization effects, and are quite long to be reported here. The associated production cross sections through the processes $pp\to\rho^\pm$ and $pp\to\rho^0$ are computed from the latter Lagrangians by using MadGraph 5. Fig. 1 displays all the spin-1 production cross sections as functions of the resonance mass along the explored range, for all the aforementioned top-partner models and for a fixed choice of the weighting coefficients. The couplings $g_{\rho_\chi}$ and $g_{4(1)}$ are fixed following the prescription in (2.13), whereas the corresponding Yukawa couplings in (2.9) and (2.12) are suitably fixed to maintain the SM top quark mass at its experimentally observed value, either through its predicted value in (B.8) or (B.12) and by implementing the relations in (B.9). As can be seen from Fig. 1 (1st-2nd rows), one of the models is the most predominant in yielding either charged or neutral heavy resonances. In addition, a higher coefficient value enhances all the productions, except that of the neutral resonance in one model, where its production is diminished. Notice that whether the top partner is a fourplet or singlet, one elementary-embedding scenario favours higher production values than the other. Among the charged and neutral resonances, the dominant ones reach rough cross section values of order 20 pb and 10 pb respectively at 3 TeV.
The resonance production is compared with the production in the absence of the fermion-resonance current interactions of (2.6) in Fig. 1 (3rd-4th rows). Notice how remarkably the cross section values are enhanced by the presence of the fermion-resonance current interactions of (2.6) with non-zero weighting coefficients (dashed curves) with respect to the situation where they vanish (thick ones). In some cases such enhancement amounts to several orders of magnitude. The interactions of the heavy resonances with the SM fermions follow partly from the universal composite-elementary mixing, i.e. from the elementary component of the heavy spin-1 mass eigenstate. They exhibit a suppressed strength, becoming extremely small in the large resonance-coupling limit, as can be seen for larger masses in Fig. 1. Such a scenario changes as soon as the couplings of (2.6) are accounted for, and the fermion-resonance diagonalization effects are considered. All this clearly signals a feasible scenario for explaining future observations of heavy resonance production at higher energies, where the interactions encoded by (2.6) and Table 1 might help in determining the model and the strength of the involved effective terms.
Subsequent decays of the heavy resonance may occur into final states containing single and double top partners⁴, as well as into gauge pair and gauge-Higgs final states. (⁴In [45] it was shown how the existing LHC searches can constrain decays of spin-1 resonances into a top partner pair, which generally makes standard spin-1 searches, such as dilepton resonance searches, ineffective. We will examine here how such top partner pair channels are altered or enhanced once our additional fermion-resonance effects are switched on.) The fermionic decay channels are triggered by the effective terms
$$\mathcal{L}_{Xf\rho^\pm}=-\frac{1}{\sqrt{2}}\left[\sum_{f=u,d}\bar{X}\,\slashed{\rho}^{\,+}\left(g^{X_L f_L}_{\rho^+}P_L+g^{X_R f_R}_{\rho^+}P_R\right)f+\bar{X}\,\slashed{\rho}^{\,+}\left(g^{X_L X'_L}_{\rho^+}P_L+g^{X_R X'_R}_{\rho^+}P_R\right)X'\right]+h.c., \tag{3.3}$$
$$\mathcal{L}_{Xf\rho^0}=\sum_{f=u,d}\bar{X}\,\slashed{\rho}^{\,0}\left(g^{X_L f_L}_{\rho^0}P_L+g^{X_R f_R}_{\rho^0}P_R\right)f+h.c.+\bar{X}\,\slashed{\rho}^{\,0}\left(g^{X_L X_L}_{\rho^0}P_L+g^{X_R X_R}_{\rho^0}P_R\right)X, \tag{3.4}$$
whilst the cubic interactions involving one heavy resonance are encoded by
$$\mathcal{L}_{\rho^\pm WZ}=i\left(g^{(1)}_{\{\rho^+WZ\}}\,\rho^{+}_{\mu\nu}W^{-\mu}Z^{\nu}-g^{(2)}_{\{\rho^+WZ\}}\,W^{-}_{\mu\nu}\rho^{+\mu}Z^{\nu}+g^{(3)}_{\{\rho^+WZ\}}\,Z_{\mu\nu}\rho^{+\mu}W^{-\nu}+h.c.\right), \tag{3.5}$$
$$\mathcal{L}_{\rho^0WW}=i\left(g^{(1)}_{\{\rho^0WW\}}\,W^{+}_{\mu\nu}W^{-\mu}\rho^{0\,\nu}+h.c.\right)+\frac{i}{2}\,g^{(2)}_{\{\rho^0WW\}}\,\rho^{0}_{\mu\nu}W^{+\mu}W^{-\nu}, \tag{3.6}$$
$$\mathcal{L}_{\rho Vh}=g_{\rho^+Wh}\left(\rho^{+}_{\mu}W^{-\mu}h+h.c.\right)+g_{\rho^0Zh}\,\rho^{0}_{\mu}Z^{\mu}h, \tag{3.7}$$
where the second term in (3.3) suitably couples two different top partners to the spin-1 resonance. It is implied that the Lagrangians along (3.3)-(3.7) apply for both $\rho_L$ and $\rho_R$. The couplings involved in the fermionic decay channels entail diagonalization effects from both the gauge-resonance and the elementary-composite sectors, being thus too long to be reported here. Nonetheless, the ones contributing to the resonance-gauge and resonance-gauge-Higgs interactions only depend on a single diagonalization. In fact, these couplings can be extracted by using the Equivalence Theorem for a heavy resonance field. In this limit the leading contribution to the interaction comes from the longitudinal polarizations of the SM vector fields, and the overall strength equals that of the coupling of one $\rho$ to two NG bosons, up to small corrections. From (2.5) the strength of the $\rho$-interaction with two NG bosons is proportional to the resonance coupling, with a coefficient expected to be of order 1 according to naive dimensional analysis (NDA).
These features lead the heavy resonances to be strongly coupled to the composite states, i.e. the longitudinal polarizations of $W$ and $Z$ and the Higgs boson, while their coupling strength to the SM fermions is extremely suppressed in the large resonance-coupling limit. Nonetheless, this scenario changes as soon as the interactions in (2.6) are considered. Indeed, the fermion-resonance couplings are augmented and hence depend directly on the strength of the coefficients $\alpha^{\,i}_\chi$, as well as on the fermion and gauge-resonance diagonalization effects. Fig. 2 summarises all the previous remarks, where the branching fractions are compared for two different coefficient choices (thick-dashed curves). Notice that the branching fractions to $W^+W^-$ and $Zh$, as well as those to $W^\pm Z$ and $W^\pm h$, are equal to a very good approximation. This is implied by the Equivalence Theorem, which works well for the chosen values of the parameters. As expected, the branching ratios of the resonance to fermions are much smaller as a consequence of the suppressed couplings. Some remarks are in order:
• The absence of fermion-resonance currents entails dominant gauge pair and gauge-Higgs decay channels, while extremely suppressed or subdominant fermionic channels for the charged and neutral resonances (upper and lower panels).
• The scenario changes when the fermion-resonance currents are switched on. Indeed, the gauge pair and gauge-Higgs final states are still the relevant ones at lower resonance masses, becoming subdominant with respect to the top partner channels at higher masses. A single-partner channel turns out to be dominant along the explored mass range for the charged resonance decays, together with analogous neutral modes.
• The higher mass regime triggers other exotic channels to become dominant compared with the gauge pair and gauge-Higgs channels, e.g. double- and single-partner modes.
• Even with no fermion-resonance currents (thick curves), there are exotic fermionic modes still active, although less relevant than the gauge and gauge-Higgs channels. Such fermionic exotic modes receive important contributions when the couplings in (2.6) are included, some of them being enhanced by one to two orders of magnitude, or even three orders for some channels in the higher mass regime.
Similar comments apply to the product of the resonance production cross section times the corresponding branching ratio, not displayed here for brevity. Once the heavy resonances are produced, their decays can lead to the generation of either a single or a double quark partner in the final states. A fuller top partner production mechanism is triggered by bringing QCD, EW and Higgs-mediated interactions onto the stage.
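The dilution of any given branching ratio once extra channels open up is simple to see numerically (a hypothetical Python sketch; the partial widths below are invented placeholders, not values from the paper):

gamma_std = {"Wb": 0.5, "Zt": 0.25, "ht": 0.25}  # standard widths, arbitrary units
gamma_extra = {"rho_t": 0.3}                     # hypothetical extra resonance channel

total_std = sum(gamma_std.values())
total_all = total_std + sum(gamma_extra.values())

print(gamma_std["Wb"] / total_std)  # BR(Wb) = 0.50 with standard channels only
print(gamma_std["Wb"] / total_all)  # BR(Wb) ~ 0.38 once the extra channel opens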
## 4 Top partners production and decays
Since all the quark partners are coloured, their pair production at hadron colliders is QCD-driven, as shown in Fig. 3, and is furthermore completely model-independent and insensitive to the degree of compositeness of the associated SM quarks. Qualitatively, the top partner production is independent of whether both or only one multiplet is present in the effective theory.
### 4.1 Double Partner production
The production of double-partner final states is driven by QCD as well as by SM gauge, Higgs, and $\rho$-mediated processes for the case of neutral final states, as depicted in Fig. 3. The double production mechanism is controlled by the model-dependent couplings through (3.1)-(3.4) and by the analogous ones involving SM charged and neutral gauge fields correspondingly. QCD pair production is completely model-independent, although non-zero parametric-dependent modifications are induced as soon as extra fermion-resonance effects are accounted for. Fig. 4 gathers the double-partner production cross sections only for neutral final states, where we have constructed the pair cross sections for each value of the mass parameter by interpolation using MadGraph 5 simulations at the 14 TeV LHC, in all the models and for a fixed resonance mass. The prescription in (2.13) is assumed again for the couplings $g_{\rho_\chi}$ and $g_{4(1)}$. Comparison of two different situations (thick-dashed curves) reflects the impact on the production from the additional fermion-resonance effects regarded here. The latter effects may enhance double-partner production by one order of magnitude at the fourplet models, whereas vanishing contributions and tiny ones are obtained at the singlet scenarios. The combined effect of the fermion-resonance rotation as well as the smaller number of additional fermionic currents determines such behaviour for the latter models.
Notice how certain final states are mainly produced via proton-proton collisions in one of the fourplet models, as the involved quark partner masses are smaller than the corresponding ones in the other (see (B.8)-(B.12) and Fig. 9). Some modes do not distinguish the elementary embedding representation, as the involved partner masses are equal in both models. Nonetheless, as soon as the extra fermion-resonance effects of (2.6) are regarded, one model gets disfavoured compared with the other, due to the implied fermion-resonance diagonalization effects and the different number of fermionic currents in each model as well. The same comments apply qualitatively and quantitatively for the degenerate channels (see Appendix B.2). Generically, producing pairs of the lighter partners will be kinematically favoured with respect to double production of the heavier ones, because of their relatively higher masses. Arguments similar to the fourplet case work for the pair production of the singlet (Fig. 4), where the involved masses turn out smaller at one elementary embedding compared with the other (see Fig. 9, right plot), favouring the former scenario for its production in pairs.
### 4.2 Single Partner production
QCD may trigger the production of single-partner final states, together with SM gauge and $\rho^\pm$, $\rho^0$-mediated processes for the case of charged and neutral final states respectively (Fig. 5). These channels are gathered in Fig. 6, where one charged final state has been omitted for brevity. Important enhancements occur for the single-partner production at the fourplet model (Fig. 6, 1st row), as the kinematics of less massive final states is implied. The larger number of fermionic currents entering the stage also determines such an increase. Some cases do not obey this, though, like the neutral final mode of one model (3rd row left) and a charged channel of another (1st row right), where the combined fermion-resonance diagonalization effects roughly suppress the induced contributions from the additional interactions of (2.6). One channel is absent in one scenario because flavor-changing neutral couplings are forbidden in the charge $-1/3$ sector, as explained in [25]. Nonetheless, non-zero contributions arise for it as long as the extra fermion-resonance effects regarded here, together with diagonalization effects, are considered, as shown in Fig. 6 (1st row left).
Notice again how the single production of the singlet is dominant at one elementary embedding rather than the other, as the involved masses turn out smaller at the former model⁵. (⁵Top partner single production through loops mediated by the scalar singlet has recently been analysed in [46]. With reasonable coupling strengths, the production rate of a top partner, in association with the SM top, can dominate over top partner pair production at top partner masses higher than 1.5 TeV. See the reference for more details.) However, the situation may change once the extra fermion-resonance couplings are included, for instance in the charged mode (4th row right). Quantitatively, the final states containing the lighter partners will be largely produced, as they involve partners whose masses are smaller compared with the other partners (1st-3rd rows).
Concerning the singlet partner and the fourplet ones, their single production in association with a SM quark is partly driven by the Higgs boson, therefore suppressing the single production of the up and charm partners by the square of the SM-like up and charm Yukawa couplings, respectively⁶. (⁶Color octet resonances from the strong dynamics can favour the single production of singlet partners [47, 48].) Conversely, the large top mass makes the single production of the top partner one of the dominant mechanisms, especially at large top partner mass [49, 50]. It is worth noting that single production in association with an EW gauge boson or a Higgs boson is possible [51, 52], and is not regarded here. Additional contributions will play a role once the extra fermion-resonance couplings are accounted for, generically increasing the single-partner production cross sections. Like the production in pairs, the single production will be controlled by the effective interaction terms among the fermions and the SM charged and neutral gauge bosons, as well as by the model-dependent couplings along (3.1)-(3.4). These are computed analytically in our models, and they arise from the interactions reported in Appendices B.1-B.2 after performing the rotation to the physical basis of mass eigenstates. Since the rotation matrices can be expressed in a closed form, the explicit formulae for the couplings are straightforwardly derived. The result is rather involved and for this reason it will not be reported here; however, it is easily implemented in a Mathematica package.
Finally, some words concerning top partner decays are in order. The main channels are two-body decays to gauge bosons and third-family quarks. For the partners of charge $2/3$ and $-1/3$ also the decay to the Higgs boson is allowed, and competitive with the others in some cases. This originates after the rotation to the physical basis. The relevant couplings are encapsulated in (3.1)-(3.4) and in the analogous ones involving SM charged and neutral gauge fields correspondingly. They can be computed analytically, and therefore exact tree-level expressions for the partial widths and eventually for the branching fractions are obtained. In principle, cascade decays are also allowed. Exotic channels involving heavy resonances in the final state are theoretically allowed but less relevant, as they involve higher masses in the final states⁷. (⁷For a more detailed discussion on relevant decays see [25], and for a more recent update check [54, 55, 56]. Early discussions on the discovery potential of top partners in a realistic composite Higgs model with LHC data can be found in [57, 58].) Such decays arise in our models and, depending on the chosen parameters, they would either enhance or decrease some standard SM final states, and would strongly depend on the resonance mass spectrum as well as on the decaying partner mass. In a future work we will explore these issues and the flexibility entailed by the parametric dependence for the feasibility of exotic partner decay channels.
The constraints on the top partners that are inferred from available LHC searches of similar particles have been recently explored in [54, 55] by imposing direct bounds on heavy top-like quarks with standard and exotic decays. Constraints on the allowed parameter space of our models are obtained by the imposition of recent LHC partner searches. Specifically, we exclude regions of the parameter space in terms of $\xi$ and the mass scales $M_\Psi$ and $m_\rho$.
## 5 Parameter spaces and constraints
The most stringent experimental constraints on the partner masses from the direct searches had been derived in [59, 60]. In fact, by means of the pair production mechanism, driven mostly by QCD interactions, rough lower limits on the relevant partner masses were established. Experimental searches for singly produced partners [61] and searches for pair production feeding into the bounds on singly produced partners [25, 62, 63, 64] have been considered. Additionally, the nineplet case has been analysed, yielding TeV-scale bounds [65]. These bounds have been updated and refined following the latest ATLAS and CMS results [66, 34]. The search for the pair production of vector-like top quarks in final states with exactly one lepton, at least four jets and high missing transverse momentum has allowed the exclusion of masses below 870 GeV (890 GeV expected) and 1.05 TeV (1.06 TeV expected), for the singlet and doublet models respectively. The search was based on 13 TeV LHC collision data recorded by ATLAS in 2015 and 2016 (see [66] for more details).
More recently, CMS has released [34] the results of a search for vector-like quarks, with electric charges of 2/3 and -4/3 respectively, that are pair produced in pp interactions at 13 TeV and decay exclusively via the bW channel. Events were selected requiring a lepton and a neutrino from one W boson, and a quark-antiquark pair from the other. The selection requires a muon or electron, significant missing transverse momentum, and at least four jets. A kinematic fit assuming pair production of 2/3 or -4/3 electrically charged vector-like quarks was performed, and for every event a corresponding candidate quark mass was reconstructed.
Upper limits were set in [34] on the pair production cross sections as a function of the implied vector-like quark masses. By comparing these limits with the predicted theoretical cross section of the pair production, the production of 2/3 or -4/3 electrically charged vector-like quarks is excluded at 95% confidence level for masses below 1295 GeV (1275 GeV expected). More generally, the results set upper limits on the product of the production cross section and branching fraction for the bW-channel of any new heavy quark decaying to this mode. Such limits have been imposed for all of our models and are translated into exclusion regions for the parameter spaces spanned by $\xi$, $M_\Psi$ and $m_\rho$. We have analytically computed the relevant partial widths, including a heavy resonance in the final states for the total width, and also simulated through MadGraph 5 the pair production cross sections of the relevant partners at 13 TeV for the fourplet and singlet models respectively. Fig. 7 gathers the allowed parameter spaces $(\xi, M_\Psi)$ (1st-2nd plots) and $(\xi, m_\rho)$ (3rd-4th plots) for all the fourplet and singlet models, with a total decay width summing the standard modes up, augmented by the extra partner-resonance channels. Consequently, the branching ratio for any channel will also be dependent on the extra fermion-resonance interactions regarded here in (2.6). Their impact is explored by displaying two different situations: the dashed-border regions stand for the allowed parameter spaces assuming non-zero extra fermion-resonance couplings, whilst the other zones denote zero additional interactions. The heavy resonance mass has been fixed in the first and second plots, whereas the partner mass scale is fixed in the third and fourth plots. Some comments are in order:
• When accounting for extra couplings in (2.6), the allowed region is strongly constrained and bounded to tiny areas
$$\begin{aligned}\textbf{M4+5}:&\quad \xi\sim[0.1,\,0.15]\ \ \text{for}\ \ M_\Psi\sim[750,\,900]\ \text{GeV}\ \Rightarrow\ m_T\sim[1130,\,1366]\ \text{GeV}\\ \textbf{M1+14}:&\quad \xi\sim[0.05,\,0.1]\ \ \text{for}\ \ M_\Psi\sim[1300,\,1500]\ \text{GeV}\ \Rightarrow\ m_{\widetilde{T}}\sim[1055,\,1319]\ \text{GeV}\end{aligned}$$
The latter mass ranges are partly allowed by the recent limits [34] excluding masses below 1295 GeV at 95% confidence level. The $\xi$-ranges at both models are compatible with the EWPT bounds, the vector resonance direct production bounds at the LHC, as well as the expected single Higgs production at the LHC and the double Higgs production at CLIC (see the discussion in Section 2).
• Conversely, by switching off the extra fermion-resonance couplings, a broader parameter space is allowed and the previous ranges become relaxed. Certainly, intervals for $\xi$ compatible with experimental expectations are possible at both fourplet models, though becoming ruled out at \textbf{M4+5} as they entirely fall inside the exclusion limit of [34]. At \textbf{M4+14}, such bounds entail (see Fig. 9)
$$\textbf{M4+14}:\quad 0.05\lesssim\xi\lesssim 0.35\ \ \text{for}\ \ M_\Psi\gtrsim 1150\ \text{GeV} \tag{5.1}$$
favouring an extreme part of the obtained parameter space in Fig. 7. Likewise, the exclusion limit in [34] leads to
$$\begin{aligned}\textbf{M1+5}:&\quad 0.01\lesssim\xi\lesssim 0.15\ \ \text{for}\ \ M_\Psi\sim[500,\,930]\ \text{GeV},\\ \textbf{M1+14}:&\quad \xi\gtrsim 0.3\ \ \text{for}\ \ M_\Psi\sim[1350,\,1400]\ \text{GeV}\end{aligned}$$
allowing a small region for the associated parameter spaces at both models. Tiny $\xi$-values are allowed and still compatible with experimental constraints at both singlet models.
As a conclusion, the recent upper limits on top-like partner production permit part of the parameter spaces from \textbf{M4+14} and from the singlet models as well. By including additional fermion-resonance couplings, only a strongly bounded region at \textbf{M4+5} and \textbf{M1+14} remains. In this sense, those extra couplings are helpful in discarding or selecting models and in refining further their involved parameter space. An additional insight into the parametric freedom can be gained by fixing now the partner mass scale and letting the resonance mass vary. This is illustrated in Fig. 7 (3rd-4th plots), where the partner mass scale has been fixed a bit below the threshold for the exclusion limit [34]. We can infer:
• Before including the extra couplings, the parameter spaces are similar at both fourplet and singlet models, notoriously split into left- and right-handed regions, with slight differences at low and high resonance mass. All the left-handed regions are ruled out by the analyses in [31, 41, 42] and the experimental searches in [43], as they fall well below the lower limit of 2000 GeV for the resonance mass. On the other hand, the right-handed regions fall partly inside the resonance mass excluded region, while permitting a relatively large area for two of the models consistent with the feasible $\xi$-values.
• After turning on the additional couplings of (2.6), the parameter space for the fourplet model is slightly enlarged at the left-handed side towards low resonance mass, incompatible with experimental expectations. For the same model, the right-handed region shortens, leaving a small corner compatible with the range 2000-2100 GeV; numerically, a rather small additional area is also allowed. For the singlet scenarios, \textbf{M1+14} allows resonance masses compatible with the expectations, even for small $\xi$. In this case, the final permitted area is larger compared with the fourplet case.
Finally, a deeper insight into the parametric dependence is gained by fixing the EW and GB scales while letting the resonance and quark partner mass scales vary simultaneously. Fig. 8 displays the involved allowed areas. The influence of the extra fermion-resonance interactions proposed in this work is remarkably visible, especially when they tend to drive the permitted regions outside the excluded vector-like quark mass range. Although they become ruled out at one fourplet model by the expected resonance mass, a small window is still feasible in the case of the other. Likewise, a small region remains for \textbf{M1+14}. Conversely, when turning the extra interactions off, the latter model does not allow any region compatible with [34], though compatible with resonance mass expectations. At one embedding scenario no region remains, whilst at another a small band results compatible with the expectations. In summary, including the extra fermion-resonance couplings will allow some models in small windows, while the removal of those couplings will rule them out, leaving only a relatively small band elsewhere.
## 6 Summary
We have explored in this work the phenomenological signals arising from the interplay among three matter sectors: elementary, top partners and vector resonances in a Composite Higgs Model. The vector resonance $\rho$, here assumed to consist of spin-1 triplets of $SU(2)_L$ and $SU(2)_R$, is coupled to the invariant fermionic currents and 2nd rank tensors listed in Table 1, which were proposed for the first time in [33]. The top partners permitted by the unbroken $SO(4)$ are here restricted to the fourplet and singlet embeddings. Such matter content spans the four models in (2.4), each of them coupled to the $\rho$-resonance via the prescription (2.6) and subsequently scanned along their involved parametric dependence.
Heavy spin-1 production and decays have been thoroughly studied along a range of the resonance mass scale and for a given choice of model parameters in Fig. 1. One of the models is the most predominant in yielding either charged or neutral heavy resonances, while a higher …
https://www.cdt21.com/faq/what-are-the-frequencies-of-the-intermediate-frequency-if-stages-in-the-cdp-rx-02e-ep-cdp-rx-02f-cdp-rx-07m-mp-cdp-rx-05m-r/ | ## What are the frequencies of the intermediate frequency (IF) stages in the CDP-RX-02E/EP, CDP-RX-02F, CDP-RX-07M/MP, CDP-RX-05M-R?
The 1st IF is 21.7 MHz and the 2nd IF is 450 kHz.
https://docs.panda3d.org/1.10/python/programming/render-effects/compass | # Compass Effects
A CompassEffect causes a node to inherit its rotation (or pos or scale, if specified) from some other reference node in the graph, or more often from the root.
In its purest form, a CompassEffect is used to keep the node’s rotation fixed relative to the top of the scene graph, despite other transforms that may exist above the node. Hence the name: the node behaves like a magnetic compass, always pointing in the same direction.
As a couple of generalizing extensions, the CompassEffect may also be set up to always orient its node according to some other reference node than the root of the scene graph. Furthermore, it may optionally adjust any of pos, rotation, or scale, instead of necessarily rotation; and it may adjust individual pos and scale components. (Rotation may not be adjusted on an individual component basis, that's just asking for trouble.)
Be careful when using the pos and scale modes. In these modes, it’s possible for the CompassEffect to move its node far from its normal bounding volume, causing culling to fail. If this is an issue, you may need to explicitly set a large (or infinite) bounding volume on the effect node.
nodePath.setCompass()
If a NodePath is supplied to the setCompass call, it indicates the node to which the rotation will be kept relative (which is render by default).
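A slightly fuller usage sketch (the model path and node names below are hypothetical; the API calls are Panda3D's Python API):

from direct.showbase.ShowBase import ShowBase

base = ShowBase()
pivot = base.render.attachNewNode("pivot")   # parent node that will spin
arrow = base.loader.loadModel("models/box")  # hypothetical model path
arrow.reparentTo(pivot)
arrow.setCompass()  # keep arrow's rotation fixed relative to render

def spin(task):
    pivot.setH(task.time * 90.0)  # spin the parent at 90 degrees per second
    return task.cont

base.taskMgr.add(spin, "spin")  # despite the spin, the arrow keeps its heading
base.run()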
http://megasoft-rapid.com/Massachusetts/error-propagation-rules-exponents.html |
# error propagation rules exponents
Or in matrix notation, $f \approx f_0 + J\,x$, where $J$ is the Jacobian matrix. The exact formula is formed in two steps: i) by squaring Equation 3, and ii) taking the total sum from $$i = 1$$ to $$i = N$$, where $$N$$ is the total number of measurements. (soerp is a python package/library for transparently performing *second-order* calculations with uncertainties and error correlations.)
The final result for velocity would be v = 37.9 ± 1.7 cm/s. Note this is equivalent to the matrix expression for the linear case with $J = A$.
Square Terms: $\left(\dfrac{\delta{x}}{\delta{a}}\right)^2(da)^2,\; \left(\dfrac{\delta{x}}{\delta{b}}\right)^2(db)^2, \;\left(\dfrac{\delta{x}}{\delta{c}}\right)^2(dc)^2\tag{4}$ Cross Terms: $\left(\dfrac{\delta{x}}{da}\right)\left(\dfrac{\delta{x}}{db}\right)da\;db,\;\left(\dfrac{\delta{x}}{da}\right)\left(\dfrac{\delta{x}}{dc}\right)da\;dc,\;\left(\dfrac{\delta{x}}{db}\right)\left(\dfrac{\delta{x}}{dc}\right)db\;dc\tag{5}$ Square terms, due to the nature of squaring, are always positive, and therefore never cancel each other out.
Derivation of Exact Formula: Suppose a certain experiment requires multiple instruments to carry out. Therefore, the propagation of error follows the linear case, above, but replacing the linear coefficients $A_{ik}$ and $A_{jk}$ by the partial derivatives $\frac{\partial f_k}{\partial x_i}$. In the following examples: q is the result of a mathematical operation, δ is the uncertainty associated with a measurement. You see that this rule is quite simple and holds for positive or negative numbers n, which can even be non-integers.
Now that we have done this, the next step is to take the derivative of this equation to obtain $\frac{dV}{dr} = \frac{\Delta V}{\Delta r} = 2cr$. We can now multiply both sides of the equation by $\Delta r$. In both cases, the variance is a simple function of the mean.[9] Therefore, the variance has to be considered in a principal value sense if $p - \mu$ … Uncertainty, in calculus, is defined as $\frac{dx}{x} = \frac{\Delta x}{x} =$ uncertainty. Example 3: Let's look at the example of the radius of an object again.
The end result desired is $$x$$, so that $$x$$ is dependent on a, b, and c, as follows: The standard deviation equation can be rewritten as the variance ($$\sigma_x^2$$) of $$x$$: $\dfrac{\sum{(dx_i)^2}}{N-1}=\dfrac{\sum{(x_i-\bar{x})^2}}{N-1}=\sigma^2_x\tag{8}$ Rewriting Equation 7 using the statistical relationship created yields the Exact Formula for Propagation of Error. Every time data are measured, there is an uncertainty associated with that measurement. (Refer to guide to Measurement and Uncertainty.) If these measurements used in your calculation have some uncertainty associated …
Let's say the equation relating radius and volume is $V(r) = c\,r^2$, where c is a constant, r is the radius and V(r) is the volume. Introduction: Every measurement has an air of uncertainty about it, and not all uncertainties are equal.
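A quick numerical check of the $V(r) = c r^2$ propagation above (hypothetical Python; the values of c, r and dr are invented):

c = 3.0             # constant in V(r) = c * r**2 (illustrative)
r, dr = 2.0, 0.1    # measured radius and its absolute uncertainty (5%)

V = c * r**2
dV = 2 * c * r * dr       # from dV/dr = 2*c*r
print(V, dV, dV / V)      # relative uncertainty dV/V = 2*(dr/r) = 10%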
The rules for indeterminate errors are simpler. doi:10.6028/jres.070c.025. Using Beer's Law, ε = 0.012614 L moles-1 cm-1 Therefore, the $$\sigma_{\epsilon}$$ for this example would be 10.237% of ε, which is 0.001291. Retrieved 2016-04-04. ^ "Propagation of Uncertainty through Mathematical Operations" (PDF).
Each covariance term, σ i j {\displaystyle \sigma _ σ 2} can be expressed in terms of the correlation coefficient ρ i j {\displaystyle \rho _ σ 0\,} by σ i Example: F = mg = (20.4 kg)(-9.80 m/s2) = -199.92 kgm/s2 δF/F = δm/m δF/(-199.92 kgm/s2) = (0.2 kg)/(20.4 kg) δF = ±1.96 kgm/s2 δF = ±2 kgm/s2 F = -199.92 Your cache administrator is webmaster. Joint Committee for Guides in Metrology (2011).
What is the uncertainty of the measurement of the volume of blood pass through the artery? The measured track length is now 50.0 + 0.5 cm, but time is still 1.32 + 0.06 s as before. Since both distance and time measurements have uncertainties associated with them, those uncertainties follow the numbers throughout the calculations and eventually affect your final answer for the velocity of that object. The mean of this transformed random variable is then indeed the scaled Dawson's function 2 σ F ( p − μ 2 σ ) {\displaystyle {\frac {\sqrt {2}}{\sigma }}F\left({\frac {p-\mu }{{\sqrt
For such inverse distributions and for ratio distributions, there can be defined probabilities for intervals, which can be computed either by Monte Carlo simulation or, in some cases, by using the Uncertainty never decreases with calculations, only with better measurements. Now make all negative terms positive, and the resulting equuation is the correct indeterminate error equation. The uncertainty u can be expressed in a number of ways.
doi:10.1016/j.jsv.2012.12.009. ^ Lecomte, Christophe (May 2013). "Exact statistics of systems with uncertainties: an analytical theory of rank-one stochastic dynamic systems". The coefficients in parantheses ( ), and/or the errors themselves, may be negative, so some of the terms may be negative. JCGM. doi:10.1007/s00158-008-0234-7. ^ Hayya, Jack; Armstrong, Donald; Gressis, Nicolas (July 1975). "A Note on the Ratio of Two Normally Distributed Variables".
The problem might state that there is a 5% uncertainty when measuring this radius. Error Propagation Contents: Addition of measured quantities Multiplication of measured quantities Multiplication with a constant Polynomial functions General functions Very often we are facing the situation that we need to measure The system returned: (22) Invalid argument The remote host or network may be down. Most commonly, the uncertainty on a quantity is quantified in terms of the standard deviation, σ, the positive square root of variance, σ2.
The equation for molar absorptivity is ε = A/(lc). In fact, since uncertainty calculations are based on statistics, there are as many different ways to determine uncertainties as there are statistical methods. It may be defined by the absolute error Δx. The indeterminate error equations may be constructed from the determinate error equations by algebraically reaarranging the final resultl into standard form: ΔR = ( )Δx + ( )Δy + ( )Δz
$f=\sum_i^n a_i x_i:\ f=\mathbf{a}\,\mathbf{x}$, with variance $\sigma_f^2$ …
https://math.stackexchange.com/questions/2014489/what-is-the-best-strategy-for-roulette | # What is the best strategy for roulette?
You start with \$10. You have an $\frac{18}{38}$ chance of winning, and if you win you get back double the money you spent. The minimum bet is \$1. How should you split your bets so that you make \$20 the fastest? This question was given to me by a friend, who in turn got the question from another student. So unfortunately I don't know the context or the exact wording. The only way I thought of interpreting the question is to see which strategy has the best expected value. If you bet \$10 directly, your expected value is $E_1 = 10 \cdot \frac{18}{38} - 10 \cdot \frac{20}{38}$. If you bet \$5 twice, your expected value is $E_2 = 2\left(5 \cdot \frac{18}{38} - 5 \cdot \frac{20}{38}\right) = E_1$. I don't see how splitting the bets in different ways would ever make a difference.

• A detailed analysis of the optimal betting strategy to double your money on roulette was done with my question here: math.stackexchange.com/questions/1994169/… Basically, you bet as much as possible on the highest odds, adding in surrounding bets as needed to exactly double your money. – doug Mar 5 '17 at 23:29
• @doug Thanks for the link! – Ovi Mar 6 '17 at 2:01

## 2 Answers

First of all, $\frac{18}{38} = \frac{9}{19}$. Also winning gives you double, so $E_1$ is really $10 \cdot 2 \cdot \frac{9}{19} - 10 \cdot \frac{10}{19} = \frac{80}{19}$. Now: the function for your expected value is $E(m) = 2m \cdot \frac{9}{19} - m \cdot \frac{10}{19} = \frac{8m}{19}$, where $m$ is the money you bet. However, $E(m)$ is additive; this means that $E(a+b) = E(a) + E(b)$ for any two real numbers $a$ and $b$ (obviously though satisfying the given conditions). So you are right that splitting the bets makes no difference.

Edit: Indeed, it technically does matter, but only as far as your risk. So if you bet all 10 dollars on your first bet, then you might win immediately, but you might also lose immediately.

• Well winning does give you double, but if you bet \$10 and win you get \$20, so your net gain is \$10, and this has a $\frac{9}{19}$ chance of happening. And if you lose, your net loss is \$10, with a chance of $\frac{10}{19}$. – Ovi Nov 15 '16 at 1:00
Expected win
$$E_1 = 10 \cdot \dfrac {18}{38} - 10 \cdot \dfrac {20}{38}$$
makes sense if you repeat this process a large number of times, but since you do it once, your chance of losing = 20/38
your chance of winning = 18/38
And if you split the bet into two 5 dollar,
your chance of losing it all = (20/38)*(20/38)
your chance of winning it all = (18/38)*(18/38)
So your chances of winning and losing both decrease, because now we have some intermediary states: 1 win and 1 loss (which is the start state). So splitting it into two bets decreases the risk of going broke, but it also decreases the chance of winning.
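A quick Monte Carlo sketch of this risk point (my own addition, not from the thread; bet sizes and trial count are arbitrary). With $p = 18/38 < 1/2$, bolder bets reach the $\$20$ target more often than many small ones:

```java
import java.util.Random;

// Estimate P(turn $10 into $20 before going broke) for flat bets of a given size.
public class RouletteSim {
    static final Random RNG = new Random();

    static boolean reachesTwenty(int bet) {
        int bank = 10;
        while (bank > 0 && bank < 20) {
            int stake = Math.min(bet, Math.min(bank, 20 - bank)); // never overshoot $20
            if (RNG.nextInt(38) < 18) bank += stake; else bank -= stake;
        }
        return bank >= 20;
    }

    public static void main(String[] args) {
        int trials = 1_000_000;
        for (int bet : new int[]{10, 5, 1}) {
            int wins = 0;
            for (int t = 0; t < trials; t++) if (reachesTwenty(bet)) wins++;
            System.out.printf("bet size %2d: P(reach $20) ~ %.4f%n", bet, (double) wins / trials);
        }
    }
}
```

The gambler's-ruin formula $P = 1/\bigl(1+(q/p)^{10/b}\bigr)$, with bet size $b$, $p = 9/19$, $q = 10/19$, gives about $0.474$, $0.447$, and $0.259$ for $b = 10, 5, 1$, which the simulation should reproduce.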
http://drorbn.net/index.php?title=09-240:HW9 | # 09-240:HW9
Just for fun. A certain $100\times 100$ matrix $A$ of random numbers between $0$ and $1$ is fed into a computer called Golem, capable of about $10^9$ arithmetic operations per second (between floating point numbers, at roughly 14 decimal digits of precision).
• Estimate how long it will take Golem to compute $\det A$ using the explicit recursive formula.
• Assuming you are ready to wait and shuffle screens, will you trust the results? (Remember that even if electrical power will be available to eternity and electronic components will never fail, every time a computer adds or multiplies two 14-digit numbers it makes a rounding error of size around $10^{-14}$.)
• Estimate how long it will take Golem to compute $\det A$ using row operations.
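A rough order-of-magnitude sketch for the first and third estimates (my numbers, not an official solution): cofactor expansion of an $n \times n$ determinant satisfies $T(n) \approx n\,T(n-1)$, so it needs on the order of $n!$ operations, while row reduction needs about $\tfrac{2}{3}n^3$:

$$100! \approx 9.3 \times 10^{157} \;\Rightarrow\; \frac{9.3 \times 10^{157}\ \text{ops}}{10^{9}\ \text{ops/s}} \approx 10^{149}\ \text{s} \approx 3 \times 10^{141}\ \text{years}, \qquad \tfrac{2}{3} \cdot 100^{3} \approx 6.7 \times 10^{5}\ \text{ops} \approx 0.7\ \text{ms}.$$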
http://en.wikipedia.org/wiki/Gibbs_sampler | # Gibbs sampling
In statistics and in statistical physics, Gibbs sampling or a Gibbs sampler is a Markov chain Monte Carlo (MCMC) algorithm for obtaining a sequence of observations which are approximated from a specified multivariate probability distribution (i.e. from the joint probability distribution of two or more random variables), when direct sampling is difficult. This sequence can be used to approximate the joint distribution (e.g., to generate a histogram of the distribution); to approximate the marginal distribution of one of the variables, or some subset of the variables (for example, the unknown parameters or latent variables); or to compute an integral (such as the expected value of one of the variables). Typically, some of the variables correspond to observations whose values are known, and hence do not need to be sampled.
Gibbs sampling is commonly used as a means of statistical inference, especially Bayesian inference. It is a randomized algorithm (i.e. an algorithm that makes use of random numbers, and hence may produce different results each time it is run), and is an alternative to deterministic algorithms for statistical inference such as variational Bayes or the expectation-maximization algorithm (EM).
As with other MCMC algorithms, Gibbs sampling generates a Markov chain of samples, each of which is correlated with nearby samples. As a result, care must be taken if independent samples are desired (typically by thinning the resulting chain of samples by only taking every nth value, e.g. every 100th value). In addition (again, as in other MCMC algorithms), samples from the beginning of the chain (the burn-in period) may not accurately represent the desired distribution.
## Introduction
Gibbs sampling is named after the physicist Josiah Willard Gibbs, in reference to an analogy between the sampling algorithm and statistical physics. The algorithm was described by brothers Stuart and Donald Geman in 1984, some eight decades after the death of Gibbs.[1]
In its basic version, Gibbs sampling is a special case of the Metropolis–Hastings algorithm. However, in its extended versions (see below), it can be considered a general framework for sampling from a large set of variables by sampling each variable (or in some cases, each group of variables) in turn, and can incorporate the Metropolis–Hastings algorithm (or similar methods such as slice sampling) to implement one or more of the sampling steps.
Gibbs sampling is applicable when the joint distribution is not known explicitly or is difficult to sample from directly, but the conditional distribution of each variable is known and is easy (or at least, easier) to sample from. The Gibbs sampling algorithm generates an instance from the distribution of each variable in turn, conditional on the current values of the other variables. It can be shown (see, for example, Gelman et al. 1995) that the sequence of samples constitutes a Markov chain, and the stationary distribution of that Markov chain is just the sought-after joint distribution.
Gibbs sampling is particularly well-adapted to sampling the posterior distribution of a Bayesian network, since Bayesian networks are typically specified as a collection of conditional distributions.
## Implementation
Gibbs sampling, in its basic incarnation, is a special case of the Metropolis–Hastings algorithm. The point of Gibbs sampling is that given a multivariate distribution it is simpler to sample from a conditional distribution than to marginalize by integrating over a joint distribution. Suppose we want to obtain $k$ samples of $\mathbf{X} = (x_1, \dots, x_n)$ from a joint distribution $p(x_1, \dots, x_n)$. Denote the $i$th sample by $\mathbf{X}^{(i)} = (x_1^{(i)}, \dots, x_n^{(i)})$. We proceed as follows:
1. We begin with some initial value $\mathbf{X}^{(0)}$.
2. For each sample $i \in \{1,\dots,k\}$, sample each variable $x_j^{(i)}$ from the conditional distribution $p(x_j|x_1^{(i)},\dots,x_{j-1}^{(i)},x_{j+1}^{(i-1)},\dots,x_n^{(i-1)})$. That is, sample each variable from the distribution of that variable conditioned on all other variables, making use of the most recent values and updating the variable with its new value as soon as it has been sampled. (A minimal code sketch of these two steps follows below.)
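As a concrete illustration of the two steps above, here is a minimal sketch for a toy target of my choosing: a bivariate Gaussian with unit variances and correlation $\rho$, whose full conditionals are themselves Gaussian, $x \mid y \sim N(\rho y,\, 1-\rho^2)$ and $y \mid x \sim N(\rho x,\, 1-\rho^2)$.

```java
import java.util.Random;

// Basic Gibbs sampler for a bivariate Gaussian with correlation rho.
public class GibbsBivariateNormal {
    public static void main(String[] args) {
        Random rng = new Random(42);
        double rho = 0.8, s = Math.sqrt(1 - rho * rho);
        double x = 0.0, y = 0.0;       // step 1: an arbitrary initial value
        int k = 10_000;
        double sumXY = 0.0;
        for (int i = 0; i < k; i++) {  // step 2: sweep the variables in turn
            x = rho * y + s * rng.nextGaussian(); // draw x from p(x | y)
            y = rho * x + s * rng.nextGaussian(); // draw y from p(y | x), using the new x
            sumXY += x * y;
        }
        System.out.println("estimated E[XY] = " + sumXY / k); // should be near rho = 0.8
    }
}
```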
If such sampling is performed, these important facts hold:
• The samples approximate the joint distribution of all variables.
• The marginal distribution of any subset of variables can be approximated by simply considering the samples for that subset of variables, ignoring the rest.
• The expected value of any variable can be approximated by averaging over all the samples.
When performing the sampling:
• The initial values of the variables can be determined randomly or by some other algorithm such as expectation-maximization.
• It is not actually necessary to determine an initial value for the first variable sampled.
• It is common to ignore some number of samples at the beginning (the so-called burn-in period), and then consider only every $n$th sample when averaging values to compute an expectation. For example, the first 1,000 samples might be ignored, and then every 100th sample averaged, throwing away all the rest. The reason for this is that (1) successive samples are not independent of each other but form a Markov chain with some amount of correlation; (2) the stationary distribution of the Markov chain is the desired joint distribution over the variables, but it may take a while for that stationary distribution to be reached. Sometimes, algorithms can be used to determine the amount of autocorrelation between samples and the value of $n$ (the period between samples that are actually used) computed from this, but in practice there is a fair amount of "black magic" involved. (A sketch of this burn-in/thinning bookkeeping appears after this list.)
• The process of simulated annealing is often used to reduce the "random walk" behavior in the early part of the sampling process (i.e. the tendency to move slowly around the sample space, with a high amount of autocorrelation between samples, rather than moving around quickly, as is desired). Other techniques that may reduce autocorrelation are collapsed Gibbs sampling, blocked Gibbs sampling, and ordered overrelaxation; see below.
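A sketch of the burn-in/thinning bookkeeping described in the list above (all numbers are arbitrary, and an autocorrelated AR(1) update stands in for a real Gibbs sweep):

```java
import java.util.Random;

// Discard the first burnIn sweeps, then keep every thin-th sample.
public class ThinningDemo {
    static final Random RNG = new Random(1);
    static double state = 0.0;

    static double gibbsSweep() {           // stand-in for one full Gibbs sweep
        state = 0.9 * state + RNG.nextGaussian();
        return state;
    }

    public static void main(String[] args) {
        int burnIn = 1000, thin = 100, keep = 500;
        double[] kept = new double[keep];
        int stored = 0;
        for (int i = 0; stored < keep; i++) {
            double sample = gibbsSweep();
            if (i >= burnIn && (i - burnIn) % thin == 0) kept[stored++] = sample;
        }
        System.out.println("kept " + stored + " weakly correlated samples");
    }
}
```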
### Relation of conditional distribution and joint distribution
Furthermore, the conditional distribution of one variable given all others is proportional to the joint distribution:
$p(x_j|x_1,\dots,x_{j-1},x_{j+1},\dots,x_n) = \frac{p(x_1,\dots,x_n)}{p(x_1,\dots,x_{j-1},x_{j+1},\dots,x_n)} \propto p(x_1,\dots,x_n)$
"Proportional to" in this case means that the denominator is not a function of $x_j$ and thus is the same for all values of $x_j$; it forms part of the normalization constant for the distribution over $x_j$. In practice, to determine the nature of the conditional distribution of a factor $x_j$, it is easiest to factor the joint distribution according to the individual conditional distributions defined by the graphical model over the variables, ignore all factors that are not functions of $x_j$ (all of which, together with the denominator above, constitute the normalization constant), and then reinstate the normalization constant at the end, as necessary. In practice, this means doing one of three things:
1. If the distribution is discrete, the individual probabilities of all possible values of $x_j$ are computed, and then summed to find the normalization constant. (A code sketch of this case appears after this list.)
2. If the distribution is continuous and of a known form, the normalization constant will also be known.
3. In other cases, the normalization constant can usually be ignored, as most sampling methods do not require it.
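For case 1, a sketch of the normalize-and-sample step (the weights are hypothetical unnormalized evaluations of the joint at each candidate value of $x_j$):

```java
import java.util.Random;

// Sample a discrete value whose probability is proportional to 'unnormalized'.
public class DiscreteConditional {
    static int sampleIndex(double[] unnormalized, Random rng) {
        double z = 0.0;
        for (double w : unnormalized) z += w;   // the normalization constant
        double u = rng.nextDouble() * z;        // inverse-CDF draw
        double acc = 0.0;
        for (int v = 0; v < unnormalized.length; v++) {
            acc += unnormalized[v];
            if (u <= acc) return v;
        }
        return unnormalized.length - 1;         // guard against rounding error
    }

    public static void main(String[] args) {
        double[] weights = {0.2, 1.3, 0.5};     // made-up joint evaluations
        System.out.println(sampleIndex(weights, new Random()));
    }
}
```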
## Inference
Gibbs sampling is commonly used for statistical inference (e.g. determining the best value of a parameter, such as determining the number of people likely to shop at a particular store on a given day, the candidate a voter will most likely vote for, etc.). The idea is that observed data is incorporated into the sampling process by creating separate variables for each piece of observed data and fixing the variables in question to their observed values, rather than sampling from those variables. The distribution of the remaining variables is then effectively a posterior distribution conditioned on the observed data.
The most likely value of a desired parameter (the mode) could then simply be selected by choosing the sample value that occurs most commonly; this is essentially equivalent to maximum a posteriori estimation of a parameter. (Since the parameters are usually continuous, it is often necessary to "bin" the sampled values into one of a finite number of ranges or "bins" in order to get a meaningful estimate of the mode.) More commonly, however, the expected value (mean or average) of the sampled values is chosen; this is a Bayes estimator that takes advantage of the additional data about the entire distribution that is available from Bayesian sampling, whereas a maximization algorithm such as expectation maximization (EM) is capable of only returning a single point from the distribution. For example, for a unimodal distribution the mean (expected value) is usually similar to the mode (most common value), but if the distribution is skewed in one direction, the mean will be moved in that direction, which effectively accounts for the extra probability mass in that direction. (Note, however, that if a distribution is multimodal, the expected value may not return a meaningful point, and any of the modes is typically a better choice.)
Although some of the variables typically correspond to parameters of interest, others are uninteresting ("nuisance") variables introduced into the model to properly express the relationships among variables. Although the sampled values represent the joint distribution over all variables, the nuisance variables can simply be ignored when computing expected values or modes; this is equivalent to marginalizing over the nuisance variables. When a value for multiple variables is desired, the expected value is simply computed over each variable separately. (When computing the mode, however, all variables must be considered together.)
Supervised learning, unsupervised learning and semi-supervised learning (aka learning with missing values) can all be handled by simply fixing the values of all variables whose values are known, and sampling from the remainder.
For observed data, there will be one variable for each observation — rather than, for example, one variable corresponding to the sample mean or sample variance of a set of observations. In fact, there generally will be no variables at all corresponding to concepts such as "sample mean" or "sample variance". Instead, in such a case there will be variables representing the unknown true mean and true variance, and the determination of sample values for these variables results automatically from the operation of the Gibbs sampler.
Generalized linear models (i.e. variations of linear regression) can sometimes be handled by Gibbs sampling as well. For example, probit regression for determining the probability of a given binary (yes/no) choice, with normally distributed priors placed over the regression coefficients, can be implemented with Gibbs sampling because it is possible to add additional variables and take advantage of conjugacy. However, logistic regression cannot be handled this way. One possibility is to approximate the logistic function with a mixture of normal distributions (typically 7–9 components). More commonly, however, Metropolis–Hastings is used instead of Gibbs sampling.
## Mathematical background
Suppose that a sample $X$ is taken from a distribution depending on a parameter vector $\theta \in \Theta$ of length $d$, with prior distribution $g(\theta_1, \ldots , \theta_d)$. It may be that $d$ is very large and that numerical integration to find the marginal densities of the $\theta_i$ would be computationally expensive. Then an alternative method of calculating the marginal densities is to create a Markov chain on the space $\Theta$ by repeating these two steps:
1. Pick a random index $1 \leq j \leq d$
2. Pick a new value for $\theta_j$ according to $g(\theta_1, \ldots , \theta_{j-1} , \, \cdot \, , \theta_{j+1} , \ldots , \theta_d )$
These steps define a reversible Markov chain with the desired invariant distribution $g$. This can be proved as follows. Define $x \sim_j y$ if $x_i = y_i$ for all $i \neq j$ and let $p_{xy}$ denote the probability of a jump from $x \in \Theta$ to $y \in \Theta$. Then, the transition probabilities are
$p_{xy} = \begin{cases} \frac{1}{d}\frac{g(y)}{\sum_{z \in \Theta: z \sim_j x} g(z) } & x \sim_j y \\ 0 & \text{otherwise} \end{cases}$
So
$g(x) p_{xy} = \frac{1}{d}\frac{ g(x) g(y)}{\sum_{z \in \Theta: z \sim_j x} g(z) } = \frac{1}{d}\frac{ g(y) g(x)}{\sum_{z \in \Theta: z \sim_j y} g(z) } = g(y) p_{yx}$
since $x \sim_j y$ is an equivalence relation. Thus the detailed balance equations are satisfied, implying the chain is reversible and it has invariant distribution $g$.
In practice, the suffix $j$ is not chosen at random, and the chain cycles through the suffixes in order. In general this gives a non-stationary Markov process, but each individual step will still be reversible, and the overall process will still have the desired stationary distribution (as long as the chain can access all states under the fixed ordering).
## Variations and extensions
Numerous variations of the basic Gibbs sampler exist. The goal of these variations is to reduce the autocorrelation between samples sufficiently to overcome any added computational costs.
### Collapsed Gibbs sampler
• A collapsed Gibbs sampler integrates out (marginalizes over) one or more variables when sampling for some other variable. For example, imagine that a model consists of three variables A, B, and C. A simple Gibbs sampler would sample from p(A|B,C), then p(B|A,C), then p(C|A,B). A collapsed Gibbs sampler might replace the sampling step for A with a sample taken from the marginal distribution p(A|C), with variable B integrated out in this case. Alternatively, variable B could be collapsed out entirely, alternately sampling from p(A|C) and p(C|A) and not sampling over B at all. The distribution over a variable A that arises when collapsing a parent variable B is called a compound distribution; sampling from this distribution is generally tractable when B is the conjugate prior for A, particularly when A and B are members of the exponential family. For more information, see the article on compound distributions or Liu (1994).[2]
#### Implementing a collapsed Gibbs sampler
##### Collapsing Dirichlet distributions
In hierarchical Bayesian models with categorical variables, such as latent Dirichlet allocation and various other models used in natural language processing, it is quite common to collapse out the Dirichlet distributions that are typically used as prior distributions over the categorical variables. The result of this collapsing introduces dependencies among all the categorical variables dependent on a given Dirichlet prior, and the joint distribution of these variables after collapsing is a Dirichlet-multinomial distribution. The conditional distribution of a given categorical variable in this distribution, conditioned on the others, assumes an extremely simple form that makes Gibbs sampling even easier than if the collapsing had not been done. The rules are as follows:
1. Collapsing out a Dirichlet prior node affects only the parent and children nodes of the prior. Since the parent is often a constant, it is typically only the children that we need to worry about.
2. Collapsing out a Dirichlet prior introduces dependencies among all the categorical children dependent on that prior — but no extra dependencies among any other categorical children. (This is important to keep in mind, for example, when there are multiple Dirichlet priors related by the same hyperprior. Each Dirichlet prior can be independently collapsed and affects only its direct children.)
3. After collapsing, the conditional distribution of one dependent child given the others assumes a very simple form: the probability of seeing a given value is proportional to the sum of the corresponding hyperprior for this value and the count of all of the other dependent nodes assuming the same value. Nodes not dependent on the same prior must not be counted. Note that the same rule applies in other iterative inference methods, such as variational Bayes or expectation maximization; however, if the method involves keeping partial counts, then the partial counts for the value in question must be summed across all the other dependent nodes. Sometimes this summed-up partial count is termed the expected count or similar. Note also that the probability is proportional to the resulting value; the actual probability must be determined by normalizing across all the possible values that the categorical variable can take (i.e. adding up the computed result for each possible value of the categorical variable, and dividing all the computed results by this sum). (A code sketch of this rule appears after this list.)
4. If a given categorical node has dependent children (e.g. when it is a latent variable in a mixture model), the value computed in the previous step (expected count plus prior, or whatever is computed) must be multiplied by the actual conditional probabilities (not a computed value that is proportional to the probability!) of all children given their parents. See the article on the Dirichlet-multinomial distribution for a detailed discussion.
5. In the case where the group membership of the nodes dependent on a given Dirichlet prior may change dynamically depending on some other variable (e.g. a categorical variable indexed by another latent categorical variable, as in a topic model), the same expected counts are still computed, but need to be done carefully so that the correct set of variables is included. See the article on the Dirichlet-multinomial distribution for more discussion, including in the context of a topic model.
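A sketch of the counting rule in item 3, for a single collapsed symmetric Dirichlet prior over $K$ outcomes and no dependent children (so the extra likelihood factor of item 4 is omitted); the assignments and hyperparameters are made-up:

```java
import java.util.Random;

// Collapsed-Gibbs update for one categorical node: P(value = v) is
// proportional to alpha[v] plus the count of the OTHER nodes assigned v.
public class CollapsedCategoricalStep {
    static int resample(int node, int[] assign, double[] alpha, int K, Random rng) {
        double[] w = new double[K];
        double z = 0.0;
        for (int v = 0; v < K; v++) {
            int count = 0;
            for (int m = 0; m < assign.length; m++)
                if (m != node && assign[m] == v) count++; // exclude the node itself
            w[v] = alpha[v] + count;
            z += w[v];
        }
        double u = rng.nextDouble() * z;                  // normalize and draw
        for (int v = 0; v < K; v++) { u -= w[v]; if (u <= 0) return v; }
        return K - 1;
    }

    public static void main(String[] args) {
        int[] assign = {0, 1, 1, 2, 0};   // current assignments of five nodes
        double[] alpha = {0.5, 0.5, 0.5}; // symmetric Dirichlet hyperparameters
        assign[2] = resample(2, assign, alpha, 3, new Random());
        System.out.println("new value for node 2: " + assign[2]);
    }
}
```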
##### Collapsing other conjugate priors
In general, any conjugate prior can be collapsed out, if its only children have distributions conjugate to it. The relevant math is discussed in the article on compound distributions. If there is only one child node, the result will often assume a known distribution. For example, collapsing an inverse-gamma-distributed variance out of a network with a single Gaussian child will yield a Student's t-distribution. (For that matter, collapsing both the mean and variance of a single Gaussian child will still yield a Student's t-distribution, provided both are conjugate, i.e. Gaussian mean, inverse-gamma variance.)
If there are multiple child nodes, they will all become dependent, as in the Dirichlet-categorical case. The resulting joint distribution will have a closed form that resembles in some ways the compound distribution, although it will have a product of a number of factors, one for each child node, in it.
In addition, and most importantly, the resulting conditional distribution of one of the child nodes given the others (and also given the parents of the collapsed node(s), but not given the children of the child nodes) will have the same density as the posterior predictive distribution of all the remaining child nodes. Furthermore, the posterior predictive distribution has the same density as the basic compound distribution of a single node, although with different parameters. The general formula is given in the article on compound distributions.
For example, given a Bayes network with a set of conditionally independent identically distributed Gaussian-distributed nodes with conjugate prior distributions placed on the mean and variance, the conditional distribution of one node given the others after compounding out both the mean and variance will be a Student's t-distribution. Similarly, the result of compounding out the gamma prior of a number of Poisson-distributed nodes causes the conditional distribution of one node given the others to assume a negative binomial distribution.
In these cases where compounding produces a well-known distribution, efficient sampling procedures often exist, and using them will often (although not necessarily) be more efficient than not collapsing, and instead sampling both prior and child nodes separately. However, in the case where the compound distribution is not well-known, it may not be easy to sample from, since it generally will not belong to the exponential family and typically will not be log-concave (which would make it easy to sample using adaptive rejection sampling, since a closed form always exists).
In the case where the child nodes of the collapsed nodes themselves have children, the conditional distribution of one of these child nodes given all other nodes in the graph will have to take into account the distribution of these second-level children. In particular, the resulting conditional distribution will be proportional to a product of the compound distribution as defined above, and the conditional distributions of all of the child nodes given their parents (but not given their own children). This follows from the fact that the full conditional distribution is proportional to the joint distribution. If the child nodes of the collapsed nodes are continuous, this distribution will generally not be of a known form, and may well be difficult to sample from despite the fact that a closed form can be written, for the same reasons as described above for non-well-known compound distributions. However, in the particular case that the child nodes are discrete, sampling is feasible, regardless of whether the children of these child nodes are continuous or discrete. In fact, the principle involved here is described in fair detail in the article on the Dirichlet-multinomial distribution.
### Gibbs sampler with ordered overrelaxation
• A Gibbs sampler with ordered overrelaxation samples a given odd number of candidate values for $x_j^{(i)}$ at any given step and sorts them, along with the single value for $x_j^{(i-1)}$, according to some well-defined ordering. If $x_j^{(i-1)}$ is the sth smallest in the sorted list then $x_j^{(i)}$ is selected as the sth largest in the sorted list. For more information, see Neal (1995).[3] (A code sketch of one such update follows.)
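A sketch of one such update, taking the conditional to be a standard Gaussian for concreteness ($K$ is the odd number of candidates; the mirrored rank makes successive samples negatively correlated, which suppresses random-walk behavior):

```java
import java.util.Arrays;
import java.util.Random;

// One ordered-overrelaxation step: draw K candidates from the conditional,
// sort them together with the current value, and mirror the current rank.
public class OrderedOverrelaxation {
    static double update(double current, int K, Random rng) {
        double[] pool = new double[K + 1];
        for (int i = 0; i < K; i++) pool[i] = rng.nextGaussian(); // candidates
        pool[K] = current;
        Arrays.sort(pool);
        int s = Arrays.binarySearch(pool, current); // rank of the current value
        return pool[K - s];                         // s-th smallest -> s-th largest
    }

    public static void main(String[] args) {
        Random rng = new Random(7);
        double x = 0.3;
        for (int t = 0; t < 5; t++) {
            x = update(x, 9, rng);
            System.out.println(x);
        }
    }
}
```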
### Other extensions
It is also possible to extend Gibbs sampling in various ways. For example, in the case of variables whose conditional distribution is not easy to sample from, a single iteration of slice sampling or the Metropolis-Hastings algorithm can be used to sample from the variables in question. It is also possible to incorporate variables that are not random variables, but whose value is deterministically computed from other variables. Generalized linear models, e.g. logistic regression (aka "maximum entropy models"), can be incorporated in this fashion. (BUGS, for example, allows this type of mixing of models.)
## Failure modes
There are two ways that Gibbs sampling can fail. The first is when there are islands of high-probability states, with no paths between them. For example, consider a probability distribution over 2-bit vectors, where the vectors (0,0) and (1,1) each have probability ½, but the other two vectors (0,1) and (1,0) have probability zero. Gibbs sampling will become trapped in one of the two high-probability vectors, and will never reach the other one. More generally, for any distribution over high-dimensional, real-valued vectors, if two particular elements of the vector are perfectly correlated (or perfectly anti-correlated), those two elements will become stuck, and Gibbs sampling will never be able to change them.
The second problem can happen even when all states have nonzero probability and there is only a single island of high-probability states. For example, consider a probability distribution over 100-bit vectors, where the all-zeros vector occurs with probability ½, and all other vectors are equally probable, and so have a probability of $\frac{1}{2(2^{100}-1)}$ each. If you want to estimate the probability of the zero vector, it would be sufficient to take 100 or 1000 samples from the true distribution. That would very likely give an answer very close to ½. But you would probably have to take more than $2^{100}$ samples from Gibbs sampling to get the same result. No computer could do this in a lifetime.
This problem occurs no matter how long the burn-in period is. This is because in the true distribution, the zero vector occurs half the time, and those occurrences are randomly mixed in with the nonzero vectors. Even a small sample will see both zero and nonzero vectors. But Gibbs sampling will alternate between returning only the zero vector for long periods (about $2^{99}$ in a row), then only nonzero vectors for long periods (about $2^{99}$ in a row). Thus convergence to the true distribution is extremely slow, requiring much more than $2^{99}$ steps; taking this many steps is not computationally feasible in a reasonable time period. The slow convergence here can be seen as a consequence of the curse of dimensionality.
Note that a problem like this can be solved by block sampling the entire 100-bit vector at once. (This assumes that the 100-bit vector is part of a larger set of variables. If this vector is the only thing being sampled, then block sampling is equivalent to not doing Gibbs sampling at all, which by hypothesis would be difficult.)
## Software
The OpenBUGS software (Bayesian inference Using Gibbs Sampling) does a Bayesian analysis of complex statistical models using Markov chain Monte Carlo.
JAGS (Just another Gibbs sampler) is a GPL program for analysis of Bayesian hierarchical models using Markov Chain Monte Carlo.
Church is free software for performing Gibbs inference over arbitrary distributions that are specified as probabilistic programs.
PyMC is an open source Python library for Bayesian learning of general Probabilistic Graphical Model with advanced features and easy to use interface.[4]
## Notes
1. ^ Geman, S.; Geman, D. (1984). "Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images". IEEE Transactions on Pattern Analysis and Machine Intelligence 6 (6): 721–741. doi:10.1109/TPAMI.1984.4767596.
2. ^ Liu, Jun S. (September 1994). "The Collapsed Gibbs Sampler in Bayesian Computations with Applications to a Gene Regulation Problem". Journal of the American Statistical Association 89 (427): 958–966. doi:10.2307/2290921. JSTOR 2290921.
3. ^ Neal, Radford M. (1995). Suppressing Random Walks in Markov Chain Monte Carlo Using Ordered Overrelaxation (Technical report). University of Toronto, Department of Statistics. 9508.
4. ^
http://www.eng.fsu.edu/~dommelen/pdes/style_a/burgers.html | Subsections
### 3.5 The inviscid Burgers’ equation
The inviscid Burgers’ equation is a model for nonlinear wave propagation, especially in fluid mechanics. It takes the form
$$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = 0 \tag{3.5}$$

The characteristic equations are, according to (3.4),

$$\frac{dx}{dt} = u, \qquad \frac{du}{dt} = 0.$$

The second of these shows that $u$ is constant along the characteristics of the Burgers' equation, and then the first equation shows that the characteristic lines are straight lines in the $x,t$-plane.

The solution of the two characteristic ordinary differential equations above is simple:

$$u = u_0, \qquad x = x_0 + u_0 t,$$

with $u_0$ and $x_0$ integration constants. The general solution of the partial differential equation may be found in terms of $x$ and $t$ by noting that $u$ must be a function of $x_0 = x - ut$, and then substituting for $x_0$:

$$u = f(x - ut).$$

Some special cases are singular in those terms; they require that $x$ is written in terms of $u$ and $t$:

$$x = ut + g(u).$$

Normally, either expression may be taken to be the general solution of the partial differential equation. The one-parameter function $f$, respectively $g$, remains to be identified from whatever initial or boundary conditions there are.
#### 3.5.1 Wave steepening
The given solution of the inviscid Burgers' equation shows that the characteristics are straight lines. This is troubling, since straight lines are likely to intersect. In particular, since the point on a given characteristic line propagates with speed $u$, faster points behind less fast ones will eventually overtake them.
As an example, consider the following problem:
This problem is self-evidently periodic in $x$. Figure 3.4 shows how the characteristics intersect starting from time 1.

Figure 3.5 shows profiles of $u$ versus $x$ at various times. Note that for times greater than one, $u$ becomes a multiple-valued function. Physically, this is normally not acceptable: you can not have three different pressures or flow velocities at the same point.
#### 3.5.2 Shocks
The previous subsection noted that solutions of hyperbolic equations with intersecting characteristics are usually not physically acceptable. In fact, the desired solution for the inviscid Burgers' equation is usually taken to be the solution of the viscous Burgers' equation:

$$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial^2 u}{\partial x^2}$$

in the limit that the coefficient of viscosity $\nu$ becomes zero.

The viscous Burgers' equation, too, is analytically solvable, though the solution will be skipped here. The bottom line is that it does not have multiple-valued solutions. So what does the solution of the viscous Burgers' equation look like in the limit that the viscosity becomes zero? Like figures 3.6 and 3.7. A jump discontinuity called a "shock" develops in $u$. The characteristics run into this shock and disappear.
The question now is of course, what determines the precise location of the shock? Clearly, it should be somewhere in the region of intersecting characteristics, but that still leaves a considerable uncertainty. Equations for the shock are needed. They usually follow from the requirement that certain quantities remain conserved in the solution. This is addressed in the next subsections.
#### 3.5.3 Conservation laws
Often, partial differential equations express conservation of some physical quantity. For example, the continuity equation for the density of a fluid expresses conservation of mass of the fluid: the mass of a region of fluid is found by integrating the density over the volume of the region, and the continuity equation implies that mass is preserved in time.
The viscous Burgers' equation, too, preserves some quantity. To see what, integrate the equation over an interval from some position $x_1$ to some position $x_2$:

$$\int_{x_1}^{x_2} \frac{\partial u}{\partial t}\, dx + \int_{x_1}^{x_2} u \frac{\partial u}{\partial x}\, dx = \nu \int_{x_1}^{x_2} \frac{\partial^2 u}{\partial x^2}\, dx.$$

The last two integrals can be integrated after noting that $u\,\partial u/\partial x = \partial(\tfrac12 u^2)/\partial x$, to give

$$\int_{x_1}^{x_2} \frac{\partial u}{\partial t}\, dx + \left[\tfrac12 u^2 - \nu \frac{\partial u}{\partial x}\right]_{x_1}^{x_2} = 0.$$

First consider the case that the problem is periodic and the integral is over a full period. Then the quantities at $x_1$ and $x_2$ are the same because of periodicity and drop away against each other. This shows that

$$\frac{d}{dt} \int_{x_1}^{x_2} u\, dx = 0,$$

so that $\int u\, dx$ over a period is a conserved quantity, unchanging in time. The unknown $u$ itself can then be identified as the amount of conserved quantity per unit length.

Next consider the case that the region of integration is not a period. In that case, the Leibniz rule for differentiating integrals says that

$$\frac{d}{dt} \int_{x_1(t)}^{x_2(t)} u\, dx = \int_{x_1}^{x_2} \frac{\partial u}{\partial t}\, dx + u \dot x_2 \Big|_{x_2} - u \dot x_1 \Big|_{x_1},$$

and plugging that into the integrated equation:

$$\frac{d}{dt} \int_{x_1}^{x_2} u\, dx = \left( u \dot x_2 - \tfrac12 u^2 + \nu \frac{\partial u}{\partial x} \right)\Bigg|_{x_2} - \left( u \dot x_1 - \tfrac12 u^2 + \nu \frac{\partial u}{\partial x} \right)\Bigg|_{x_1}.$$

Now think of interval $(x_1, x_2)$ as being preceded by a similar interval $(x_0, x_1)$, with $x_0 < x_1$. It is evident from the above expression that the reduction in the value of $\int_{x_1}^{x_2} u\, dx$ caused by the term

$$- \left( u \dot x_1 - \tfrac12 u^2 + \nu \frac{\partial u}{\partial x} \right)\Bigg|_{x_1}$$

is fully compensated for by a corresponding increase in $\int_{x_0}^{x_1} u\, dx$, because the same term shows up there as

$$+ \left( u \dot x_1 - \tfrac12 u^2 + \nu \frac{\partial u}{\partial x} \right)\Bigg|_{x_1}$$

with a plus sign. So whatever goes out of interval $(x_1, x_2)$ at $x_1$ goes into interval $(x_0, x_1)$. The same way, whatever comes in at $x_2$ comes out of the region beyond $x_2$. It follows that $\int u\, dx$ is still preserved.

It may be noted that in

$$\left( u \dot x - \tfrac12 u^2 + \nu \frac{\partial u}{\partial x} \right)\Bigg|_{x_2},$$

the first term represents the amount of conserved quantity being swept into the interval by the motion of its end point $x_2$. Typically, the second term physically corresponds to the amount of conserved quantity being convected out by motion of the substance, and the final term to the amount diffusing in by random molecular motion.
#### 3.5.4 Shock relation
If the solution of the inviscid Burgers’ equation is indeed supposed to approximate the solution of the viscous equation when the coefficient of viscosity becomes zero, it puts a condition on how the shocks must move. The shock is vanishingly thin and can only hold a negligible amount of conserved material. So, whatever goes into the shock at one side must come out at the other side.
The amounts going in and out of a region were derived in the previous section for an interval $(x_1, x_2)$. Taking point $x_1$ just before the shock and $x_2$ just behind the shock, so that to practical purposes $\dot x_1 = \dot x_2 = \dot x_s$ with $\dot x_s$ the shock velocity, equality of the amounts going in and out requires

$$u_1 \dot x_s - \tfrac12 u_1^2 = u_2 \dot x_s - \tfrac12 u_2^2.$$

Solving for the shock velocity $\dot x_s$, you get

$$\dot x_s = \frac{\tfrac12 u_1^2 - \tfrac12 u_2^2}{u_1 - u_2} = \frac{u_1 + u_2}{2}.$$

It follows that the shock must move with the average of the characteristic velocities $u_1$ and $u_2$ just before and after the shock. Figures 3.6 and 3.7 were obtained by finding the shock position from that relationship.

Shock relations, like this one for Burgers' equation, are known as Rankine-Hugoniot relations in fluid mechanics. When deriving shock relations, make sure that the unknown variables are the conserved quantities per unit volume. If you multiply the inviscid Burgers' equation by $u$, you get

$$\frac{\partial}{\partial t}\left(\tfrac12 u^2\right) + \frac{\partial}{\partial x}\left(\tfrac13 u^3\right) = 0,$$

from which it can be seen that as far as the inviscid Burgers' equation is concerned, $\tfrac12 u^2$ is also a conserved quantity. But the shocks you would compute using the corresponding conservation law are going to be different, and wrong if the true conserved quantity across shocks is the $u$ of the viscous Burgers' equation.
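To see the shock relation at work numerically, here is a small sketch (mine, not from the notes): a first-order Godunov scheme for the inviscid Burgers' equation applied to a Riemann problem with $u_1 = 1$ on the left and $u_2 = 0$ on the right. The computed jump travels at the predicted speed $\tfrac12(u_1 + u_2) = \tfrac12$.

```java
// First-order Godunov scheme for u_t + (u^2/2)_x = 0.
public class BurgersGodunov {
    static double flux(double uL, double uR) {  // Godunov flux for the convex flux u^2/2
        if (uL <= uR) {                         // rarefaction case: minimize f over [uL, uR]
            if (uL > 0) return 0.5 * uL * uL;
            if (uR < 0) return 0.5 * uR * uR;
            return 0.0;
        }
        return Math.max(0.5 * uL * uL, 0.5 * uR * uR); // shock case: the maximum
    }

    public static void main(String[] args) {
        int n = 400;
        double dx = 0.01, dt = 0.005;           // CFL number = max|u| dt/dx = 0.5
        double[] u = new double[n];
        for (int i = 0; i < n; i++) u[i] = (i < n / 2) ? 1.0 : 0.0;
        for (int step = 0; step < 200; step++) { // integrate to t = 1
            double[] un = u.clone();
            for (int i = 1; i < n - 1; i++)
                un[i] = u[i] - dt / dx * (flux(u[i], u[i + 1]) - flux(u[i - 1], u[i]));
            u = un;
        }
        for (int i = 0; i < n; i++)
            if (u[i] < 0.5) {                    // locate the jump; the exact position is x = 0.5
                System.out.println("shock near x = " + ((i - n / 2) * dx) + " at t = 1");
                break;
            }
    }
}
```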
#### 3.5.5 The entropy condition
Consider now Burgers' equation for a unit "pulse" initial condition:

$$u(x,0) = \begin{cases} 1 & 0 < x < 1 \\ 0 & \text{otherwise.} \end{cases}$$

This problem has a simple solution that is also quite wrong. It is shown in figure 3.8. It implies that the pulse moves with velocity $\tfrac12$ towards the right. Note that both shocks satisfy the shock condition of the previous section; the values $u = 1$ at one side of each shock and $u = 0$ at the other side average in each case to $\tfrac12$.
The problem is with the left shock. Characteristics should run into the shock for increasing time like for the right shock, not emerge out of it as happens for the left one. In fluid mechanics, the left shock is what is called an “expansion” shock. It produces an adiabatic decrease in entropy over the shock, something the second law of thermodynamics does not allow. For that reason, the condition that characteristics must run into the shock is called the “entropy condition.”
The correct solution is shown in figure 3.9. The left jump in the initial condition spreads out into what is called an "expansion fan." Unlike the shock, the expansion fan is a perfectly good nonsingular solution of the Burgers' equation, though you must use the solution form $x = ut + g(u)$. The solution form $u = f(x - ut)$ does not work since $x_0$ is the same, zero, on all characteristics of the fan, and $u$ must be different on different characteristics. Conversely, in the other three regions, you must use the solution form $u = f(x - ut)$, with $f$ either uniformly zero or uniformly one. There the solution form $x = ut + g(u)$ does not work since $u$ is the same for all characteristics and $x_0$ is not.
It may also be observed that the entropy condition is necessary to get a unique solution; both figures 3.8 and 3.9 satisfy the Burgers' equation at all continuous points and the shock conditions at all discontinuities.
https://kerodon.net/tag/01QQ | # Kerodon
Construction 9.10.6.1 (Contravariant Transport). Let $U: \operatorname{\mathcal{D}}\rightarrow \operatorname{\mathcal{C}}$ be a fibration in sets. Using Example 4.2.3.4 and Remark 4.2.3.6, we see that for each object $X \in \operatorname{\mathcal{C}}$, the fiber $\operatorname{\mathcal{D}}_{X} = \{ X\} \times _{\operatorname{\mathcal{C}}} \operatorname{\mathcal{D}}$ is a discrete category.
Let $f: X \rightarrow Y$ be a morphism in the category $\operatorname{\mathcal{C}}$. For each object $\widetilde{Y} \in \operatorname{\mathcal{D}}_{Y}$, our assumption that $U$ is a fibration in sets guarantees that there exists a unique pair $(\widetilde{X}, \widetilde{f} )$, where $\widetilde{X}$ is an object of the fiber $\operatorname{\mathcal{D}}_{X}$ and $\widetilde{f}: \widetilde{X} \rightarrow \widetilde{Y}$ satisfies $U( \widetilde{f} ) = f$. Note that the object $\widetilde{X}$ depends only on $f$ and $\widetilde{Y}$. To emphasize this dependence, we will denote $\widetilde{X}$ by $f^{\ast }( \widetilde{Y} )$. The construction $\widetilde{Y} \mapsto f^{\ast }( \widetilde{Y} )$ then determines a function $f^{\ast }: \operatorname{Ob}( \operatorname{\mathcal{D}}_{Y} ) \rightarrow \operatorname{Ob}( \operatorname{\mathcal{D}}_{X} )$.
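A standard consequence of the uniqueness of these lifts (recorded here as a remark; it is not part of the text quoted above): for composable morphisms $f: X \rightarrow Y$ and $g: Y \rightarrow Z$, the contravariant transport functions are compatible with composition and identities,

$$(g \circ f)^{\ast} = f^{\ast} \circ g^{\ast}, \qquad (\operatorname{id}_X)^{\ast} = \operatorname{id},$$

so the construction $X \mapsto \operatorname{Ob}( \operatorname{\mathcal{D}}_{X} )$, $f \mapsto f^{\ast}$ determines a functor $\operatorname{\mathcal{C}}^{\operatorname{op}} \rightarrow \operatorname{Set}$.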
http://www.acmerblog.com/POJ-1800-Magic-Trick-blog-518.html | 2013
11-10
# Magic Trick
Background
Warning! This problem statement contains a serious spoiler. It shows the trick behind a magic trick. So if you still want to be amazed in case somebody shows this trick to you then do NOT read the rest of this problem statement. Stop reading… NOW!
Problem
Well, you’re still reading, so obviously you have no respect for magic tricks. Be ashamed, please. Ok,here’s what happens. The magician shows you a text with three paragraphs like this one:
It was a horribly dark night.
The moon was shining, but not much.
A suspicious stranger entered the
bar and went straight to John Doe.
“I’m searching for aliens, can I
He then asks you to secretly pick a word in the first paragraph. Then you shall do this:
1. Count the number of characters in your word (call that number X).
2. From your word move on X words.
Repeat these two steps until you reach the third paragraph. Then tell the magician that you’re done.After some hocus pocus he tells you the word you ended up with.
For our purposes, a “word” is defined as consecutive letters (A-Z,a-z). For example, “I’m” is regarded as two separate words.
For example, let’s say you choose “night” in the above example. It has 5 characters, so you move on five words: “The”, “moon”, “was”, “shining”, “but”. Our new word is “but”. You move on 3 words to “A”,then 1 to “suspicious”, then 10 to “Doe” and then 3 to “searching”. Now you tell the magician that you’re ready. He says that you’ve reached “searching”.
How can he know? Well, it doesn’t matter where you start in the first paragraph, you’ll always end up at “searching”. The magician needs new texts and asks you to help him to find all possible outcomes (in the above example, “searching” is the only one). Apart from words, a possible outcome is “-outside-”,which means it’s possible to jump behind the third paragraph. Also, he’s not interested if more than three outcomes are possible.
The first line contains the number of scenarios. For each scenario, three lines are given, representing the three paragraphs. No line is longer than 100000 characters. Every paragraph will contain at least one word.
The output for every scenario begins with a line containing “Scenario #i:”, where i is the number of the scenario starting at 1. Then print the possible outcomes (possibly including “-outside-”) in alphabetical/lexicographical order, one word per line. Write words in lower case. Don’t list outcomes more than once. If however there are more than three possible outcomes, then print “-too many-” and do *not* print any of them. Terminate the output for the scenario with a blank line.
4
It was a horribly dark night. The moon was shining, but not much.
A suspicious stranger entered the bar and went straight to John Doe.
"I'm searching for aliens, can I borrow your computer?", he said.
!pablo espanol!
!pablo espanol!
!pablo espanol!
c'mon howLongOrShortCanASingleWordBe?
a b c d e f g f e d c b a
54254#@%$^%^@4626^#^%^$hahaha#$@%#$@63456326
Hello buddy dance tango!
This is too much for me...
Scenario #1:
searching
Scenario #2:
-outside-
espanol
Scenario #3:
-outside-
hahaha
Scenario #4:
-too many-
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Set;
import java.util.TreeSet;
public class Main {
    public static void main(String[] args) throws NumberFormatException, IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        int T = Integer.parseInt(in.readLine().trim());
        for (int i = 1; i <= T; i++) {
            int n = 0, last = 0;                 // n: running word index, last: furthest reachable position
            boolean[] use = new boolean[300010]; // use[k]: word k is reachable from some start word
            Set<String> strset = new TreeSet<String>(); // TreeSet keeps outcomes in sorted order
            for (int p = 0; p < 3; p++) {
                String str = in.readLine();
                str = str.replaceAll("[^a-zA-Z]+", " ").trim(); // "[^a-zA-Z]+" matches runs of non-letter characters
                String[] sa = str.split(" ");
                for (int j = 0; j < sa.length; j++, n++) {
                    String te = sa[j];
                    // Every word of paragraph 1 is a possible start; in paragraph 2
                    // only words that are already reachable continue the chain.
                    if (p == 0 || p == 1 && use[n]) {
                        int next = n + te.length();
                        use[next] = true;
                        last = Math.max(last, next);
                    }
                    if (p == 2 && use[n]) {
                        strset.add(te.toLowerCase()); // a reachable word in paragraph 3 is an outcome
                    }
                }
            }
            System.out.println("Scenario #" + i + ":");
            if (last >= n) {
                strset.add("-outside-"); // some chain jumps past the third paragraph
            }
            if (strset.size() > 3) {
                System.out.println("-too many-");
            } else {
                for (String s : strset) {
                    System.out.println(s);
                }
            }
            System.out.println();
        }
    }
}
1. The first sentence can be ignored. Analyzing from the second sentence: it says that every rank in this suit also appears in the other suits, which leaves ♠ and ♦. The third sentence rules out 2 and 7, because those ranks appear in both suits. Now the fourth sentence: since ♠ still has multiple candidates left, only ♦J makes the answer knowable.
https://nrich.maths.org/1855 | ### Lesser Digits
How many positive integers less than or equal to 4000 can be written down without using the digits 7, 8 or 9?
### Mini-max
Consider all two digit numbers (10, 11, . . . ,99). In writing down all these numbers, which digits occur least often, and which occur most often ? What about three digit numbers, four digit numbers and so on?
### The Codabar Check
This article explains how credit card numbers are defined and the check digit serves to verify their accuracy.
# Six Times Five
##### Age 11 to 14, Challenge Level
How many six-digit numbers are there which DO NOT contain a $5$?
https://www.coursehero.com/file/51237323/David-Morin-Probability-for-the-Enthusiastic-Beginner-CreateSpace-2016pdf/ | David Morin - Probability for the Enthusiastic Beginner-CreateSpace (2016).pdf - PROBABILITY For the Enthusiastic Beginner David Morin Harvard
# David Morin - Probability for the Enthusiastic Beginner-CreateSpace (2016).pdf
Unformatted text preview: PROBABILITY For the Enthusiastic Beginner. David Morin, Harvard University. © David Morin 2016. ISBN-10: 1523318678, ISBN-13: 978-1523318674. Printed by CreateSpace. Additional resources located at: ˜ djmorin/book.html

Contents

Preface vii

1 Combinatorics 1
1.1 Factorials 2
1.2 Permutations 3
1.3 Ordered sets, repetitions allowed 7
1.4 Ordered sets, repetitions not allowed 12
1.5 Unordered sets, repetitions not allowed 14
1.6 What we know so far 20
1.7 Unordered sets, repetitions allowed 21
1.8 Binomial coefficients 29
  1.8.1 Coins and Pascal's triangle 29
  1.8.2 (a + b)^n and Pascal's triangle 31
  1.8.3 Properties of Pascal's triangle 33
1.9 Summary 35
1.10 Exercises 35
1.11 Problems 36
1.12 Solutions 41

2 Probability 57
2.1 Definition of probability 57
2.2 The rules of probability 59
  2.2.1 AND: The "intersection" probability, P(A and B) 60
  2.2.2 OR: The "union" probability, P(A or B) 68
  2.2.3 (In)dependence and (non)exclusiveness 71
  2.2.4 Conditional probability 73
2.3 Examples 75
  2.3.1 The art of "not" 75
  2.3.2 Picking seats 76
  2.3.3 Socks in a drawer 79
  2.3.4 Coins and dice 81
  2.3.5 Cards 83
2.4 Four classic problems 85
  2.4.1 The Birthday Problem 85
  2.4.2 The Game-Show Problem 87
  2.4.3 The Prosecutor's Fallacy 90
  2.4.4 The Boy/Girl Problem 93
2.5 Bayes' theorem 97
2.6 Stirling's formula 106
2.7 Summary 108
2.8 Exercises 109
2.9 Problems 109
2.10 Solutions 114

3 Expectation values 133
3.1 Expectation value 133
3.2 Variance 141
3.3 Standard deviation 146
3.4 Standard deviation of the mean 150
3.5 Sample variance 155
3.6 Summary 163
3.7 Exercises 164
3.8 Problems 165
3.9 Solutions 168
4 Distributions 182
4.1 Discrete distributions 182
4.2 Continuous distributions 184
  4.2.1 Motivation 184
  4.2.2 Probability density 186
  4.2.3 Probability equals area 189
4.3 Uniform distribution 191
4.4 Bernoulli distribution 192
4.5 Binomial distribution 193
4.6 Exponential distribution 196
  4.6.1 Discrete case 196
  4.6.2 Rates, expectation values, and probabilities 199
  4.6.3 Continuous case 202
4.7 Poisson distribution 207
  4.7.1 Discrete case 207
  4.7.2 Continuous case 209
4.8 Gaussian distribution 215
4.9 Summary 221
4.10 Exercises 222
4.11 Problems 222
4.12 Solutions 227

5 Gaussian approximations 250
5.1 Binomial and Gaussian 250
5.2 The law of large numbers 256
5.3 Poisson and Gaussian 260
5.4 Binomial, Poisson, and Gaussian 263
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277 277 280 285 291 294 297 300 305 305 310 313 317 318 318 320 Appendices 7.1 Appendix A: Subtleties about probability 7.2 Appendix B: Euler’s number, e . . . . . . 7.2.1 Definition of e . . . . . . . . . . 7.2.2 Raising e to a power . . . . . . . 7.2.3 The infinite series for e x . . . . . 7.2.4 The slope of e x . . . . . . . . . . 7.3 Appendix C: Approximations to (1 + a) n 7.4 Appendix D: The slope of e x . . . . . . . 7.4.1 First derivation . . . . . . . . . . 7.4.2 Second derivation . . . . . . . . . 7.5 Appendix E: Important results . . . . . . 7.6 Appendix F: Glossary of notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335 335 339 339 341 344 346 346 350 350 353 356 359 Preface This book is written for high school and college students learning about probability for the first time. Most of the book is very practical, with a large number of concrete examples and worked-out problems. However, there are also parts that are a bit theoretical (at least for an introductory book), with many mathematical derivations. All in all, if you are looking for a book that serves as a quick reference, this may not be the one for you. But if you are looking for a book that starts at the beginning and derives everything from scratch in a comprehensive manner, then you’ve come to the right place. In short, this book will appeal to the reader who has a healthy level of enthusiasm for understanding how and why the standard results of probability come about. Probability is a very accessible (and extremely fun!) subject, packed with challenging problems that don’t require substantial background or serious math. The examples in Chapter 2 are a testament to this. Of course, there are plenty of challenging topics in probability that do require a more formal background and some heavy-duty math. This will become evident in Chapters 4 and 5 (and the latter part of Chapter 3). However, technically the only math prerequisite for this book is a comfort with algebra. Calculus isn’t relied on, although there are a few problems that do involve calculus. These are marked clearly. All of the problems posed at the ends of the chapters have solutions included. The difficulty is indicated by stars; most problems have two stars. One star means plug and chug, while three stars mean some serious thinking. Be sure to give a solid effort when solving a problem, and don’t look at the solution too soon. If you can’t solve a problem right away, that’s perfectly fine. Just set it aside and come back to it later. It’s better to solve a problem later than to read the solution now. If you do eventually need to look at a solution, cover it up with a piece of paper and read one line at a time, to get a hint to get started. Then set the book aside and work things out for real. That way, you can still (mostly) solve it on your own. You will learn a great deal this way. 
If you instead head right to the solution and read it straight through, you will learn very little.

For instructors using this book as the assigned textbook for a course, a set of homework exercises is posted at ˜ djmorin/book.html. A solutions manual is available to instructors upon request. When sending a request, please point to a syllabus and/or webpage for the course.

The outline of this book is as follows. Chapter 1 covers combinatorics, which is the study of how to count things. Counting is critical in probability, because probabilities often come down to counting the number of ways that something can happen. In Chapter 2 we dive into actual probability. This chapter includes a large number of examples, ranging from coins to cards to four classic problems presented in Section 2.4. Chapter 3 covers expectation values, including the variance and standard deviation. A section on the "sample variance" is included; this is rather mathematical and can be skipped on a first reading. In Chapter 4 we introduce the concept of a continuous distribution and then discuss a number of the more common probability distributions. In Chapter 5 we see how the binomial and Poisson distributions reduce to a Gaussian (or normal) distribution in certain limits. We also discuss the law of large numbers and the central limit theorem. Chapter 6 is somewhat of a stand-alone chapter, covering correlation and regression. Although these topics are usually found in books on statistics, it makes sense to include them here, because all of the framework has been set. Chapter 7 contains six appendices. Appendix C deals with approximations to (1 + a)^n which are critical in the calculations in Chapter 5, Appendix E lists all of the main results we derive in the book, and Appendix F contains a glossary of notation; you may want to refer to this when starting each chapter.

A few informational odds and ends: This book contains many supplementary remarks that are separated off from the main text; these end with a shamrock, "♣." The letters N, n, and k generally denote integers, while x and t generally denote continuous quantities. Upper-case letters like X denote a random variable, while lower-case letters like x denote the value that the random variable takes. We refer to the normal distribution by its other name, the "Gaussian" distribution. The numerical plots were generated with Mathematica. I will sometimes use "they" as a gender-neutral singular pronoun, in protest of the present failing of the English language. And I will often use an " 's" to indicate the plural of one-letter items (like 6's on dice rolls). Lastly, we of course take the frequentist approach to probability in this introductory book.

I would particularly like to thank Carey Witkov for meticulously reading through the entire book and offering many valuable suggestions. Joe Swingle provided many helpful comments and sanity checks throughout the writing process. Other friends and colleagues whose input I am grateful for are Jacob Barandes, Sharon Benedict, Joe Blitzstein, Brian Hall, Theresa Morin Hall, Paul Horowitz, Dave Patterson, Alexia Schulz, and Corri Taylor.

Despite careful editing, there is essentially zero probability that this book is error free (as you can show in Problem 4.16!). If anything looks amiss, please check the webpage ˜ djmorin/book.html for a list of typos, updates, additional material, etc. And please let me know if you discover something that isn't already posted. Suggestions are always welcome.
David Morin
Cambridge, MA

Chapter 1
Combinatorics

TO THE READER: This book is available as both a paperback and an eBook. I have made a few chapters available on the web, but it is possible (based on past experience) that a pirated version of the complete book will eventually appear on file-sharing sites. In the event that you are reading such a version, I have a request: If you don't find this book useful (in which case you probably would have returned it, if you had bought it), or if you do find it useful but aren't able to afford it, then no worries; carry on. However, if you do find it useful and are able to afford the Kindle eBook (priced below \$10), then please consider purchasing it (available on Amazon). If you don't already have the Kindle reading app for your computer, you can download it free from Amazon. I chose to self-publish this book so that I could keep the cost low. The resulting eBook price of around \$10, which is very inexpensive for a 350-page math book, is less than a movie and a bag of popcorn, with the added bonus that the book lasts for more than two hours and has zero calories (if used properly!). – David Morin

Combinatorics is the study of how to count things. By "things" we mean the various combinations, permutations (different orderings), subgroups, and so on, that can be formed from a given set of objects/people/etc. For example, how many different outcomes are possible if you flip a coin four times? How many different full-house hands are there in poker? How many different committees of three people can be chosen from five people? What if we additionally designate one person as the committee's president? Knowing how to count these types of things is critical for an understanding of probability, because when calculating the probability of a given event, we often need to count the number of ways that the event can happen.

The outline of this chapter is as follows. In Section 1.1 we introduce the concept of factorials, which are ubiquitous in the study of probability. In Section 1.2 we learn how to count the number of possible permutations (orderings) of a set of objects. Section 1.3 covers the number of possible combined outcomes of a repeated experiment, where each repetition has an identical set of possible results. Examples include rolling dice and flipping coins. In Section 1.4 we learn how to count the number of subgroups that can be formed from a given set of objects, where the order within the subgroup matters. An example is choosing a committee of people in which all of the positions are distinct. Section 1.5 covers the related question of the number of subgroups that can be formed from a given set of objects, where the order within the subgroup doesn't matter. An example is a poker hand; the order of the cards in the hand is irrelevant. We find that the answer takes the form of a binomial coefficient. In Section 1.6 we summarize the various results we have found so far. We discover that one result is missing from our counting repertoire, and we remedy this in Section 1.7. In Section 1.8 we look at the binomial coefficients in more detail. After learning in this chapter how to count all sorts of things, we'll see in Chapter 2 how the counting can be used to calculate probabilities. It's usually a trivial step to obtain a probability once you've counted the relevant things, so the work we do here will prove well worth it.
1.1 Factorials

Before getting into the discussion of actual combinatorics, we first need to look at a certain quantity that comes up again and again. This quantity is called the factorial. We'll see throughout this chapter that when dealing with a situation that involves an integer N, we often need to consider the product of the first N integers. This product is called "N factorial," and it is denoted by "N!".[1] For the first few integers, we have:

1! = 1,
2! = 1 · 2 = 2,
3! = 1 · 2 · 3 = 6,
4! = 1 · 2 · 3 · 4 = 24,
5! = 1 · 2 · 3 · 4 · 5 = 120,
6! = 1 · 2 · 3 · 4 · 5 · 6 = 720.    (1.1)

As N increases, N! gets very large very fast. For example, 10! = 3,628,800, and 20! ≈ 2.43 · 10^18. In Chapter 2 we will introduce an approximation to N! called Stirling's formula. This formula makes it clear what we mean by the statement, "N! gets very large very fast."

We should add that 0! is defined to be 1. Of course, 0! doesn't make much sense, because when we talk about the product of the first N integers, it is understood that we start with 1. Since 0 is below this starting point, it is unclear what 0! actually means. However, there is no need to try too hard to make sense of it, because as we'll see below, if we simply define 0! to be 1, then a number of formulas turn out to be very nice.

[1] I don't know why someone long ago picked the exclamation mark for this notation. But just remember that it has nothing to do with the more common grammatical use of the exclamation mark for emphasis. So try not to get too excited when you see "N!"!

Having defined N!, we can now start counting things. With the exception of the result in Section 1.3, all of the main results in this chapter involve factorials.

1.2 Permutations

A permutation of a set of objects is a way of ordering them. For example, if we have three people – Alice, Bob, and Carol – then one permutation of them is Alice, Bob, Carol. Another permutation is Carol, Alice, Bob. Another is Bob, Alice, Carol. It turns out that there are six permutations in all, as we will see below. The goal of this section is to learn how to count the number of possible permutations. We'll do this by starting off with the very simple case where we have only one object. Then we'll consider two objects, then three, and so on, until we see a pattern. The route we take here will be a common one throughout this book: Although many of the results can be derived in a few lines of reasoning, we'll take the longer route where we start with a few...
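(An added aside, not part of the book text: the factorial growth quoted in Section 1.1 is easy to check numerically.)

```python
# Quick numerical check of the factorial values quoted in Section 1.1.
import math

for n in (6, 10, 20):
    print(n, math.factorial(n))
# 6 720
# 10 3628800
# 20 2432902008176640000  (about 2.43 * 10**18)
```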
| 2020-09-19 13:21:35 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9526846408843994, "perplexity": 43.98601863821409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400191780.21/warc/CC-MAIN-20200919110805-20200919140805-00618.warc.gz"}
https://datascience.stackexchange.com/questions/19840/rs-mice-imputation-alternative-in-python | # R's mice imputation alternative in Python
What is Python's alternative to missing data imputation with mice in R? Imputation using median/mean seems pretty lame, I'm looking for other methods of imputation, something like randomForest.
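One concrete option (an added sketch, not from the original thread): scikit-learn's `IterativeImputer` is explicitly modeled on R's mice and accepts an arbitrary regressor, e.g. a random forest. The toy array below is made up for illustration.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (required to unlock the class)
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [np.nan, 8.0, 9.0],
              [10.0, 11.0, 12.0]])

# Each feature with missing values is regressed on the others, round-robin,
# similar in spirit to mice with a random-forest method.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    max_iter=10, random_state=0)
print(imputer.fit_transform(X))
```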
• You might want to take a look at fancyimpute – chainD Jun 30 '17 at 3:19 | 2020-10-22 03:52:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3840792775154114, "perplexity": 3660.0261800036133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878879.33/warc/CC-MAIN-20201022024236-20201022054236-00222.warc.gz"} |
https://mnmeconomics.wordpress.com/category/micro-concepts/perfect-competition/
## The profits of a competitive firm
In the long run when a market is perfectly competitive, all firms will make zero profit, but in the short run firms can make profits (this is what attracts new firms into the market and increases the market supply).
As the firm cannot influence the price by setting its level of output, it just receives a price P on every unit it sells. So its total revenue will be $TR=PQ$ and its marginal revenue will be $MR = \frac{d(TR)}{dQ} = P$.
So in a competitive market price equals marginal revenue.
To maximise its profit, a firm sells where its marginal revenue equals its marginal cost, so in a competitive market, a firm sells where price equals marginal cost.
Let’s consider a firm in a competitive market which faces an inverse demand function of $P=20$
The firm has fixed costs of $FC=2000$ and variable costs of $VC=0.4q^{1.5}$
So its total cost is $TC = FC + VC = 2000 + 0.4q^{1.5}$.
Its marginal cost is $MC = \frac{d(TC)}{dq} = 0.6q^{0.5}$.
It sets its production level where P=MC, so
$20 = 0.6q^{0.5} \Rightarrow 33.333 = q^{0.5} \Rightarrow q = 1111.111$
What profits does it earn at this point?
Profit is equal to total revenue minus total cost, so
$\pi = TR - TC = Pq - FC - VC = 20(1111.111) - 2000 - 0.4(1111.111)^{1.5} = 5407.407$
Notice that the marginal cost is not affected by the amount of the fixed cost. If the firm had say, fixed costs of 4000, then it would still produce an output of 1111.111, it would just find that its overall profits were down to 3407.407.
The level of fixed cost affects the overall profits, but it does not affect the overall profit maximising amount.
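A quick numerical cross-check of this example (an aside in Python, not part of the original post):

```python
# Maximize profit pi(q) = P*q - FC - VC(q) with P = 20, FC = 2000,
# VC = 0.4 * q**1.5, and compare with the hand calculation above.
from scipy.optimize import minimize_scalar

P, FC = 20.0, 2000.0
profit = lambda q: P * q - FC - 0.4 * q ** 1.5

res = minimize_scalar(lambda q: -profit(q), bounds=(1, 10_000), method="bounded")
print(round(res.x, 3), round(profit(res.x), 3))  # ~1111.111 and ~5407.407
```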
## The price taking firm
In a perfectly competitive market, the firm is a price-taker, it cannot influence the market price through the quantity it produces. In practice this means the firm is so small in proportion to the overall market that it has no market power, so it can sell any quantity it is able to produce at the market price.
The overall market price is determined by the market supply (provided by all the firms in the market) and the market demand (demanded by all the consumers in the market). The firm can sell any quantity it can produce at the market price, so even though the market is likely to face a downward sloping market demand curve, the individual firm faces a horizontal market demand curve at the market price. Because the firm can sell as much as it can produce at the market price, the marginal revenue for each unit sold is equal to the price, and the average revenue is also equal to the price: every unit sells at the same price, so when you divide the total revenue by the number of units sold, you get the price.
The price-taking firm will observe two rules:
Marginal output rule – the firm will produce at an output where the price is equal to the marginal cost of production (MR = MC, and here P = MR so P = MC)
Shutdown rule – the firm will shut down if the average revenue is lower than the average cost at all output levels, so as the price equals average revenue, it will shut down if the price is lower than the average total cost at all levels.
We can look at this in terms of graphs:
This is the market supply and market demand. The equilibrium market price is P1.
This is the firm’s supply and demand graph. The firm’s supply curve is the marginal cost of production, and it faces a horizontal demand curve at the market price of P1. So it produces quantity q1, the point where price equals marginal cost. Here the firm is able to make some profit $\pi$ because at point q1 the average revenue (the price) is greater than the average total cost. This is a short run situation, because when other people see that there are profits to be made in the industry, it will attract the entry of new firms to the market.
The arrival of the new firms expands the market supply to S2, which drives down the equilibrium market price to P2.
At P2, the price has been driven to the level where the firm produces at point q2, where the price is equal to the marginal cost and the average total cost. At this point because the average revenue (price) is equal to the average cost, there is zero profit. So now you reach an equilibrium point. No new firms will enter the industry as there are no profits to be made, firms are just breaking even. But no firms will leave the industry as they are not making losses and are not in the shutdown position. Remember that perfect competition assumes that the firms are identical and face identical cost functions.
So firms in a perfectly competitive market can make profits in the short run, but will make zero profit in the long run.
## What makes a market competitive?
The idea of perfect competition is like the Holy Grail in economics, many economic models start from the premise of perfect competition as a fundamental assumption, which is pretty unrealistic. But it is really important to understand perfect competition because it is the centrepiece of anything to do with markets in microeconomics.
A perfectly competitive market will have these four characteristics:
Sellers are price takers – each seller is sufficiently small in relation to the overall market that they can't influence the market price by their own production decisions. Because of this no firm believes that it can influence the behaviour of other firms. This isn't the case with other forms of market structure in which there is some element of market power: a firm with a large market share can influence the market price by varying the level of output it chooses to produce.
Buyers are price takers – each buyer is sufficiently small in relation to the overall market that they can’t influence the market price by the amount they consume.
Sellers do not engage in strategic behaviour – when a firm makes its own output decisions, it does not take into consideration the response of other firms (as it doesn’t expect them to change their behaviour as a result of their decisions).
Firms can enter and exit the market freely – there are no barriers to entry such as prohibitive start up costs or difficulties obtaining licences to produce.
In order for these four characteristics to be present, you will usually need to have:
A large number of sellers and buyers – so that the first two assumptions hold, no individual can influence the market price.
Highly substitutable goods – if one seller reduced the price, consumers would switch away from the other firms, because the good in question is easily substitutable from one firm to another. This will not happen if the firms can differentiate between their brands, ie a firm with market power may be able to get away with charging a higher price than a rival and still get sales, because consumers prefer that brand. But if they are producing something which is basically identical (eg ball bearings of standard size) then people will just buy from the firm that sells it cheapest. This characteristic pushes the price down to the lowest level at which firms can still cover their costs of producing the good.
Buyers must have full information – if a firm were to raise its price, consumers would know that rival firms sell it cheaper and could switch away to them. They have to have full information available about the alternatives. | 2017-01-19 12:51:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38815993070602417, "perplexity": 788.905452778493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00175-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://gsebsolutions.com/gseb-solutions-class-9-maths-chapter-12-ex-12-2/ | # GSEB Solutions Class 9 Maths Chapter 12 Heron’s Formula Ex 12.2
Gujarat Board GSEB Solutions Class 9 Maths Chapter 12 Heron’s Formula Ex 12.2 Textbook Questions and Answers.
## Gujarat Board Textbook Solutions Class 9 Maths Chapter 12 Heron’s Formula Ex 12.2
Question 1.
A park in the shape of a quadrilateral ABCD, has ∠C = 90°, AB = 9 m, BC = 12 m, CD = 5 m and AD = 8 m. How much area does it occupy?
Solution:
Join BD.
Area of right triangle BCD
= $$\frac {1}{2}$$ x base x height
= $$\frac {1}{2}$$ x 5 x 12 = 30 m²
In right triangle BCD,
BD² = BC² + CD² (By Pythagoras Theorem)
= (12)² + (5)² = 144 + 25 = 169
⇒ BD = $$\sqrt{169}$$ = 13 m
For ΔABD
a = 13m, b = 8m, c = 9m
∴ s = $$\frac {a + b + c}{2}$$
s = $$\frac {13 + 8 + 9}{2}$$ = $$\frac {30}{2}$$ = 15m
∴ Area of the ΔABD
= $$\sqrt{s(s-a)(s-b)(s-c)}$$
= $$\sqrt{15(15-13)(15-8)(15-9)}$$
= $$\sqrt{15(2)(7)(6)}$$
= $$\sqrt{(3 \times 5)(2)(7)(2 \times 3)}$$
= 3 x 2$$\sqrt{35}$$ = 6$$\sqrt{35}$$ m²
= 6 x 5.916 = 35.5 m² (approx.)
∴ Area of the quadrilateral ABCD
= Area of ΔBCD + Area of ΔABD
= 30 m² + 35.5 m²
= 65.5 m² (approx.)
Hence, the park occupies an area of 65.5 m² (approx.).
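(An added aside, not part of the textbook solution: the same numbers can be verified in a few lines of Python.)

```python
# Heron's formula: area = sqrt(s(s-a)(s-b)(s-c)), with s the semi-perimeter.
import math

def heron(a, b, c):
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

area_bcd = 0.5 * 5 * 12       # right triangle BCD
area_abd = heron(13, 8, 9)    # triangle ABD
print(round(area_bcd + area_abd, 1))  # 65.5 (square metres)
```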
Question 2.
Find the area of a quadrilateral ABCD in which AB = 3 cm, BC = 4 cm, CD = 4 cm, DA = 5 cm and AC = 5 cm.
Solution:
For ΔABC
a = 4 cm, b = 5 cm, c = 3 cm
∴ a² + c² = b²
ΔABC is right angled with ∠B = 90°
∴ Area of right triangle ABC
= $$\frac {1}{2}$$ x base x height
= $$\frac {1}{2}$$ x 3 x 4 = 6 cm²
For ΔACD
a = 4 cm, b = 5 cm, c = 5 cm
s = $$\frac {a + b + c}{2}$$ = $$\frac {4 + 5 + 5}{2}$$ = $$\frac {14}{2}$$ = 7cm
∴ Area of the ACD
= $$\sqrt{s(s-a)(s-b)(s-c)}$$
= $$\sqrt{7(7-4)(7-5)(7-5)}$$
= $$\sqrt{7(3)(2)(2)}$$ = 2$$\sqrt{21}$$ cm²
= 2 x 4.6 cm² (approx.)
= 9.2 cm² (approx.)
∴ Area of the quadrilateral ABCD
= Area of ΔABC + Area of ΔACD
= 6 cm² + 9.2 cm²
= 15.2 cm² (approx.)
Question 3.
Radha made a picture of an aeroplane with colored paper as shown in figure. Find the total area of the paper used.
Solution:
For triangular Area-I
a = 5 cm, b = 5cm, c = 1cm
s = $$\frac {a + b + c}{2}$$ = $$\frac {5 + 5 + 1}{2}$$ = $$\frac {11}{2}$$ = 5.5 cm
∴ Area-I = $$\sqrt{s(s-a)(s-b)(s-c)}$$
= $$\sqrt{5.5(5.5-5)(5.5-5)(5.5-1)}$$
= $$\sqrt{5.5(.5)(.5)(4.5)}$$
= $$(.5) \sqrt{(5.5)(4.5)}$$
= $$(.5) \sqrt{(.5)(11)(.5)(9)}$$
= (.5) (.5) (3) $$\sqrt{11}$$
= 0.75 $$\sqrt{11}$$ = 0.75(3.3) (approx.)
= 2.5 cm² (approx.)
Area-II = 6.5 x 1 = 6.5 cm²
For Area-III
Height of an equilateral triangle = $$\frac{\sqrt{3}}{2}$$ a = $$\frac{\sqrt{3}}{2}$$(1)
∴ h = $$\frac{\sqrt{3}}{2}$$ cm
Area of trapezium
= $$\frac {1}{2}$$ x (1 + 2) x $$\frac{\sqrt{3}}{2}$$
= $$\frac{3 \sqrt{3}}{4}$$ = $$\frac {3}{4}$$ x 1.732
= 1.3 cm² (approx.)
Area-IV = $$\frac{6 \times 1.5}{2}$$ = 4.5 cm²
Area-V = $$\frac{6 \times 1.5}{2}$$ = 4.5 cm²
∴ Total area of the paper used
= Area-I + Area-II + Area-III + Area-IV + Area-V
= 2.5 cm² + 6.5 cm² + 1.3 cm² + 4.5 cm² + 4.5 cm²
= 19.3 cm².
Question 4.
A triangle and a parallelogram have the same base and the same area. If the sides of the triangle are 26 cm, 28 cm, and 30 cm, and the parallelogram stands on the base 28 cm, find the height of the parallelogram.
Solution:
For triangle
a = 26 cm, b = 28 cm, c = 30 cm
s = $$\frac{a+b+c}{2}$$
s = $$\frac{26+28+30}{2}$$ = $$\frac{84}{2}$$ = 42 cm
∴ Area of the triangle
= $$\sqrt{s(s-a)(s-b)(s-c)}$$
= $$\sqrt{42(42-26)(42-28)(42-30)}$$
= $$\sqrt{42(16)(14)(12)}$$
= $$\sqrt{(6 \times 7)(16)(7 \times 2)(6 \times 2)}$$
= 6 x 4 x 7 x 2 = 336 cm².
Let the height of the parallelogram be h cm.
Then, area of the parallelogram = Base x Height = 28 x h cm²
According to the question,
28h = 336 ⇒ h = $$\frac{336}{28}$$
⇒ h = 12 cm
Hence, the height of the parallelogram is 12 cm.
Question 5.
A rhombus-shaped field has green grass for 18 cows to graze. If each side of the rhombus is 30 m and its longer diagonal is 48 m, how much area of grass field will each cow be getting?
Solution:
For ΔABC
a = 30m, b = 48m, c = 30m
s = $$\frac{a+b+c}{2}$$ = $$\frac{30+48+30}{2}$$ = $$\frac{108}{2}$$ = 54 m
∴ Area of ΔABC
= $$\sqrt{s(s-a)(s-b)(s-c)}$$
= $$\sqrt{54(54-30)(54-48)(54-30)}$$
= $$\sqrt{54(24)(6)(24)}$$
= $$\sqrt{(9 \times 6)(24)(6)(24)}$$ = 3 x 6 x 24 = 432 m²
∴ Area of the rhombus
= 2 x area of ΔABC
= 2 x 432 = 864 m²
∴ Area of grass for 18 cows = 864 m²
∴ Area of grass for 1 cow
= $$\frac{864}{18}$$ m² = 48 m²
Question 6.
An umbrella is made by stitching 10 triangular pieces of cloth of two different colours (see figure), each piece measuring 20 cm, 50 cm and 50 cm. How much cloth of each colour is required for the umbrella?
Solution:
For one triangular piece
a = 20 cm,b = 50cm, c = 50 cm
s = $$\frac{a+b+c}{2}$$
= $$\frac{20+50+50}{2}$$
= $$\frac{120}{2}$$
= 60cm
∴ Area of one triangle
= $$\sqrt{s(s-a)(s-b)(s-c)}$$
= $$\sqrt{60(60-20)(60-50)(60-50)}$$
= $$\sqrt{60(40)(10)(10)}$$ = 200$$\sqrt{6}$$ cm²
∴ Area of 5 triangles of one colour
= 5(200$$\sqrt{6}$$) cm² = 1000$$\sqrt{6}$$ cm²
Hence, 1000$$\sqrt{6}$$ cm² of cloth of each colour is required for the umbrella.
Question 7.
A kite in the shape of a square with a diagonal 32 cm and an isosceles triangle of base 8 cm and side 6 cm each is to be made of three different shades as shown in figure. How much paper of each shade has been used in it?
Solution:
Area of paper of shade-I = 2 x $$\left(\frac{1}{2} \times 16 \times 16\right)$$ = 256 cm²
Similarly, area of paper of shade-II = 256 cm²
For area of paper of shade-III
a = 8cm, b = 6cm, c = 6cm
s = $$\frac{a + b + c}{2}$$ = $$\frac{8 + 6 + 6}{2}$$ = 10 cm
∴ Area of paper of shade-III
= $$\sqrt{s(s-a)(s-b)(s-c)}$$
= $$\sqrt{10(10-8)(10-6)(10-6)}$$
= $$\sqrt{10(2)(4)(4)}$$ = 8$$\sqrt{5}$$ = 17.89 cm².
Question 8.
A floral design on a floor is made up of 16 tiles which are triangular, the sides of the triangle being 9 cm, 28 cm and 35 cm. Find the cost of polishing the tiles at the rate of 50 paise per cm2.
Solution:
For one tile
a = 9cm, b = 28cm, c = 35cm
s = $$\frac{a + b + c}{2}$$ = $$\frac{9 + 28 + 35}{2}$$ = 36 cm
∴ Area of one tile = $$\sqrt{s(s-a)(s-b)(s-c)}$$
= $$\sqrt{36(36-9)(36-28)(36-35)}$$
= $$\sqrt{36(27)(8)(1)}$$
= $$\sqrt{36(9 \times 3)(4 \times 2)}$$
= 6 x 3 x 2$$\sqrt{6}$$ = 36$$\sqrt{6}$$ cm²
∴ Area of 16 tiles
= 36$$\sqrt{6}$$ x 16 = 576$$\sqrt{6}$$ cm²
∴ Cost of polishing the tiles at the rate of 50 paise per cm²
= 576$$\sqrt{6}$$ x 50 p = ₹ $$\frac{576 \sqrt{6} \times 50}{100}$$
= ₹ 288$$\sqrt{6}$$ = ₹ 705.60.
Question 9.
A field is in the shape of a trapezium whose parallel sides are 25 m and 10 m. The nonparallel sides are 14 m and 13 m. Find the area of the field.
Solution:
Let the given field be in the shape of a trapezium ABCD in which AB = 25 m, CD = 10 m, BC = 13 m and AD = 14 m. From D, draw DE ∥ BC meeting AB at E. Also, draw DF ⊥ AB.
∴ DE = BC = 13 m
AE = AB – EB = AB – DC
= 25 – 10 = 15 m
For ΔAED
a = 14 m, b = 13 m, c = 15 m
s = $$\frac{a + b + c}{2}$$ = $$\frac{14 + 13 + 15}{2}$$ = $$\frac{42}{2}$$ = 21 m
∴ Area of the ΔAED
= $$\sqrt{s(s-a)(s-b)(s-c)}$$
= $$\sqrt{21(21-14)(21-13)(21-15)}$$
= $$\sqrt{21(7)(8)(6)}$$
= $$\sqrt{(7 \times 3)(7)(4 \times 2)(2 \times 3)}$$
= 7 x 3 x 2 x 2 = 84 m²
∴ $$\frac{1}{2}$$ x AE x DF = 84
⇒ $$\frac{1}{2}$$ x 15 x DF = 84
⇒ DF = $$\frac{84 \times 2}{15}$$
⇒ DF = $$\frac{56}{5}$$ = 11.2 m
∴ Height of the trapezium is 11.2 m
Area of parallelogram EBCD = Base x Height
= EB x DF = 10 x $$\frac{56}{5}$$ = 112 m²
∴ Area of the field = Area of ΔAED + Area of parallelogram EBCD
= 84 m² + 112 m² = 196 m². | 2022-05-22 17:47:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.52352374792099, "perplexity": 2034.6163511876816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545875.39/warc/CC-MAIN-20220522160113-20220522190113-00404.warc.gz"}
http://math.stackexchange.com/questions/59813/bounds-on-off-diagonal-entries-of-a-correlation-matrix | Bounds on off-diagonal entries of a correlation matrix
Assume that all the entries of an $n \times n$ correlation matrix which are not on the main diagonal are equal to $q$. Find upper and lower bounds on the possible values of $q$.
I know that the matrix should be positive semidefinite but how to proceed to get the upper and lower bounds?
Thanks!
Do you know anything else about correlation matrices, other than positive semidefinite? Anything special about their form, how they are calculated? – Gerry Myerson Aug 26 '11 at 4:02
Bottom line: $-1/(n-1) \le q \le 1$. – Michael Hardy Aug 29 '11 at 15:40
Since it's a correlation matrix, the diagonal entries are equal to 1 and the off-diagonal entries are in $[-1,1]$. Now write the matrix as $aP+bQ$ where $P$ is the $n\times n$ matrix in which every entry is $1/n$, so it's the matrix of the orthogonal projection onto the line where all components of the vector are equal, and $Q = I - P$. Then you can exploit the fact that $P$ and $Q$ are complementary orthogonal projections onto spaces of dimensions $1$ and $n-1$. From that it follows that the matrix $aP+bQ$ can be diagonalized as $$\begin{bmatrix} a \\ & b \\ & & b \\ & & & b \\ & & & & \ddots \end{bmatrix}$$ This should be a covariance matrix. To see that, recall that (1) a correlation matrix is a covariance matrix in which the diagonal entries are all 1, and (2) if $A$ is the matrix of covariances of a random vector $X$, then $MAM^\top$ is the matrix of covariances of $MX$ ($M$ need not generally be a square matrix, but in this case it is).
Since the diagonal matrix above is a covariance matrix, $a$ and $b$ cannot be negative. So what must $q$ be in order that $a$ and $b$ be nonnegative?
BTW, a simple instance in which one of the two opposite extreme cases is realized is where $(X_1,\dots,X_n) = (0,0,\ldots,0,1,0,\ldots,0,0)$ with a $1$ in the $i$th place, with probability $1/n$ for each value of $i$. Clearly $\operatorname{corr}(X_i,X_j)$ is $1$ if $i=j$ and is negative if $i\neq j$, and close to $0$ if $n$ is large. – Michael Hardy Aug 26 '11 at 17:55
Is the case included, that all offdiagonal entries are $\small -1$ ? If I approach that value from above, say going to $\small -1+1e-80$ , the cholesky-decompositions begin to show much increasing values. So I suggest to check carefully whether all values can be $\small -1$ – Gottfried Helms Aug 26 '11 at 19:06
They cannot be $-1$ except in the simplest non-vacuous special case, that $n=2$. The lower bound is negative, but nowhere near $-1$. It depends on $n$. – Michael Hardy Aug 26 '11 at 19:20
@GottfriedHelms I just posted an answer showing that at least one off-diagonal correlation must have value $-1/(n-1)$ or more, and so all off-diagonal values being $-1$ is not a possibility except for the trivial case $n=2$. – Dilip Sarwate Feb 22 '13 at 17:49
@dilip: thank you for the notification! – Gottfried Helms Feb 22 '13 at 18:16
A general scheme for the answer is immediately obvious by generalization of the following example. Assume the correlation matrix $R$ of size $n \times n$ where in the example $n=5$ and $R=L \cdot L^T$. Then define $L$ with an unknown value $a$ $$L=\begin{bmatrix} a&a&a&a&.&.&.&.&.&. \\ -a&.&.&.&a&a&a&.&.&. \\ .&-a&.&.&-a&.&.&a&a&. \\ .&.&-a&.&.&-a&.&-a&.&a \\ .&.&.&-a&.&.&-a&.&-a&-a \\ \end{bmatrix}$$ Then all off-diagonal entries in $R=L \cdot L^T$ are $r_{k,j}=-a^2$ and the diagonal entries are $r_{k,k}=4 a^2$. To have $r_{k,k}=1$ we must have $a=\sqrt{1 \over 4}$ and thus $q = r_{k,j}=-{1 \over 4}$.
It is immediately obvious how this is generalized, so for some $n$ we have $q=-{1 \over n-1}$
Unfortunately, this is only an illustrative example so far. It would be nice to show that this is indeed the most negative possible value of $q$, but I do not see at the moment how this could be done in a similarly obvious manner ...
Consider $n$ unit-variance random variables $X_1, X_2, \ldots X_n$ with the property that $\operatorname{cov}(X_i,X_j) = q$ for all $i \neq j$. Then, the covariance matrix of these random variables is the same as the correlation matrix. Now \begin{align*} \operatorname{var}(X_1+X_2+\cdots+X_n) &= \sum_{i=1}^n \operatorname{var}(X_i) + 2\sum_{i=1}^n\sum_{j=i+1}^n\operatorname{cov}(X_i,X_j)\tag{1}\\ &= n + n(n-1)q\\ &\geq 0 \end{align*} and so it must be that $$q \geq -\frac{1}{n-1}$$ as Michael Hardy noted in a succinct comment on the question. The upper bound is, of course, $q \leq 1$. Both bounds are achievable. Obviously, if all the $X_i$ are the same random variable $X$, then $q = 1$. For the lower bound, suppose that the $X_i$ are independent unit-variance random variables so that they enjoy the desired constant correlation with $q=0$. For each $i$, set $Y_i = X_i-\bar{X}$ where $$\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i.$$ Then, $$\operatorname{var}(Y_i) = \left(\frac{n-1}{n}\right)^2 + (n-1)\left(\frac{1}{n}\right)^2 = \frac{n-1}{n}$$ while for $i \neq j$, \begin{align} \operatorname{cov}(Y_i,Y_j) &= \operatorname{cov}(X_i - \bar{X}, X_j- \bar{X})\\ &= \operatorname{cov}(X_i,X_j) - \operatorname{cov}(X_i,\bar{X}) - \operatorname{cov}(X_j,\bar{X})+ \operatorname{var}(\bar{X})\\ &= 0 - \frac{1}{n} - \frac{1}{n} + \frac{1}{n}\\ &= -\frac{1}{n} \end{align} showing that all the correlation coefficients do indeed have the minimum value $$\frac{-1/n}{\sqrt{(n-1)/n}\sqrt{(n-1)/n}} = -\frac{1}{n-1}.$$
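A numerical illustration of the bound (an addition to the thread; the closed-form eigenvalues follow from the $aP+bQ$ decomposition in the earlier answer):

```python
# The constant-correlation matrix R = (1-q) I + q J (unit diagonal,
# off-diagonal q) has eigenvalues 1 + (n-1) q (once) and 1 - q
# (n-1 times), so it is PSD exactly when -1/(n-1) <= q <= 1.
import numpy as np

def corr_matrix(n, q):
    return (1 - q) * np.eye(n) + q * np.ones((n, n))

n = 5
for q in (-1 / (n - 1), -1 / (n - 1) - 1e-3, 0.5, 1.0):
    lam_min = np.linalg.eigvalsh(corr_matrix(n, q)).min()
    print(f"q = {q:+.4f}  min eigenvalue = {lam_min:+.2e}")
```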
Returning to $(1)$, note that if the correlation coefficients are not required to all have the same value, then from $(1)$, we get that the sum of the $n(n-1)$ correlations must be at least $-n$. Thus, the average of the $n(n-1)$ correlations is at least $-1/(n-1)$ and since at least one correlation must be as large as the average, we can assert that
In any collection of $n$ random variables $X_1, X_2, \ldots, X_n$ with finite variance, there must be at least one pair of random variables $(X_i,X_j)$ (with $i\neq j$) for which $$\operatorname{corr}(X_i,X_j) \geq -\frac{1}{n-1}$$ | 2014-03-08 12:53:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9895466566085815, "perplexity": 141.40845587811498}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999654453/warc/CC-MAIN-20140305060734-00086-ip-10-183-142-35.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/519646/sequence-of-lebesgue-measurable-sets-a-k-such-that-a-k-subset0-1-lim-l | # sequence of Lebesgue measurable sets $A_k$ such that $A_k\subset[0,1]$, $\lim \lambda(A_k)=1$, but $\lim \inf A_k=\emptyset$.
Give an example in $\mathbb{R}$ of a sequence of Lebesgue measurable sets $A_k$ such that $A_k\subset[0,1]$, $\lim \lambda(A_k)=1$, but $\lim \inf A_k=\emptyset$.
My thoughts: By definition, $\lim\inf A_k=\cup_{n=1}^{\infty}\cap_{k\ge n}A_k$, and we want this to be empty. Maybe we can construct a sequence of sets such that $\lambda(A_k)=1-1/k$, but $\cap_{k\ge n}A_k=\emptyset$ for all $n$.
This was mentioned here but was unanswered: Fatou's lemma and measurable sets
First note that $\liminf A_{k}=\emptyset$ is equivalent to the statement that given any $x\in[0,1]$ and any $n\in\mathbb{N}$ there is $n_{0}\in\mathbb{N}$ s.t. $n_{0}>n$ and $x\notin{A_{n_{0}}}$.
Using this we can come up with a sequence using the following "pattern": $A_{1}=[0,\frac{1}{2}]$, $A_{2}=[\frac{1}{2},1]$, $A_{3}=[0, \frac{1}{3}] \cup[\frac{1}{3},\frac{2}{3}]$, $A_{4}=[0,\frac{1}{3}] \cup[\frac{2}{3},1]$, $A_{5}=[\frac{1}{3},\frac{2}{3}] \cup[\frac{2}{3},1]$, and so on, which has the desired properties.
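(A small added check that staged patterns of this kind work: at the stage that splits $[0,1]$ into $m$ pieces, each set has measure $1-\frac{1}{m}$, so $\lambda(A_k)\to 1$; yet every $x\in[0,1]$ lies in the omitted piece of some set at every stage, so $x\notin A_k$ for infinitely many $k$, and hence $\liminf A_k=\emptyset$.)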
• Wait, so what does $[2/3, 1/3]$ mean? – Christmas Bunny Oct 9 '13 at 4:39
• That was a typo, sorry. It has been corrected. – UserB1234 Oct 9 '13 at 15:37
Let $$A_1 = [0,1] \setminus [0,\frac12]$$ and $$A_2 = [0,1] \setminus [\frac12,1]$$.
Then let $$A_3 = [0,1] \setminus [0,\frac14]$$ and $$A_4 = [0,1] \setminus [\frac14,\frac12]$$ and $$A_5 = [0,1] \setminus [\frac12,\frac34]$$ and $$A_6 = [0,1] \setminus [\frac34,1]$$.
Then let $$A_7 = [0,1] \setminus [0,\frac18]$$ and ... and $$A_{14} = [0,1] \setminus [\frac78,1]$$.
And so on. | 2021-06-24 04:12:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9859731793403625, "perplexity": 125.10058318462752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488550571.96/warc/CC-MAIN-20210624015641-20210624045641-00464.warc.gz"} |
https://www.nature.com/articles/s41598-022-06315-3
# Improving robustness of automatic cardiac function quantification from cine magnetic resonance imaging using synthetic image data
## Abstract
Although having been the subject of intense research over the years, cardiac function quantification from MRI is still not a fully automatic process in the clinical practice. This is partly due to the shortage of training data covering all relevant cardiovascular disease phenotypes. We propose to synthetically generate short axis CINE MRI using a generative adversarial model to expand the available data sets that consist of predominantly healthy subjects to include more cases with reduced ejection fraction. We introduce a deep learning convolutional neural network (CNN) to predict the end-diastolic volume, end-systolic volume, and implicitly the ejection fraction from cardiac MRI without explicit segmentation. The left ventricle volume predictions were compared to the ground truth values, showing superior accuracy compared to state-of-the-art segmentation methods. We show that using synthetic data generated for pre-training a CNN significantly improves the prediction compared to only using the limited amount of available data, when the training set is imbalanced.
## Introduction
Cardiovascular disease is the leading cause of death globally, according to the World Health Organization. Cardiovascular magnetic resonance imaging (MRI) is considered the gold standard for evaluating heart function. Estimating the ventricular end-systolic (ESV) and end-diastolic (EDV) volumes, stroke volume (SV) and ejection fraction (EF) from cardiac MRI is a prerequisite for assessing cardiovascular diseases, and typically requires careful and precise contouring of the ventricles.
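For readers unfamiliar with these quantities, a minimal illustration of how they relate (added here; the sample values are hypothetical, not patient data):

```python
# Stroke volume and ejection fraction from the end-diastolic and
# end-systolic volumes (both in ml).
def ejection_fraction(edv_ml, esv_ml):
    sv = edv_ml - esv_ml        # stroke volume (ml)
    return 100.0 * sv / edv_ml  # ejection fraction (%)

print(ejection_fraction(120.0, 50.0))  # ~58.3%, a typical normal value
```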
Deep learning (DL) is predicted to bring substantial change to how cardiovascular MRI is acquired and analyzed [1]. The gradual adoption of DL to solve medical image analysis tasks has spawned hundreds of articles addressing the automatic segmentation of cardiac chambers from MRI [2], including several segmentation challenges organized by societies such as MICCAI [3] and Kaggle [4]. For example, Bai et al. [5] proposed a deep learning segmentation approach using a fully convolutional network (FCN). Liao et al. [6] also proposed a deep learning segmentation approach using a modified FCN called Hypercolumns Fully Convolutional Neural Network (HFCN), where features from different levels are concatenated along the channel axis. DL algorithms are increasing their performance thanks to the larger annotated datasets available, such as the UK Biobank [7], but data with ground-truth segmentations is typically not sufficiently representative of cardiovascular disease phenotypes, scanners, sequences, and protocols, which limits generalizability. Moreover, experts do not always agree on the precise contour location, as captured by the reduced inter-observer reproducibility of manual contours [8], and corrections are still routinely required [3].
Data augmentation is routinely used in training DL models for medical imaging to increase and diversify the training data set, but is often limited to affine transformations and noise addition, which cannot generate cases with diverse clinical and scan parameters. In recent years, there has been a growing interest in DL for synthetic data generation, notably starting with Generative Adversarial Networks (GAN) [9], which can map a random noise vector to a synthetically generated image. A major disadvantage of GAN is the lack of control over the generated images, which was mitigated with the introduction of conditional GANs [10]. Style transfer DL architectures (CycleGAN [11], Pix2Pix [12]) convert an input image from one domain to another, by modifying the style, while preserving the content. Unsupervised style transfer has been applied from standard CINE MRI to LGE [13] and CT [14], but with limited application to cardiovascular pathologies. The main drawback of style transfer is the need for a large set of annotated images from at least one domain that is representative of all cardiac anatomy phenotypes. Semantic image synthesis approaches (mask-to-image translation) map one or more segmentation masks to a corresponding image, i.e. the opposite of segmentation networks. GauGAN [15] is a novel approach using a Spatially Adaptive Normalization (SPADE) [16] technique, which is a combination of batch normalization and instance normalization, implemented as a two-layer CNN. The network produces realistic, completely new images, thus introducing more shape, texture, and background variations than conventional computer vision-based augmentation techniques. In one cardiac MRI application, Abbasi-Sureshjani et al. [17] used a GauGAN network to synthesize labeled 3D + t CINE images. The usage of synthetic data has previously been shown to improve deep-learning based segmentation models when little training data is available [18].
Other AI approaches focus on direct cardiac function quantification through regression, without producing an aggregated segmentation of the structure of interest. Luo et al. [19] proposed a DL regression approach based on a multi-scale atlas for the left ventricle (LV) location and a deep Convolutional Neural Network (CNN). One benefit of regression methods is that they can incorporate training data where only the EDV and ESV values are available, e.g., from a radiology report, without requiring ground-truth segmentation masks, which are challenging and costly to obtain.
In this work, we investigate automatic cardiac function quantification as a regression task. Our first contribution is a Residual Spatial Feature Encoding Recurrent network for Abstracting high-level patient features (SFERA) to predict left ventricle volumes (and implicitly the EF) without explicit segmentation. The network combines a fully convolutional feature encoder that learns the cardiac geometry with a recurrent network based on a bidirectional LSTM [20] that incorporates the volumetric information over a stack with a variable number of short-axis slices. To train our proposed regression network, a large dataset with a wide and dense distribution of ground truth EDV and ESV values would be required to ensure accurate and robust performance across the entire continuum of values. We hypothesize that synthetically generated cardiac MRI can substantially improve the performance of our regression model. To show this, our second contribution is a DL approach based on the GauGAN [15] architecture, to synthetically generate short axis (SAX) cardiac MRI stacks with a wide range of EF values, to be more representative of real-world clinical cases. The SFERA network was pre-trained on the large synthetically generated dataset, and then finetuned on real cases. Our final EF prediction error is comparable to, or slightly smaller than, that of other state-of-the-art methods.
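To make this architecture family concrete, the following is an illustrative PyTorch sketch of a per-slice CNN encoder feeding a bidirectional LSTM over the slice stack with a regression head; it is a simplified stand-in with invented layer sizes, not the exact SFERA network.

```python
import torch
import torch.nn as nn

class SliceStackRegressor(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(              # per-slice feature encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim))
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True,
                           bidirectional=True)     # aggregate along slice axis
        self.head = nn.Linear(2 * hidden, 2)       # predict (EDV, ESV)

    def forward(self, x):                          # x: (batch, slices, 1, H, W)
        b, s = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1)).view(b, s, -1)
        out, _ = self.rnn(feats)
        return self.head(out.mean(dim=1))

vols = SliceStackRegressor()(torch.randn(2, 9, 1, 128, 128))
print(vols.shape)  # torch.Size([2, 2]); EF follows as (EDV - ESV) / EDV
```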
## Results
### Synthetic image generation
Figure 1a,b shows the normal distribution of the EF parameter in the two large datasets. In the original datasets, the reported EF was reduced (< 40%) in only 6.3% of the cases and high (> 70%) in only 10.5% of the cases. For a small to moderate training data size, this data imbalance can lead to suboptimal results for the pathological cases, i.e. an AI algorithm trained on such data distributions may perform poorly on the less represented low or high EF cases. Hence, by automatically processing the segmentation masks of our real training subjects, we synthetically generated 22,653 new SAX stacks consisting of ED and ES masks for the left and right ventricles with a uniform distribution along the LV EF spectrum, as shown in Fig. 1c. Using a deep-learning network adversarially trained on real patient data for mask-to-image generation, the synthetic masks were used to generate the same number of synthetic cardiac MR subject datasets. Figure 2 shows the entire workflow for generating new synthetic slices with a wide range of EF values, starting from a mid-ventricular slice of a real subject, as an example. For more details see the "Methods" section. The resulting synthetic cohort was approximately 32 × larger than the real subject cohort. Figure 3 shows three example synthetic subjects generated using the proposed approach.
### Cardiac function prediction
The baseline results, obtained by training our proposed SFERA network for cardiac function prediction solely on real case data with a normal EF distribution are referred to as Real Subjects Only (RSO). The same network architecture trained entirely on synthetic data with a uniform EF distribution is referred to as Synthetic Subjects Only (SSO). The SSO model finetuned on real cases is referred to as Real Subjects with Pretraining (RSP).
The Real Subjects All (RSA) experiment represents the same network architecture, but trained only on real data from both datasets (without finetuning).
Figure 4 shows the correlation between the manually annotated and the automatically predicted LV volumes and EF for the models with and without pretraining. The Pearson correlation values corresponding to RSO experiment (without pretraining) for EF, EDV and ESV are 78.7%, 91.1% and 94.0% ($$p$$ < 0.001) for Dataset 1 and 81.5%, 94.8%, 92.1% ($$p$$ < 0.001) for Dataset 2, as shown in Fig. 4 top. In the RSP experiment (with pretraining), the Pearson correlation values for EF, EDV and ESV increased to 95.0%, 98.0% and 98.1% ($$p$$ < 0.001) for Dataset 1, and 86.2%, 97.1%, 94.6% ($$p$$ < 0.001) for Dataset 2, as shown in Fig. 4 bottom.
Figure 5 shows the Bland–Altman analysis for the volumes and the EF predictions on our two test sets, for the experiments trained on real cases without and with pretraining. In both cases no bias was observed. The mean RMS error in the RSO experiment for the EF was 7.1% for Dataset 1 and 3.7% for Dataset 2. In the RSP experiment, the root mean squared error (RMSE) was significantly reduced to 3.7% for Dataset 1 and 3.2% for Dataset 2 ($$p$$ < 0.005). Similarly, the RMSE was significantly reduced from 23.7 to 11.2 ml ($$p$$ < 0.005) for Dataset 1 and from 11.0 to 8.4 ml for Dataset 2 for EDV. For ESV, the RMSE was reduced from 12.6 to 7.9 ml ($$p$$ < 0.005) for Dataset 1 and from 8.1 to 6.7 ml for Dataset 2.
The mean absolute error (MAE) in the RSO experiment for the EF was 4.9% for Dataset 1 and 2.8% for Dataset 2. In the RSP experiment, the mean absolute error was significantly reduced to 2.7% for Dataset 1 and 2.5% for Dataset 2 ($$p$$ < 0.005). Similarly, the mean absolute error was significantly reduced from 16.8 to 7.3 ml ($$p$$ < 0.005) for Dataset 1 and from 8.0 to 6.2 ml for Dataset 2 for EDV. For ESV, the mean absolute error was reduced from 9.0 to 5.2 ml ($$p$$ < 0.005) for Dataset 1 and from 5.6 to 4.7 ml for Dataset 2. The 95% confidence intervals of the MAE for EF, computed using bootstrapping, are [2.0, 2.1] in the SSO experiment, [4.4, 5.3] and [2.7, 2.9] for the RSO experiment on Dataset 1 and Dataset 2, and [2.4, 2.9] and [2.4, 2.6] for the RSP experiment on Dataset 1 and Dataset 2.
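A sketch of the kind of case-resampling bootstrap typically used for such confidence intervals (added for illustration; the arrays below are synthetic placeholders, not the study data):

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.normal(60, 10, size=200)         # placeholder ground-truth EF (%)
y_pred = y_true + rng.normal(0, 3, size=200)  # placeholder predictions

def bootstrap_mae_ci(y_true, y_pred, n_boot=10_000, alpha=0.05):
    n = len(y_true)
    maes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample cases with replacement
        maes[b] = np.abs(y_true[idx] - y_pred[idx]).mean()
    return np.quantile(maes, [alpha / 2, 1 - alpha / 2])

print(bootstrap_mae_ci(y_true, y_pred))
```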
Table 1 compares the RMSE of EDV, ESV, and EF prediction, for the RSO, RSA, and RSP experiments with our proposed approach, the results of the winning team [21] of the Kaggle challenge (based on the mean Continuous Ranked Probability Score (CRPS) [22] metric), and the results of the top 4 team (which had the lowest RMSE for EF in the competition). Namely, the winning team Luo et al. [21] obtained a 0.00948 CRPS [22] score, which is the equivalent of 12.0 ml RMS error for EDV, 10.2 ml for ESV and 4.9% ejection fraction. The smallest ejection fraction error, 4.7, was obtained by the top 4 team Liao et al. [6], even though their RMSE for the volumes is slightly bigger. We also compared our results with a previously published state-of-the-art approach on Dataset 2 [5].
We additionally show that while a large pretraining dataset improves the prediction, the potential for improvement is bounded. We could reach a similar accuracy using only a random 50% of the available synthetic data (RMSE 3.8) compared to using the full dataset (RMSE 3.7). Selecting 50% of our synthetic data such that it has the same distribution as the original Dataset 1 lead to a similar result (RMSE 3.9). However, when considering only the test subjects with a reduced EF < 40%, the model pretrained on synthetic data with a normal distribution of the EF parameter had a lower error compared to the model pretrained on data with the same EF distribution as the original Dataset 1 (RMSE 3.0 vs. 4.2).
The inference time on a desktop computer with the following hardware configuration: Intel® Core™ i7-7700 K CPU @ 4.20 GHz, NVIDIA GeForce GTX 1080 Ti graphics card, 64 GB RAM was around 5.5 ± 4.3 ms.
## Discussion
Our initial RSO model trained only on real data is not able to reach the same performance as state-of-the-art DL segmentation approaches on the same dataset. By addressing automatic cardiac volume computation as a regression task, we introduce more sensitivity to the distribution of the cardiac volumes over the training data than in a classic image segmentation-based setting. We observed that having a wide and dense distribution of values in the training set is crucial for achieving good accuracy across the entire range of values.
Our RSP model, first pretrained on synthetic data, by far outperforms the baseline RSO model trained only on real data. The EF prediction error decreases significantly when synthetic data is used for pretraining. Similarly, the Pearson correlation for the EDV, ESV, and EF is significantly higher for RSP compared to RSO. Pre-training has a high impact especially for cases with low or very high EF values, which had a low density in the initial distribution.
The RSA model, which was jointly trained on Dataset 1 and Dataset 2 and evaluated on the two test sets, has an improved performance compared to the RSO model, indicating that having more data overall improves the results. However, since combining the datasets does not lead to a wide and dense distribution of the ejection fraction values, the performance is inferior when compared to the RSP scenario where synthetic data with a quasi-uniform ground truth value distribution is employed for pre-training. Hence, performing pretraining on a large dataset where the EF is uniformly distributed is preferred to using a large dataset that preserves the EF imbalance of the original data.
Our final prediction model after pretraining on synthetic data (RSP) performs well compared to other state-of-the-art approaches. Since the original ground-truth of the Kaggle challenge test set is not publicly available, our results on Dataset 1 were based on our own manual segmentation of the CINE MRI data, so they are not directly comparable to the Kaggle challenge results. Nevertheless, our model shows very promising performance emphasized by a tight confidence interval.
A main benefit of our first contribution, the SFERA network for determining the EDV and ESV through regression, is that we can use training data where only the cardiac volumes and ejection fraction are provided as ground truth, without the need for a segmentation mask. Finetuning the network on a new dataset acquired with a different scanner, imaging protocol, or including new pathologies is often necessary when adapting a DL model to routine clinical data. In this setting, the EDV and ESV values could be more easily obtained in practice, for example from a radiology report, compared to full segmentation contours. More specifically, when finetuning on Dataset 2, our network only uses the EDV and ESV values. Nevertheless, our performance is close to a state-of-the-art segmentation approach trained on the segmentation masks. The main reason why the performance of the SFERA model does not improve more after pretraining on synthetic data is that Dataset 2 contains mostly healthy subjects, with an ejection fraction in the range 50–60%. Thus, adding synthetic data from a wider range of ejection fractions in this case does not have such a large positive impact overall.
The main disadvantage of our first contribution is that the result of the SFERA network is more difficult to confirm without contours present, compared to a segmentation network. However, regression approaches could potentially serve as a verification step for a segmentation network, to help increase confidence in the final measurement when dealing with uncertainty. Another potential application is to filter out normal cases that do not require further precise quantification, which could save reading time. Hybrid approaches may employ an ensemble that combines different segmentation and regression solutions to improve the accuracy of the combined result23. For example, depending on how the basal slices are subjectively handled in manual vs. automatic contouring, segmentation-based approaches may introduce notable differences in the EF in some cases. Figure 6 shows two sample subjects from Dataset 1 with overlaid manually annotated and automatic contours obtained using a state-of-the-art cardiac segmentation prototype24. For both subjects, the EF values predicted using the proposed method (70% and 31%, respectively) are similar to the EF values computed from the manually segmented contours (66% and 32%, respectively). The automatic segmentation algorithm inaccurately segments the base and apex at ES, and therefore the EF predictions obtained with the proposed approach are closer to the ground truth than the EF obtained by automatic segmentation (76% and 42%, respectively).
An advantage of our second contribution, namely the image synthesis approach, is that we are able to generate realistic-looking cardiac anatomy including papillary muscles and trabeculations inside the blood pool, which could then be used for pre-training. The synthetic data may also include small image artefacts, different image sharpness and varying contrast, similar to the original dataset used for training, which contribute to the realistic aspect. These synthetic cases thus reliably serve in the pre-training step for the ventricle volume and EF prediction task.
One limitation of our image synthesis approach is that the network was trained on individual 2D frames. This causes the image background to be somewhat inconsistent between ED and ES and for consecutive slices of the same case. As shown in Fig. 3, the background may not always be anatomically accurate because no segmentation of the background structures was included when training the GauGan network. Nevertheless, the background generally captures the diaphragm, abdominal structures, lungs and chest wall, as well as the familiar texture expected from MRI, making it suitable for pretraining. In future work, we plan to extend the approach to generate consistent 3D volumes.
Our proposed image synthesis DL network also requires an initial segmentation of the training data to generate new synthetic patients. The need for manual segmentation could be circumvented by using an autoencoder25, a direction we will investigate further. Another limitation is that the ED and ES frames need to be preselected as input to the volume prediction network. However, this task could also be performed by an independent neural network trained to automatically identify ED and ES timepoints from a CINE series, such as the one in Ref.26.
In general, while Dataset 2 contains mostly healthy subjects, Dataset 1 does contain some examples of unspecified cardiovascular pathologies, but the precise disease labeling has not been made publicly available. However, this data is still not sufficiently representative of commonly imaged cardiovascular diseases such as cardiomyopathies, dyssynchrony, akinetic or dyskinetic wall segments, or apical aneurysms. Our proposed image synthesis network could, in principle, be trained on data where such pathologies are well represented to produce more diverse synthetic cases.
In conclusion, we showed that generating synthetic training data with machine learning can be a powerful tool for improving results of deep learning pipelines, especially when only unbalanced, scarce data is available. In this work, we considered the task of automatically predicting the ventricle volumes from Cardiac MRI as a regression problem and we proposed a custom regression network (SFERA) to tackle this challenge. We have demonstrated that pretraining on a large synthetic dataset with a uniform distribution of the ejection fraction greatly improves the prediction compared to only using the limited amount of available data. To show this, we devised a two-step methodology: first, we generate synthetic data with a uniform distribution of EF values, by using a computer vision-based algorithm for generating binary masks and adopting a mask-to-image network. In the second phase, we pre-trained a neural network only on synthetic data, then finetuned it on the real cases. This methodology was demonstrated using two different datasets, with accurate results compared to the state-of-the-art. The same image synthesis approach is generalizable to other medical image analysis tasks where the distribution of the available training data is insufficiently representative, or the amount of data is scarce.
## Methods
### Data
The Kaggle Data Science Bowl Cardiac Challenge Data4 [Dataset 1] consists of CINE bSSFP cardiac MRI including a short-axis (SAX) stack which was used for ventricular volume quantification. This dataset is publicly available4. The data was acquired with 8–10 mm slice thickness, spatial resolution between 0.61–1.95 mm × 0.61–1.95 mm, and approximately 30 cardiac frames per slice, at 1.5 and 3 T (MAGNETOM Aera and Skyra, Siemens Healthcare, Erlangen, Germany). The average distance between consecutive SAX slices was 9.8 ± 0.6 mm. Since the segmentation masks used to generate the EDV and ESV values used as ground truth in the competition were not made publicly available, the entire dataset was re-annotated by an expert observer. All individual ED and ES frames were manually identified, and the LV and right ventricle (RV) were manually contoured. The annotations were used to compute ground truth values for the ED and ES LV volumes. Subjects with fewer than 5 consecutive SAX slices or with significant motion artefacts were excluded from the training and validation subsets. 491 subject datasets were used for training, 187 for validation, and the remaining 440 (the same test set as in the original challenge) were reserved for testing.
A second independent dataset was publicly available from the UK Biobank Resource7 [Dataset 2]. CINE bSSFP cardiac MR data was acquired using a standard protocol27. The SAX stack spanning from the apex to the base of the left ventricle was acquired with 8 mm slice thickness, a spatial resolution ranging between 1.8–2.1 mm × 1.8–2.1 mm, and a 31 ms temporal resolution at 1.5 T (MAGNETOM Aera, Siemens Healthcare, Erlangen, Germany). The average slice distance was 8.89 ± 0.88 mm. Ground truth annotation of the LV and RV was obtained through manual segmentation of the end-systolic (ES) and end-diastolic (ED) phases by an expert observer. 3975 subjects were used for training, 300 for validation, and the remaining 412 were reserved for testing.
The data was resampled to 1 × 1 mm spatial resolution, cropped to 150 × 150 pixels around the image center and the image intensity values were normalized to the [3%, 97%] quantiles.
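A minimal sketch of this preprocessing pipeline follows (our reading of the description above; the interpolation order and the exact crop/pad behaviour are assumptions):

```python
import numpy as np
from scipy.ndimage import zoom

def center_crop_pad(img, size=150):
    """Crop or zero-pad an image symmetrically to size x size pixels."""
    out = np.zeros((size, size), dtype=np.float32)
    h, w = img.shape
    sy, sx = max((h - size) // 2, 0), max((w - size) // 2, 0)   # source offsets
    dy, dx = max((size - h) // 2, 0), max((size - w) // 2, 0)   # destination offsets
    ch, cw = min(h, size), min(w, size)
    out[dy:dy + ch, dx:dx + cw] = img[sy:sy + ch, sx:sx + cw]
    return out

def preprocess(frame, spacing_mm=(1.8, 1.8)):
    frame = zoom(frame.astype(np.float32), spacing_mm, order=1)  # resample to 1 x 1 mm
    frame = center_crop_pad(frame, 150)                          # 150 x 150 around the centre
    lo, hi = np.quantile(frame, [0.03, 0.97])                    # [3%, 97%] intensity window
    return np.clip((frame - lo) / (hi - lo + 1e-8), 0.0, 1.0)
```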
### Synthetic image generation
The right approach for synthetic data generation depends on several factors: availability of annotated data, desired quality of the synthetic data, reproducibility, and the amount of control over the characteristics of the generated data (e.g. class label, the size and deformation of the structures). Herein, we describe a semantic image synthesis algorithm, capable of fully controlling the size and location of the resulting anatomical structures to obtain synthetic subjects with different EF values.
We adapted a state-of-the-art DL network architecture for mask-to-image translation, GauGAN15, to the task of generating synthetic ED and ES image frames of a cardiac SAX image stack, while fully controlling the volume and ejection fraction of the LV. The generator consists of multiple SPADE blocks and the discriminator is a simple convolutional neural network. The loss function is computed from three weighted terms: a multiscale adversarial loss and two feature matching losses (one using the discriminator and the other using a pretrained network).
We first trained the synthetic image generation network using the training subset of Dataset 1, consisting of CINE MR images and manually annotated segmentation masks with three labels for the LV, RV, and myocardium. The network was trained using the deterministic approach introduced in Ref.15, where only the segmentation mask is used as input. Park et al.15 also suggest a latent space vector to adjust the appearance of the produced synthetic images. However, in our experiments, using a latent space resulted in less realistic images, so we decided to use the strictly deterministic approach. The number of epochs used to train the image synthesis model was chosen empirically based on subjective visual assessment of the generated images.
Next, we generated an extended dataset of synthetic masks to be used as input for the GauGAN model. For this, we used as starting point the segmentation masks in the Dataset 1 training subset. First, we perform for all slices an interpolation on (ED, ES) mask pairs, and return a number of $$\mathrm{F}=11$$ intermediate interpolated masks computed as follows:
$$IM=\frac{\alpha}{F}\times{SDT}_1+\left(1-\frac{\alpha}{F}\right)\times{SDT}_2$$
(1)
where IM represents the interpolated mask, $${SDT}_1$$ and $${SDT}_2$$ represent the signed distance transform masks of ED and ES, and $$\alpha \in (0, F)$$. Pairs of (ED, interpolated ED) and (interpolated ED, ES) masks are used to create synthetic cases with reduced EF. In the second step, we use an affine transformation $$\gamma$$ to rescale the ED and ES masks, such that anatomical structures become smaller at ES and larger at ED. Thirteen uniformly distributed sample values of $$\gamma$$ over the interval [0.7, 1) are used for rescaling the ES mask, leading to a smaller LV and implicitly a smaller volume. The same number of samples is used for the ED mask, but covering the interval [1, 1.2), resulting in a larger LV for the ED mask and an increased EF for the case. The values of $$\alpha$$ and $$\gamma$$ were chosen empirically.
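A small sketch of the mask-interpolation step of Eq. (1), assuming Euclidean signed distance transforms and a zero-level-set threshold to return from distances to binary masks (the $$\gamma$$ rescale would be an analogous affine zoom of the mask):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance transform: positive inside the structure, negative outside."""
    mask = mask.astype(bool)
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def interpolate_masks(mask_ed, mask_es, F=11):
    """Intermediate masks per Eq. (1): IM = (alpha/F)*SDT_ED + (1 - alpha/F)*SDT_ES."""
    sdt_ed, sdt_es = signed_distance(mask_ed), signed_distance(mask_es)
    return [((alpha / F) * sdt_ed + (1 - alpha / F) * sdt_es > 0).astype(np.uint8)
            for alpha in range(1, F)]
```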
The synthetically generated masks contained the same number of slices as the real cases used as a starting point. The EDV and ESV for the synthetic subjects were computed using Simpson's rule, assuming a constant slice thickness of 8 mm and no gaps between slices. The EF is computed from the resulting volumes as:
$$EF = \frac{\left(EDV - ESV\right)}{EDV}.$$
(2)
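For concreteness, here is a sketch of the volume and EF computation described above ("Simpson's rule" in the cardiac-MRI sense of summing slice areas times slice thickness); the pixel size and mask format are assumptions:

```python
def lv_volume_ml(lv_masks, pixel_mm=1.0, slice_mm=8.0):
    """Sum of per-slice LV areas times slice thickness, converted from mm^3 to ml.

    lv_masks: list of binary NumPy arrays (1 inside the LV blood pool).
    """
    area_mm2 = sum(float(m.sum()) * pixel_mm ** 2 for m in lv_masks)
    return area_mm2 * slice_mm / 1000.0

def ejection_fraction(edv_ml, esv_ml):
    return (edv_ml - esv_ml) / edv_ml   # Eq. (2)
```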
Finally, we applied the trained image synthesis model described above to the previously generated extended set of synthetic masks with a uniform EF distribution to generate the synthetic CINE MR images. Three synthetically generated SAX stacks can be seen in Fig. 3.
The resulting 22,653 synthetic cases were split into 16,491 synthetic cases for training the SFERA network for cardiac function prediction, and 6162 for validation for the pretraining step in the RSP experiment. To assess the importance of a uniform EF distribution in the pretraining dataset, we selected a subset of 8245 synthetic cases (50% of the available pretraining data) such that the EF distribution was similar to the original Dataset 1 shown in Fig. 1. We also randomly selected another subset of 8245 cases with a uniform EF distribution. We then compared the performance of the models pretrained on these two subsets with the model pretrained on all available synthetic data.
### Cardiac function prediction
We designed a custom deep neural network capable of processing a stack of CINE MR slices of variable length, which outputs both EDV and ESV, further used to compute the EF. The architecture of the SFERA network is shown in Fig. 7. The network input is a SAX stack of a varying number of slices, each consisting of one ED and one ES frame concatenated along the channel axis. A 2D residual CNN is employed in the first layers for every (ED, ES) pair. The CNN is built from five residual blocks; every block consists of multiple 2D convolutional layers, ReLU activation functions, Batch Normalization28 and Max Pooling layers. The first convolutional layer outputs 32 channels, and this parameter doubles in value at every convolutional block. Before feeding the resulting features to the LSTM20, they are flattened, and a linear layer is used to reduce their dimensionality to 128 elements containing spatial information. Then, a bidirectional LSTM20 network is applied to correlate the information between these feature vectors, resulting in a vector containing both spatial and temporal information. As a final step, a Bayesian ridge regressor is employed to predict the final EDV and ESV volumes. The LSTM20 approach enables the proposed model to process a variable number of slices. The training of the SFERA model is performed using the Rectified Adam optimizer and an RMSE loss function.
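Below is a rough PyTorch-style sketch of an architecture along these lines. It is not the authors' code: the channel widths, the feature-map size after pooling, and the way slice features are aggregated (here a mean over LSTM outputs, with a linear head standing in for the Bayesian ridge regressor) are all assumptions.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out))
        self.skip = nn.Conv2d(c_in, c_out, 1)   # match channels for the residual sum
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        return self.pool(torch.relu(self.body(x) + self.skip(x)))

class SferaLike(nn.Module):
    def __init__(self):
        super().__init__()
        widths = [2, 32, 64, 128, 256, 512]          # (ED, ES) pair in; widths double per block
        self.cnn = nn.Sequential(*[ResBlock(a, b) for a, b in zip(widths, widths[1:])])
        self.to_feat = nn.Linear(512 * 4 * 4, 128)   # 150 px -> 4 px after five 2x poolings
        self.lstm = nn.LSTM(128, 128, bidirectional=True, batch_first=True)
        self.head = nn.Linear(256, 2)                # (EDV, ESV); the paper uses Bayesian ridge

    def forward(self, slices):                       # slices: (n_slices, 2, 150, 150)
        f = self.to_feat(self.cnn(slices).flatten(1)).unsqueeze(0)  # (1, n_slices, 128)
        h, _ = self.lstm(f)                          # (1, n_slices, 256)
        return self.head(h.mean(dim=1)).squeeze(0)   # pooled over slices -> (edv, esv)

# volumes = SferaLike()(torch.randn(9, 2, 150, 150))  # a 9-slice stack -> tensor of shape (2,)
```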
Volume data from the stack of SAX slices is normalized by the distance between slices. We have employed a unity-based normalization to rescale the EDV and ESV values to the range [0, 1]. Only the slices between the basal and the apex planes were retained. After the inference step, the actual ventricular volume (ml) is obtained by scaling the voxel volume estimations output by the network by the original distance between slices.
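A sketch of the target scaling round-trip described here; the unity-normalization bounds are hypothetical (in practice they would come from the training set):

```python
V_MIN, V_MAX = 0.0, 60.0   # hypothetical bounds, in ml per mm of slice distance

def to_target(volume_ml, slice_gap_mm):
    """Normalize by slice distance, then unity-scale to [0, 1]."""
    return (volume_ml / slice_gap_mm - V_MIN) / (V_MAX - V_MIN)

def from_target(pred, slice_gap_mm):
    """Invert the scaling after inference to recover the volume in ml."""
    return (pred * (V_MAX - V_MIN) + V_MIN) * slice_gap_mm
```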
We used RMSE and Pearson correlation metrics to evaluate the performance of the trained model against the ground truth values for EDV, ESV and EF. The model prediction error was further investigated using Bland–Altman analysis, where the confidence interval was defined as mean ± 1.96 SD. A Kruskal–Wallis test was used to measure the statistical difference between the RMS errors obtained in the RSO and RSP experiments.
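A compact sketch of the agreement metrics used above (RMSE, Pearson r, and Bland–Altman bias with 95% limits of agreement):

```python
import numpy as np
from scipy import stats

def agreement_metrics(y_true, y_pred):
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    rmse = float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
    r, p = stats.pearsonr(y_true, y_pred)
    diff = y_pred - y_true                       # Bland-Altman differences
    bias = float(diff.mean())
    loa = 1.96 * float(diff.std(ddof=1))         # limits of agreement: mean +/- 1.96 SD
    return {"rmse": rmse, "pearson_r": float(r), "p_value": float(p),
            "bias": bias, "loa": (bias - loa, bias + loa)}
```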
## References
1. Litjens, G. et al. State-of-the-art deep learning in cardiovascular image analysis. JACC Cardiovasc. Imaging. 12, 1549–1565 (2019).
2. Chen, C. et al. Deep learning for cardiac image segmentation: A review. Front. Cardiovasc. Med. 7, 25 (2020).
3. Bernard, O. et al. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: Is the problem solved?. IEEE Trans. Med. Imaging. 37, 2514–2525 (2018).
4. The National Heart, Lung, and Blood Institute (NHLBI). Second Annual Data Science Bowl—Transforming How We Diagnose Heart Disease. s.l.: Booz Allen Hamilton, 2016. https://www.kaggle.com/c/second-annual-data-science-bowl. Accessed 27 June 2019.
5. Bai, W. et al. Automated cardiovascular magnetic resonance image analysis with fully convolutional networks. J. Cardiovasc. Magn. Reson. 20, 65 (2018).
6. Liao, F., Chen, X., Hu, X. & Song, S. Estimating the volume of the left ventricle from MRI images using deep neural networks. IEEE Trans. Cybern. 49, 495–504 (2017).
7. Sudlow, C. et al. UK biobank: An open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Med. 12, e1001779 (2015).
8. Danilouchkine, M. G., Westenberg, J. J., de Roos, A., Reiber, J. H. & Lelieveldt, B. P. Operator induced variability in cardiovascular MR: Left ventricular measurements and their reproducibility. J. Cardiovasc. Magn. Reson. 7, 447–457 (2005).
9. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 27 (2014).
10. Mirza, M., Osindero, S. Conditional Generative Adversarial Nets. (2014). arXiv preprint arXiv:1411.1784.
11. Zhu, J. Y., Park, T., Isola, P., Efros, A. A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, 2223–2232 (2017).
12. Isola, P., Zhu, J. Y., Zhou, T., Efros, A. A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1125–1134 (2017).
13. Chen, C., Ouyang, C., Tarroni, G., Schlemper, J., Qiu, H., Bai, W., Rueckert, D. Unsupervised multi-modal style transfer for cardiac MR segmentation. In International Workshop on Statistical Atlases and Computational Models of the Heart. 209–219 (Springer, 2019).
14. Ouyang, C., Kamnitsas, K., Biffi, C., Duan, J., Rueckert, D. Data efficient unsupervised domain adaptation for cross-modality image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2019).
15. Park, T., Liu, M.-Y., Wang, T.-C., Zhu, J.-Y. Semantic Image Synthesis with Spatially-Adaptive Normalization. (CVPR, 2019).
16. Github repository. NVlabs, Semantic Image Synthesis with SPADE. [Online] 2019. https://github.com/NVlabs/SPADE. Accessed 27 September 2019.
17. Abbasi-Sureshjani, S., Amirrajab, S., Lorenz, C., Weese, J., Pluim, J., Breeuwer, M. 4D semantic cardiac magnetic resonance image synthesis on XCAT anatomical model. In Medical Imaging with Deep Learning, 6–18 (PMLR, 2020).
18. Amirrajab, S., Abbasi-Sureshjani, S., Khalil, Y.A., Lorenz, C., Weese, J., Pluim, J., Breeuwer, M. Xcat-gan for synthesizing 3d consistent labeled cardiac mr images on anatomically variable xcat phantoms. In International Conference on Medical Image Computing and Computer-Assisted Intervention (2020).
19. Luo, G. et al. Multi-views fusion CNN for left ventricular volumes estimation on cardiac MR images. IEEE Trans. Biomed. Eng. 65, 1924–1934 (2017).
20. Staudemeyer, R. C., Morris, E. R. Understanding LSTM—A tutorial into Long Short-Term Memory Recurrent Neural Networks. (2019) arXiv preprint arXiv:1909.09586.
21. Luo, G., Dong, S., Wang, K., Zhang, H. Cardiac left ventricular volumes prediction method based on atlas location. In IEEE International Conference on Bioinformatics and Biomedicine (BIBM) (2016).
22. Leutbecher, M. & Haiden, T. Understanding changes of the continuous ranked probability score using a homogeneous Gaussian approximation. Q. J. R. Meteorol. Soc. 147, 2925–2942 (2021).
23. Hann, E., Biasiolli, L., Zhang, Q., Popescu, I. A., Werys, K., Lukaschuk, E., Carapella, V., Paiva, J. M., Aung, N., Rayner, J. J., Fung, K., Puchta, H., Sanghvi, M. M., Moon, N. O., Thomas, K. E., Ferreira, V. M., Petersen, S. E., Neubauer, S., Piechnik, S. K. Quality control-driven image segmentation towards reliable automatic image analysis in large-scale cardiovascular magnetic resonance aortic cine imaging. In International Conference on Medical Image Computing and Computer-Assisted Intervention. (Springer, 2019) 750–758.
24. Chitiboi, T., Georgescu, B., Wetzl, J., Borgohain, I., Geppert, C., Piechnik, S. K., Neubauer, S., Petersen, S., Sharma, P. Deep learning-based strain quantification from CINE cardiac MRI. In ISMRM Annual Meeting (2020).
25. Kozerke, T., Joyce, S. Leveraging anatomical similarity for unsupervised model learning and synthetic MR data. In ISMRM Annual Meeting (2020).
26. Kong, B., Zhan, Y., Shin, M., Denny, T., Zhang, S. Recognizing end-diastole and end-systole frames via deep temporal regression network. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 264–272 (Springer, 2016).
27. Petersen, S. E. et al. UK Biobank’s cardiovascular magnetic resonance protocol. J. Cardiovasc. Magn. Reson. 18, 8 (2015).
28. Ioffe, S., Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 448–456 (PMLR, 2015).
## Acknowledgements
This research has been conducted using the UK Biobank Resource (access application 2964). The Data Science Bowl Cardiac Challenge Data was originally provided and publicly released by the National Heart, Lung, and Blood Institute (NHLBI). Special thanks to NHLBI Intramural Investigators Dr. Michael Hansen and Dr. Andrew Arai. We also acknowledge the support of Mr. Indraneel Borgohain for data processing. This work was partly funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 825903 (euCanSHare project). SEP and AML acknowledges support from the National Institute for Health Research (NIHR) Biomedical Research Centre at Barts, from the SmartHeart EPSRC programme grant (EP/P001009/1) and the London Medical Imaging and AI Centre for Value-Based Healthcare. SEP acknowledges support from the CAP-AI programme, London’s first AI enabling programme focused on stimulating growth in the capital’s AI sector. SEP, SN and SKP acknowledge the British Heart Foundation for funding the manual analysis to create a cardiovascular magnetic resonance imaging reference standard for the UK Biobank imaging resource in 5000 CMR scans (PG/14/89/31194). This project was enabled through access to the Medical Research Council eMedLab Medical Bioinformatics infrastructure, supported by the Medical Research Council (MR/L016311/1). This work was partially supported by a grant of the Romanian Ministry of Education and Research, CNCS—UEFISCDI, project number PN-III-P1-1.1-TE-2019-1804, within PNCDI III.
## Author information
### Contributions
B.A.G., L.M.I., P.S., and T.C. made substantial contributions to the design of the machine learning approaches. P.S., A.M.L., C.S. made substantial contributions to the machine learning experiments. J.S.M., S.N., S.E.P. made substantial contributions to the data analysis and results interpretation. M.A.A.A. oversaw and validated data annotation. B.A.G., J.W., C.G., S.K.P., and T.C. drafted the manuscript. All authors were involved in critically reviewing and improving the manuscript and gave final approval of the version to be submitted.
### Corresponding author
Correspondence to Bogdan A. Gheorghiță.
## Ethics declarations
### Competing interests
BAG, LMI, PS, CS, JW, CG, MAAA, TC are employees of Siemens Healthineers (and affiliates). SEP acts as a paid consultant to Circle Cardiovascular Imaging Inc., Calgary, Canada and Servier. The other authors declare no competing financial interests.
### Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Gheorghiță, B.A., Itu, L.M., Sharma, P. et al. Improving robustness of automatic cardiac function quantification from cine magnetic resonance imaging using synthetic image data. Sci Rep 12, 2391 (2022). https://doi.org/10.1038/s41598-022-06315-3
https://calculator.academy/displacement-to-velocity-calculator/ | Enter the total displacement and the total time into the calculator to determine the velocity from displacement.
## Displacement to Velocity Formula
The following equation is used to calculate the velocity from displacement.
V = D / t
• Where V is the velocity (m/s)
• D is the displacement (m)
• t is the time (s)
To calculate velocity from displacement, simply divide the displacement by the total time.
## How to Calculate Velocity From Displacement?
Example Problem:
The following example outlines the steps and information needed to calculate velocity from displacement.
First, determine the displacement. In this example, the displacement is found to be 310m.
Next, determine the time. In this case, the time is measured to be 10 seconds.
Finally, calculate the velocity using the formula above:
V = D / t
V = 310 / 10
V = 31 m/s
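For anyone scripting this, here is the same formula as a small Python function (a sketch mirroring the calculator):

```python
def velocity(displacement: float, time: float) -> float:
    """V = D / t, in metres per second given metres and seconds."""
    return displacement / time

print(velocity(310, 10))  # 31.0 m/s, matching the example above
```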
https://www.physicsforums.com/threads/time-taken-for-sound-to-travel-between-two-moving-observers.982375/ | # Time taken for sound to travel between two moving observers
Saptarshi Sarkar
Homework Statement:
A source S of frequency ##f_0## and an observer O, moving with speeds ##v_1## and ##v_2## respectively, are moving away from each other. When they are separated by distance a (t=0), a sound pulse is emitted by the source. Suppose velocity of sound to be ##v_s## and calculate the time ##t_1## that it takes for the pulse to be received by O.
Relevant Equations:
Total distance the pulse needs to travel:
##D = a + v_1t_1##
Speed of sound pulse = ##v_s - v_2##
So,
##t_1 = \frac {a + v_1t_1} {v_s - v_2}##
But the solution should be
##t_1 = \frac a {v_s - v_2}##
I assumed the following -
1. I did not consider the frequency as the Doppler shift in frequency was not asked.
2. I did not add the distance the source moved in time ##t_1## to the total distance traveled by the wave as the pulse was emitted at t=0.
Is any of my assumptions wrong?
## Answers and Replies
Homework Helper
Gold Member
2022 Award
You could sketch the motions of source, observer and pulse on a distance-time graph.
Homework Helper
Gold Member
2022 Award
So,
##t_1 = \frac {a + v_1t_1} {v_s - v_2}##
I assumed the following -
2. I did not add the distance the source moved in time ##t_1## to the total distance traveled by the wave as the pulse was emitted at t=0.
How is that equation based on your assumption?
Are you taking ##v_2## to be the speed of the source?
Saptarshi Sarkar
Sorry, I guess I messed up the velocities. I will edit the question and add a sketch tomorrow morning.
Saptarshi Sarkar
Can't edit the question, so posting it here
Homework Statement:
A source S of frequency ##f_0## and an observer O, moving with speeds ##v_1## and ##v_2## respectively, are moving away from each other. When they are separated by distance a (t=0), a sound pulse is emitted by the source. Suppose velocity of sound to be ##v_s## and calculate the time ##t_1## that it takes for the pulse to be received by O.
Homework Equations:
Total distance the pulse needs to travel:
##D = a + v_2t_1##
Speed of sound pulse = ##v_s - v_1##
So,
##t_1 = \frac {a + v_2t_1} {v_s - v_1}##
But the solution should be
##t_1 = \frac a {v_s - v_2}##
I assumed the following -
1. I did not consider the frequency as the Doppler shift in frequency was not asked.
2. I did not add the distance the source moved in time ##t_1## to the total distance traveled by the wave as the pulse was emitted at t=0.
Is any of my assumptions wrong?
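A sketch of where the hints above lead: once the pulse is emitted at ##t = 0##, it travels at ##v_s## relative to the medium, so the source's subsequent motion (speed ##v_1##) cannot affect it. Keeping the distance equation ##D = a + v_2t_1## (which is correct) but using ##v_s## for the pulse speed gives
$$v_st_1 = a + v_2t_1 \implies t_1 = \frac{a}{v_s - v_2}$$
which matches the quoted solution. The flawed step is the pulse speed ##v_s - v_1##, not assumption 1 or 2.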
http://mathhelpforum.com/discrete-math/101866-cryptography.html | 1. ## Cryptography....
Hello...
"Z c s f f b e u a s s o x c z w u x b s n b h a o g y m q h m x z l r e c z m s i c x j k z f c x z e x q c h s k y f n w u v a y j b h t h c g f u a z t d p m t q g p r x g d s a d c y x n q t k h i v f o e b k l e b a l r j e s p b w i "
was encrypted with a (4 X 4) Hill cipher and the first 16 bits decrypted to:
" a r c h i m e d s e g o t s o e"
1) We need to find the (4 X 4 ) matrices A and B such that A.M = B, where M is the encryption matrix...
2) Use M to decrypt the entire message...
thanks for the help!
2. Originally Posted by Vedicmaths
Hello...
"Z c s f f b e u a s s o x c z w u x b s n b h a o g y m q h m x z l r e c z m s i c x j k z f c x z e x q c h s k y f n w u v a y j b h t h c g f u a z t d p m t q g p r x g d s a d c y x n q t k h i v f o e b k l e b a l r j e s p b w i "
was encrypted with a (4 X 4) Hill cipher and the first 16 bits decrypted to:
" a r c h i m e d s e g o t s o e"
1) We need to find the (4 X 4 ) matrices A and B such that A.M = B, where M is the encryption matrix...
2) Use M to decrypt the entire message...
!
The Hill Cipher was developed and published in 1929. Mr. Hill (& company) invented a hardware device -- for which he got a patent. The product did not become a killer app -- not because it had problems -- but in the 1930s Depression era, data security was not a high priority.
The Hill cipher can use matrices of 2x2, 3x3, 4x4, 5x5, 6x6, etc. for the encryption/decryption process.
In this example above, the fundamental pieces of information about the Hill cipher are given.
1) A 4x4 encryption matrix.
2) The plain text and the matching cipher text.
3) Only 26 letters in the system [using MOD 26]
Some of the information supplied:
Ciphertext= "zcsffbeuassoxczwuxbsnbhaogymqhmxzlreczmsicxjkzfcx zexqchskyfnwuvayjbhthc..."
Plaintext = "archimedsegotsoe..."
--------------
4x4 ciphertext in columns of 4 row depth
zfax
cbsc
sesz
fuow
The ciphertext letters were transformed to numeric values a=0, ..., z=25 (working mod 26).
$C = \begin{bmatrix}25 & 05 & 00 & 23 \\
02 & 01 & 18 & 02 \\
18 & 04 & 18 & 25 \\
05 & 20 & 14 & 22 \\
\end{bmatrix}$
-------------
4x4 plaintext
aist
rmes
cego
hdoe
& the associated plain text numeric matrix
$P= \begin{bmatrix}00 & 08 & 04 & 19\\17 & 12 & 18 & 18\\02 & 04 & 06 & 14\\07 & 03 & 14 & 04\\ \end{bmatrix}$
[The detailed explanation has been abridged]*
The transpose of the 4x4 ciphertext matrix was placed adjacent to the transpose of the 4x4 plaintext matrix, forming a 4-row x 8-column matrix. The ciphertext half was then row-reduced modulo 26 to reduced echelon form; once the left half becomes an identity matrix, the right half is the transpose of the inverse (decryption) matrix.
That matrix was used to decrypt the data supplied.
Something didn't work as expected.
There was a typo in the supplied PLAINTEXT data.
The 9th & 10th letters of the plain text appeared to be swapped, since the
accepted spelling of the greek math man is "ARCHIMEDES".
I changed the plaintext & recomputed. This is the resulting decryption matrix and
the supplied ciphertext decoded.
===================
A decryption matrix
$M= \begin{bmatrix}24 & 02 & 17 & 00\\ 23 & 11 & 23 & 04\\ 21 & 05 & 05 & 21\\19 & 16 & 15 & 02 \end{bmatrix}$
The decrypted message:
archimedesgotsoexcitrdbecausehesblveqtheproblemtuathnzfjjlezpnrnnazfkmyvetcxpfni
with spacing:
archimedes got so excitrd because he sblveq the problem tuat hnzfjjlezpnrnnazfkmyvetcxpfni
From this it appears that the ciphertext was not proof read when entered.
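For readers who want to reproduce the decryption, here is a small Python sketch (not from the original post). The convention it assumes (a = 0 ... z = 25, ciphertext blocks of four letters as column vectors, plaintext recovered as p = M.c mod 26) was checked against the C and P matrices above.

```python
# Decryption matrix M from the post (entries mod 26, a=0 ... z=25).
M = [
    [24,  2, 17,  0],
    [23, 11, 23,  4],
    [21,  5,  5, 21],
    [19, 16, 15,  2],
]

def hill_decrypt(ciphertext: str) -> str:
    nums = [ord(c) - ord('a') for c in ciphertext.lower() if c.isalpha()]
    out = []
    for i in range(0, len(nums) - len(nums) % 4, 4):   # full 4-letter blocks only
        block = nums[i:i + 4]                          # column vector c
        out += [chr(sum(m * b for m, b in zip(row, block)) % 26 + ord('a'))
                for row in M]                          # p = M.c (mod 26)
    return ''.join(out)

print(hill_decrypt("zcsffbeuassoxczw"))  # -> "archimedesgotsoe"
```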
*It makes little sense to have a full explanation of how it works when the data is invalid.
3. WoW !
Thanks a lot for the help Sir!
https://math.stackexchange.com/questions/1612723/how-to-calculate-int-0-infty-fracx-sqrtex-1-mathrmdx?rq=1 | # how to calculate $\int_{0}^{\infty}\frac{x}{\sqrt{e^x-1}}\mathrm{d}x$
I was trying to solve another integral when then I reached this, I've no idea of how to select the contour for the integration.
• For reference, the integral is $\pi\ln 4$. It also has a closed form antiderivative. – Ben Longo Jan 14 '16 at 23:28
• Yeah I saw that @RonGordon, but anyway it's pretty cool to see other methods too. – john Jan 15 '16 at 19:33
Sub $x=\log{(1+y^2)}$; then the integral is equal to
$$\int_{-\infty}^{\infty} dy \frac{\log{(1+y^2)}}{1+y^2}$$
I will illustrate how to use complex analysis to evaluate this integral. Consider the following contour integral:
$$\oint_C dz \frac{\log{(1+z^2)}}{1+z^2}$$
where $C$ is the following contour:
i.e., a semicircular contour of radius $R$ with a detour around the branch point at $z=i$ of radius $\epsilon$. The contour integral is equal to
$$\int_{-R}^R dx \frac{\log{(1+x^2)}}{1+x^2} + i R \int_0^{\pi/2} d\theta \, e^{i \theta} \frac{\log{(1+R^2 e^{i 2 \theta})}}{1+R^2 e^{i 2 \theta}} \\ + i \int_R^{1+\epsilon} dy \frac{\log{(y^2-1)}+i \pi}{1-y^2} + i \epsilon \int_{\pi/2}^{-3 \pi/2} d\phi \, e^{i \phi} \frac{\log{[1+(i+\epsilon e^{i \phi})^2]}}{1+(i+\epsilon e^{i \phi})^2} \\ + i \int_{1+\epsilon}^R dy \frac{\log{(y^2-1)}-i \pi}{1-y^2} + i R \int_{\pi/2}^{\pi} d\theta \, e^{i \theta} \frac{\log{(1+R^2 e^{i 2 \theta})}}{1+R^2 e^{i 2 \theta}}$$
Note that the third and fifth integrals are on opposite sides of the branch cut along the imaginary axis above $z=i$. Also note the limits on the fourth integral: the upper limit is less than the lower limit because the contour traverses clockwise locally about the branch point $z=i$.
We consider the limits as $R \to \infty$ and $\epsilon \to 0$. In these limits, the second and sixth integrals vanish. Rearranging things a bit, we get for the contour integral
$$\int_{-\infty}^{\infty} dx \frac{\log{(1+x^2)}}{1+x^2} - i (-i 2 \pi) \int_{1+\epsilon}^{\infty} \frac{dy}{y^2-1} + \frac12 \int_{\pi/2}^{-3 \pi/2} d\phi \, \left [\log{(i 2 \epsilon)} + i \phi \right ]$$
Note that, while there appears to be singular behavior as $\epsilon \to 0$, that singular behavior will cancel out as we will see.
By Cauchy's theorem, the contour integral is zero. Doing out the second and third integrals, we find that
$$\int_{-\infty}^{\infty} dx \frac{\log{(1+x^2)}}{1+x^2} - \pi \left [\log{\left (\frac{y-1}{y+1} \right )} \right ]_{1+\epsilon}^{\infty} - \pi \log{(i 2 \epsilon)} + i \frac14 (2 \pi^2) = 0$$
Simplifying, and taking $\log{i} = i \pi/2$, we get
$$\int_{-\infty}^{\infty} dx \frac{\log{(1+x^2)}}{1+x^2} + \pi \log{\epsilon} - \pi \log{2} - i \frac{\pi^2}{2} - \pi \log{2} - \pi \log{\epsilon} + i \frac{\pi^2}{2} = 0$$
Thus...
$$\int_0^{\infty} dx \frac{x}{\sqrt{e^x-1}} = 2 \pi \log{2}$$
• What is different between the third and the fifth integral (where do the extra terms $-i\pi$ and $+i\pi$ come from?), I know its got something to do with branch points but I don't get it! And how did you know to use this type of contour? @ron – john Jan 15 '16 at 5:12
• @john: The $\pm i \pi$ terms come from converting $\log{(1-y^2)}$ to $\log{(y^2-1)}$ - the $\pm i \pi$ is $\log{(-1)}$. Whether it is one or the other depends on which side of the branch cut we are on. I know to draw this sort of detour in a contour whenever there is a branch point that must be avoided. – Ron Gordon Jan 15 '16 at 6:56
• @john See my answer for a purely real analysis based technique. – Leg Jan 15 '16 at 15:42
Let $t^2= e^x-1$. We have
$$2tdt = e^xdx = (1+t^2)dx \implies dx = \dfrac{2tdt}{1+t^2}$$
Hence, we have
$$I = \int_0^{\infty} \dfrac{xdx}{\sqrt{e^x-1}} = \int_0^{\infty} \dfrac{2t \log(1+t^2)dt}{(1+t^2)t} = 2\int_0^{\infty} \dfrac{\log(1+t^2)}{(1+t^2)}dt$$
Let
$$I(a) = \int_0^{\infty} \dfrac{\log(1+a^2t^2)}{1+t^2}dt \,\,\, (\clubsuit)$$
We need $2I(1)$. Differentiating $(\clubsuit)$, we obtain
$$I'(a) = \int_0^{\infty} \dfrac{2at^2}{(1+a^2t^2)(1+t^2)}dt = \dfrac{2a}{a^2-1}\left(\int_0^{\infty} \dfrac{dt}{1+t^2} - \int_0^{\infty} \dfrac{dt}{1+a^2t^2} \right)$$
Hence,
$$I'(a) = \dfrac{2a}{a^2-1}\left(\dfrac{\pi}2 - \dfrac{\pi}{2a}\right) = \dfrac{\pi}{(1+a)} \,\,\, (\spadesuit)$$
Further, we have $I(0) = 0$. Hence, integrating $(\spadesuit)$, we obtain
$$I(a) = \pi \log(1+a)$$
The desired integral is $2I(1) = 2\pi \log(2)$.
• Very nice,when do you normally use this technique?@leg – john Jan 15 '16 at 15:44
• @john Generally, I try to prove a real integral using purely real analysis tools. It is hard to articulate when this tool can be used though. – Leg Jan 16 '16 at 3:26
Let $z=\mathrm{e}^{x}-1$, so that we have $$\int\limits_{0}^{\infty} \frac{\mathrm{ln}(z+1)}{\sqrt{z}}\frac{1}{z+1} \mathrm{d} z$$
Let us consider $$I(a) = \int\limits_{0}^{\infty} \frac{(z+1)^{a}}{\sqrt{z}} \mathrm{d} z = \mathrm{B}\left(\frac{1}{2}, -\frac{1}{2}-a\right) = \frac{\Gamma\left(\frac{1}{2}\right) \Gamma\left(-\frac{1}{2}-a\right)}{\Gamma(-a)}$$ so that $$\lim_{a \to -1} \frac{\partial I(a)}{\partial a} = \int\limits_{0}^{\infty} \frac{\mathrm{ln}(z+1)}{\sqrt{z}}\frac{1}{z+1} \mathrm{d} z = \int\limits_{0}^{\infty} \frac{x}{\sqrt{\mathrm{e}^{x}-1}} \mathrm{d} x$$
Then, $$\frac{\partial I(a)}{\partial a} = \Gamma\left(\frac{1}{2}\right)\left[\frac{-\Gamma(-a)\Gamma\left(-\frac{1}{2}-a\right)\psi^{0}\left(-\frac{1}{2}-a\right) + \Gamma\left(-\frac{1}{2}-a\right)\Gamma(-a)\psi^{0}(-a)}{\Gamma(-a)\Gamma(-a)} \right]$$
\begin{align} \lim_{a \to -1} \frac{\partial I(a)}{\partial a} & = -\frac{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{1}{2}\right)}{\Gamma(1)} \left[\psi^{0}\left(\frac{1}{2}\right) - \psi^{0}(1)\right] \\ & = -\pi[(-\gamma-\mathrm{ln}4) -(- \gamma)] \\ & = \pi\mathrm{ln}4 \\ & = \int\limits_{0}^{\infty} \frac{x}{\sqrt{\mathrm{e}^{x}-1}} \mathrm{d} x \end{align}
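As a quick numerical sanity check of the closed form $\pi \ln 4 = 2\pi \log 2 \approx 4.3552$, one can evaluate the integral directly (a sketch using SciPy):

```python
import numpy as np
from scipy.integrate import quad

value, abserr = quad(lambda x: x / np.sqrt(np.expm1(x)), 0, np.inf)
print(value, np.pi * np.log(4))   # both ~ 4.35517...
```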
http://conceptmap.cfapps.io/wikipage?lang=en&name=Great_Lakes | # Great Lakes
Satellite image of the Great Lakes, April 24, 2000
Terra MODIS image of the Great Lakes, January 27, 2005, showing ice beginning to build up around the shores of each of the lakes, with snow on the ground.
Location in North America
The Great Lakes (French: les Grands Lacs), also called the Laurentian Great Lakes[1] and the Great Lakes of North America, are a series of interconnected freshwater lakes primarily in the upper mid-east region of North America, on the Canada–United States border, which connect to the Atlantic Ocean through the Saint Lawrence River. They comprise Lakes Superior, Michigan, Huron, Erie, and Ontario. Hydrologically, there are only four lakes, because Lakes Michigan and Huron join at the Straits of Mackinac. The lakes form the Great Lakes Waterway.
The Great Lakes are the largest group of freshwater lakes on Earth by total area, and second-largest by total volume, containing 21% of the world's surface fresh water by volume.[2][3][4] The total surface is 94,250 square miles (244,106 km2), and the total volume (measured at the low water datum) is 5,439 cubic miles (22,671 km3),[5] slightly less than the volume of Lake Baikal (5,666 cu mi or 23,615 km3, 22–23% of the world's surface fresh water). Due to their sea-like characteristics (rolling waves, sustained winds, strong currents, great depths, and distant horizons) the five Great Lakes have also long been referred to as inland seas.[6] Lake Superior is the second-largest lake in the world by area, and the largest freshwater lake by area. Lake Michigan is the largest lake that is entirely within one country.[7][8][9][10]
The Great Lakes began to form at the end of the last glacial period around 14,000 years ago, as retreating ice sheets exposed the basins they had carved into the land which then filled with meltwater.[11] The lakes have been a major source for transportation, migration, trade, and fishing, serving as a habitat to many aquatic species in a region with much biodiversity.
The surrounding region is called the Great Lakes region, which includes the Great Lakes Megalopolis.[12]
## Geography
A map of the Great Lakes Basin showing the five sub-basins within. Left to right they are: Superior, including Nipigon's basin, (magenta); Michigan (blue); Huron (green); Erie (yellow); Ontario (orange).
Though the five lakes lie in separate basins, they form a single, naturally interconnected body of fresh water, within the Great Lakes Basin. They form a chain connecting the east-central interior of North America to the Atlantic Ocean. From the interior to the outlet at the Saint Lawrence River, water flows from Superior to Huron and Michigan, southward to Erie, and finally northward to Lake Ontario. The lakes drain a large watershed via many rivers, and are studded with approximately 35,000 islands.[13] There are also several thousand smaller lakes, often called "inland lakes", within the basin.[14] The surface area of the five primary lakes combined is roughly equal to the size of the United Kingdom, while the surface area of the entire basin (the lakes and the land they drain) is about the size of the UK and France combined.[15] Lake Michigan is the only one of the Great Lakes that is entirely within the United States; the others form a water boundary between the United States and Canada. The lakes are divided among the jurisdictions of the Canadian province of Ontario and the U.S. states of Michigan, Wisconsin, Minnesota, Illinois, Indiana, Ohio, Pennsylvania, and New York. Both the province of Ontario and the state of Michigan include in their boundaries portions of four of the lakes: The province of Ontario does not border Lake Michigan, and the state of Michigan does not border Lake Ontario. New York and Wisconsin's jurisdictions extend into two lakes, and each of the remaining states into one of the lakes.
### Bathymetry
Notes: The area of each rectangle is proportionate to the volume of each lake. All measurements at Low Water Datum. EPA[16]
| | Lake Erie | Lake Huron | Lake Michigan | Lake Ontario | Lake Superior |
| --- | --- | --- | --- | --- | --- |
| Surface area[5] | 9,910 sq mi (25,700 km2) | 23,000 sq mi (60,000 km2) | 22,300 sq mi (58,000 km2) | 7,340 sq mi (19,000 km2) | 31,700 sq mi (82,000 km2) |
| Water volume[5] | 116 cu mi (480 km3) | 850 cu mi (3,500 km3) | 1,180 cu mi (4,900 km3) | 393 cu mi (1,640 km3) | 2,900 cu mi (12,000 km3) |
| Elevation[16] | 571 ft (174 m) | 577 ft (176 m) | 577 ft (176 m) | 246 ft (75 m) | 600.0 ft (182.9 m) |
| Average depth[15] | 62 ft (19 m) | 195 ft (59 m) | 279 ft (85 m) | 283 ft (86 m) | 483 ft (147 m) |
| Maximum depth[17] | 210 ft (64 m) | 748 ft (228 m) | 925 ft (282 m) | 804 ft (245 m) | 1,333 ft (406 m) |
| Major settlements[18] | Buffalo, NY; Erie, PA; Cleveland, OH; Lorain, OH; Toledo, OH; Sandusky, OH | Alpena, MI; Bay City, MI; Owen Sound, ON; Port Huron, MI; Sarnia, ON | Chicago, IL; Gary, IN; Green Bay, WI; Sheboygan, WI; Milwaukee, WI; Kenosha, WI; Racine, WI; Muskegon, MI; Traverse City, MI | Hamilton, ON; Kingston, ON; Mississauga, ON; Oshawa, ON; Rochester, NY; Toronto, ON | Duluth, MN; Marquette, MI; Sault Ste. Marie, MI; Sault Ste. Marie, ON; Superior, WI; Thunder Bay, ON |
System profile of the Great Lakes.
As the surfaces of Lakes Superior, Huron, Michigan, and Erie are all approximately the same elevation above sea level, while Lake Ontario is significantly lower, and because the Niagara Escarpment precludes all natural navigation, the four upper lakes are commonly called the "upper great lakes". This designation is not universal. Those living on the shore of Lake Superior often refer to all the other lakes as "the lower lakes", because they are farther south. Sailors of bulk freighters transferring cargoes from Lake Superior and northern Lake Michigan and Lake Huron to ports on Lake Erie or Ontario commonly refer to the latter as the lower lakes and Lakes Michigan, Huron, and Superior as the upper lakes. This corresponds to thinking of Lakes Erie and Ontario as "down south" and the others as "up north". Vessels sailing north on Lake Michigan are considered "upbound" even though they are sailing toward its effluent current.[25]
### Primary connecting waterways
Chicago on Lake Michigan is in the western part of the lakes megalopolis, and the site of the waterway linking the lakes to the Mississippi River valley
Detroit on the Detroit River links the region's central metropolitan areas.
Toronto on Lake Ontario is in the eastern section of the Great Lakes Megalopolis
### Lake Michigan–Huron
Lakes Huron and Michigan are sometimes considered a single lake, called Lake Michigan–Huron, because they are one hydrological body of water connected by the Straits of Mackinac.[26] The straits are five miles (8 km) wide[15] and 120 feet (37 m) deep; the water levels – at 577 feet (176 m) – rise and fall together,[27] and the flow between Michigan and Huron frequently reverses direction.
### Islands
South Bass Island in Lake Erie
Dispersed throughout the Great Lakes are approximately 35,000 islands.[13] The largest among them is Manitoulin Island in Lake Huron, the largest island in any inland body of water in the world.[35] The second-largest island is Isle Royale in Lake Superior.[36] Both of these islands are large enough to contain multiple lakes themselves—for instance, Manitoulin Island's Lake Manitou is the world's largest lake on a freshwater island.[37] Some of these lakes even have their own islands, like Treasure Island in Lake Mindemoya on Manitoulin Island.
### Peninsulas
The Great Lakes also have several peninsulas between them, including the Door Peninsula, the Peninsulas of Michigan, and the Ontario Peninsula. Some of these peninsulas even contain smaller peninsulas, such as the Keweenaw Peninsula, the Thumb Peninsula, the Bruce Peninsula, and the Niagara Peninsula. Population centers on the peninsulas include Grand Rapids and Detroit in Michigan along with London, Hamilton, Brantford, and Toronto in Ontario.
### Shipping connection to the ocean
Although the Saint Lawrence Seaway and Great Lakes Waterway make the Great Lakes accessible to ocean-going vessels,[38] shifts in shipping to wider ocean-going container ships—which do not fit through the locks on these routes—have limited container shipping on the lakes. Most Great Lakes trade is of bulk material, and bulk freighters of Seawaymax-size or less can move throughout the entire lakes and out to the Atlantic.[39] Larger ships are confined to working in the lakes themselves. Only barges can access the Illinois Waterway system providing access to the Gulf of Mexico via the Mississippi River. Despite their vast size, large sections of the Great Lakes freeze over in winter, interrupting most shipping from January to March. Some icebreakers ply the lakes, keeping the shipping lanes open through other periods of ice on the lakes.
The Great Lakes are also connected by the Chicago Sanitary and Ship Canal to the Gulf of Mexico by way of the Illinois River (from the Chicago River) and the Mississippi River. An alternate track is via the Illinois River (from Chicago), to the Mississippi, up the Ohio, and then through the Tennessee–Tombigbee Waterway (a combination of a series of rivers and lakes and canals), to Mobile Bay and the Gulf of Mexico. Commercial tug-and-barge traffic on these waterways is heavy.[40]
Pleasure boats can also enter or exit the Great Lakes by way of the Erie Canal and Hudson River in New York. The Erie Canal connects to the Great Lakes at the east end of Lake Erie (at Buffalo, New York) and at the south side of Lake Ontario (at Oswego, New York).
### Water levels
In 2009, the lakes contained 84% of the surface freshwater of North America;[41] if the water were evenly distributed over the entire continent's land area, it would reach a depth of 5 feet (1.5 meters).[15] The source of water levels in the lakes is tied to what was left by melting glaciers when the lakes took their present form. Annually, only about 1% is "new" water originating from rivers, precipitation, and groundwater springs that drain into the lakes. Historically, evaporation has been balanced by drainage, making the level of the lakes constant.[15]
Intensive human population growth only began in the region in the 20th century and continues today.[15] At least two human water use activities have been identified as having the potential to affect the lakes' levels: diversion (the transfer of water to other watersheds) and consumption (substantially done today by the use of lake water to power and cool electric generation plants, resulting in evaporation).[42]
The physical impacts of climate change can be seen in water levels in the Great Lakes over the past century.[43] The United Nations' Intergovernmental Panel on Climate Change predicted in 1997: "the following lake level declines could occur: Lake Superior −0.2 to −0.5 m, Lakes Michigan and Huron −1.0 to −2.5 m, and Lake Erie −0.9 to −1.9 m."[44] In 2009, it was predicted that global warming will decrease water levels.[45] In 2013, record low water levels in the Great Lakes were attributed to climate change.[46]
Water levels of Lakes Michigan and Huron in the United States, 1918 to 2019.
The water level of Lake Michigan–Huron had remained fairly constant over the 20th century,[47] but has nevertheless dropped more than 6 feet from the record high in 1986 to the low of 2013.[48] In 2012, National Geographic tied the water level drop to warming climate change.,[49] as did the Natural Resources Defense Council.[50] One newspaper reported that the long-term average level has gone down about 20 inches because of dredging and subsequent erosion in the St. Clair River. Lake Michigan–Huron hit all-time record low levels in 2013; according to the US Army Corps of Engineers, the previous record low had been set in 1964.[48] By April 2015 the water level had recovered to 7 inches (17.5 cm) more than the "long term monthly average".[51]
## Name origins
The Great Lakes during early spring
Lake Erie
From the Erie tribe, a shortened form of the Iroquoian word erielhonan "long tail".[52]
Lake Huron
Named for the inhabitants of the area, the Wyandot (or "Hurons"), by the first French explorers.[53] The Wyandot originally referred to the lake by the name karegnondi, a word which has been variously translated as "Freshwater Sea", "Lake of the Hurons", or simply "lake".[54][55]
Lake Michigan
From the Ojibwa word mishi-gami "great water" or "large lake".[56]
Lake Ontario
From the Wyandot (Huron) word ontarí'io "lake of shining waters".[57]
Lake Superior
English translation of the French term lac supérieur ("upper lake"), referring to its position north of Lake Huron. The indigenous Ojibwe call it gichi-gami (from Ojibwe gichi "big, large, great"; gami "water, lake, sea"). The name was popularized in French-influenced transliteration as Gitchigumi, as in Gordon Lightfoot's 1976 story song "The Wreck of the Edmund Fitzgerald", or Gitchee Gumee, as in Henry Wadsworth Longfellow's 1855 epic poem The Song of Hiawatha.[17]
## Statistics
The Great Lakes contain 21% of the world's surface fresh water: 5,472 cubic miles (22,810 km³), or 6.0×10¹⁵ U.S. gallons (6 quadrillion U.S. gallons; 2.3×10¹⁶ liters). This is enough water to cover the 48 contiguous U.S. states to a uniform depth of 9.5 feet (2.9 m). Although the lakes contain a large percentage of the world's fresh water, the Great Lakes supply only a small portion of U.S. drinking water on a national basis.[58]
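As a quick consistency check on the depth figure (the land area used here, about 3.12 million square miles or 8.08 million km² for the 48 contiguous states, is an assumed value not given in the article), dividing volume by area gives:

$d = \dfrac{V}{A} = \dfrac{2.281\times 10^{4}\ \text{km}^3}{8.08\times 10^{6}\ \text{km}^2} \approx 2.8\times 10^{-3}\ \text{km} \approx 2.8\ \text{m} \approx 9.3\ \text{ft}$

which agrees with the quoted 9.5 feet (2.9 m) to within rounding.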
The total surface area of the lakes is approximately 94,250 square miles (244,100 km²)—nearly the same size as the United Kingdom, and larger than the U.S. states of New York, New Jersey, Connecticut, Rhode Island, Massachusetts, Vermont, and New Hampshire combined.[59]
The Great Lakes coast measures approximately 10,500 miles (16,900 km),[15] but the length of a coastline is impossible to measure exactly and is not a well-defined measure (see Coastline paradox). Of the total 10,500 miles (16,900 km) of shoreline, Canada borders approximately 5,200 miles (8,400 km), while the remaining 5,300 miles (8,500 km) are bordered by the United States. Michigan has the longest shoreline in the United States, bordering roughly 3,288 miles (5,292 km) of shoreline, followed by Wisconsin (820 miles (1,320 km)), New York (473 miles (761 km)), and Ohio (312 miles (502 km)).[60] Traversing the shoreline of all the lakes would cover a distance roughly equivalent to travelling half-way around the world at the equator.[15]
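For readers following the coastline paradox link, the underlying point can be stated compactly: the measured length depends on the ruler used. Richardson's empirical relation (a general result, not a measurement specific to the Great Lakes) is

$L(\varepsilon) \propto \varepsilon^{\,1-D}$

where $\varepsilon$ is the length of the measuring ruler and $D > 1$ is the fractal dimension of the coast; as $\varepsilon \to 0$, the measured length $L$ grows without bound, which is why figures such as the 10,500 miles above are only meaningful at a stated measurement scale.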
## Geology
A diagram of the formation of the Great Lakes
Map of Glacial Lake Algonquin and its Correlatives (USGS 1915)
It has been estimated that the foundational geology that created the conditions shaping the present-day upper Great Lakes was laid from 1.1 to 1.2 billion years ago,[15][61] when two previously fused tectonic plates split apart and created the Midcontinent Rift, which crossed the Great Lakes Tectonic Zone. A valley was formed, providing a basin that eventually became modern-day Lake Superior. When a second fault line, the Saint Lawrence rift, formed approximately 570 million years ago,[15] the basins for Lakes Ontario and Erie were created, along with what would become the Saint Lawrence River.
The Great Lakes are estimated to have been formed at the end of the last glacial period (the Wisconsin glaciation ended 10,000 to 12,000 years ago), when the Laurentide Ice Sheet receded.[11] The retreat of the ice sheet left behind a large amount of meltwater (see Lake Algonquin, Lake Chicago, Glacial Lake Iroquois, and Champlain Sea) that filled up the basins that the glaciers had carved, thus creating the Great Lakes as we know them today.[62] Because of the uneven nature of glacier erosion, some higher hills became Great Lakes islands. The Niagara Escarpment follows the contour of the Great Lakes between New York and Wisconsin. Land below the glaciers "rebounded" as it was uncovered.[63] Since the glaciers covered some areas longer than others, this glacial rebound occurred at different rates.
A notable modern phenomenon is the formation of ice volcanoes over the lakes during wintertime. Storm-generated waves carve the lakes' ice sheet and create conical mounds through the eruption of water and slush. The process is only well-documented in the Great Lakes, and has been credited with sparing the southern shorelines from worse rocky erosion.[64]
## Climate
The Great Lakes have a humid continental climate, Köppen climate classification Dfa (in southern areas) and Dfb (in northern parts)[65] with varying influences from air masses from other regions including dry, cold Arctic systems, mild Pacific air masses from the West, and warm, wet tropical systems from the south and the Gulf of Mexico.[66] The lakes themselves also have a moderating effect on the climate; they can also increase precipitation totals and produce lake effect snowfall.[65]
### Lake effect
The location of common lake effect bands on the Great Lakes
The Great Lakes can have an effect on regional weather called lake-effect snow, which is sometimes very localized. Even late in winter, the lakes often have no icepack in the middle. The prevailing winds from the west pick up the air and moisture from the lake surface, which is slightly warmer in relation to the cold surface winds above. As the slightly warmer, moist air passes over the colder land surface, the moisture often produces concentrated, heavy snowfall that sets up in bands or "streamers". This is similar to the effect of warmer air dropping snow as it passes over mountain ranges. During freezing weather with high winds, the "snow belts" receive regular snow fall from this localized weather pattern, especially along the eastern shores of the lakes. Snow belts are found in Wisconsin, Michigan, Ohio, Pennsylvania, and New York, United States; and Ontario, Canada.
The lakes also moderate seasonal temperatures to some degree, but not with as large an influence as do large oceans; they absorb heat and cool the air in summer, then slowly radiate that heat in autumn. They protect against frost during transitional weather, and keep the summertime temperatures cooler than further inland. This effect can be very localized and overridden by offshore wind patterns. This temperature buffering produces areas known as "Fruit Belts", where fruit can be produced that is typically grown much farther south. For instance, Western Michigan has apple and cherry orchards, and vineyards cultivated adjacent to the lake shore as far north as the Grand Traverse Bay and Nottawasaga Bay in central Ontario. The eastern shore of Lake Michigan and the southern shore of Lake Erie have many successful wineries because of the moderating effect, as does the Niagara Peninsula between Lake Erie and Lake Ontario. A similar phenomenon allows wineries to flourish in the Finger Lakes region of New York, as well as in Prince Edward County, Ontario on Lake Ontario's northeast shore. Related to the lake effect is the regular occurrence of fog over medium-sized areas, particularly along the shorelines of the lakes. This is most noticeable along Lake Superior's shores.
The Great Lakes have been observed to help intensify storms, such as Hurricane Hazel in 1954, and the 2011 Goderich, Ontario tornado, which moved onshore as a tornadic waterspout. In 1996, a rare tropical or subtropical storm was observed forming in Lake Huron, dubbed the 1996 Lake Huron cyclone. Large severe thunderstorms covering wide areas are well known in the Great Lakes region during mid-summer; these mesoscale convective complexes (MCCs)[67] can cause damage to wide swaths of forest and shatter glass in city buildings. These storms mainly occur during the night, and the systems sometimes have small embedded tornadoes, but more often straight-line winds accompanied by intense lightning.
## Ecology
Generalized schematic of Great Lakes waterline ecosystem
Historically, the Great Lakes, in addition to their lake ecology, were surrounded by various forest ecoregions (except in a relatively small area of southeast Lake Michigan where savanna or prairie occasionally intruded). Logging, urbanization, and agricultural uses have changed that relationship. In the early 21st century, Lake Superior's shores are 91% forested, Lake Huron 68%, Lake Ontario 49%, Lake Michigan 41%, and Lake Erie, where logging and urbanization has been most extensive, 21%. Some of these forests are second or third growth (i.e. they have been logged before, changing their composition). At least 13 wildlife species are documented as becoming extinct since the arrival of Europeans, and many more are threatened or endangered.[15] Meanwhile, exotic and invasive species have also been introduced.
### Fauna
Lake sturgeon, the largest native fish in the Great Lakes and the subject of extensive commercial fishing in the 19th and 20th centuries, is listed as a threatened species[68]
While the organisms living on the bottom of shallow waters are similar to those found in smaller lakes, the deep waters contain organisms found only in deep, cold lakes of the northern latitudes. These include the delicate opossum shrimp (order Mysida), the deepwater scud (a crustacean of the order Amphipoda), two types of copepods, and the deepwater sculpin (a spiny, large-headed fish).[69]
The Great Lakes are an important source of fishing. Early European settlers were astounded by both the variety and quantity of fish; there were 150 different species in the Great Lakes.[15] Throughout history, fish populations were the early indicator of the condition of the Lakes and have remained one of the key indicators even in the current era of sophisticated analyses and measuring instruments. According to the bi-national (U.S. and Canadian) resource book, The Great Lakes: An Environmental Atlas and Resource Book: "The largest Great Lakes fish harvests were recorded in 1889 and 1899 at some 67,000 tonnes (66,000 long tons; 74,000 short tons) [147 million pounds]."[70]
By 1801, the New York Legislature found it necessary to pass regulations curtailing obstructions to the natural migrations of Atlantic salmon from Lake Erie into their spawning channels. In the early 19th century, the government of Upper Canada found it necessary to introduce similar legislation prohibiting the use of weirs and nets at the mouths of Lake Ontario's tributaries. Other protective legislation was passed, as well, but enforcement remained difficult.[71]
On both sides of the Canada–United States border, dams and impoundments have multiplied, necessitating more regulatory efforts. Concerns by the mid-19th century included obstructions in the rivers which prevented salmon and lake sturgeon from reaching their spawning grounds. The Wisconsin Fisheries Commission noted a reduction of roughly 25% in general fish harvests by 1875. The states have since removed dams from rivers where necessary.[72]
Overfishing has been cited as a possible reason for a decrease in population of various whitefish, important because of their culinary desirability and, hence, economic consequence. Moreover, between 1879 and 1899, reported whitefish harvests declined from some 24.3 million pounds (11 million kg) to just over 9 million pounds (4 million kg).[73] By 1900, commercial fishermen on Lake Michigan were hauling in an average of 41 million pounds of fish annually.[74] By 1938, Wisconsin's commercial fishing operations were motorized and mechanized, generating jobs for more than 2,000 workers, and hauling 14 million pounds per year.[74] The population of giant freshwater mussels was eliminated as the mussels were harvested for use as buttons by early Great Lakes entrepreneurs.[73] Since 2000, the invasive quagga mussel has smothered the bottom of Lake Michigan almost from shore to shore, and their numbers are estimated at 900 trillion.[74]
The influx of parasitic lamprey populations after the development of the Erie Canal and the much later Welland Canal led the federal governments of the US and Canada to work on joint proposals to control the lamprey. By the mid-1950s, the lake trout populations of Lakes Michigan and Huron were greatly reduced, with the lamprey deemed largely to blame. This led to the launch of the bi-national Great Lakes Fishery Commission.
Cliffs at Palisade Head on Lake Superior in Minnesota near Silver Bay.
The Great Lakes: An Environmental Atlas and Resource Book (1972) noted: "Only pockets remain of the once large commercial fishery."[70] But, water quality improvements realized during the 1970s and 1980s, combined with successful salmonid stocking programs, have enabled the growth of a large recreational fishery.[75] The last commercial fisherman left Milwaukee in 2011 because of overfishing and anthropogenic changes to the biosphere.[74]
Since the 19th century, an estimated 160 new species have found their way into the Great Lakes ecosystem, and many have become invasive; introductions via overseas ships' ballast water and hull fouling are causing severe economic and ecological impacts.[76][77] According to the Inland Seas Education Association, on average a new species enters the Great Lakes every eight months.[77]
A zebra mussel–encrusted vector-averaging current meter from Lake Michigan.
Introductions into the Great Lakes include the zebra mussel, which was first discovered in 1988, and the quagga mussel in 1989. The mollusks are efficient filter feeders that compete with native mussels and reduce available food and spawning grounds for fish. In addition, the mussels may be a nuisance to industries by clogging pipes. The U.S. Fish and Wildlife Service estimates that the economic impact of the zebra mussel could be about $5 billion over the next decade.[78]
The alewife first entered the system west of Lake Ontario via 19th-century canals. By the 1960s, the small silver fish had become a familiar nuisance to beach goers across Lakes Michigan, Huron, and Erie. Periodic mass dieoffs result in vast numbers of the fish washing up on shore; estimates by various governments have placed the percentage of Lake Michigan's biomass, which was made up of alewives in the early 1960s, as high as 90%. In the late 1960s, the various state and federal governments began stocking several species of salmonids, including the native lake trout as well as non-native chinook and coho salmon; by the 1980s, alewife populations had dropped drastically.[79] The ruffe, a small percid fish from Eurasia, became the most abundant fish species in Lake Superior's Saint Louis River within five years of its detection in 1986. Its range, which has expanded to Lake Huron, poses a significant threat to the lower lake fishery.[80] Five years after first being observed in the St. Clair River, the round goby can now be found in all of the Great Lakes. The goby is considered undesirable for several reasons: it preys upon bottom-feeding fish, overruns optimal habitat, spawns multiple times a season, and can survive poor water quality conditions.[81]
Several species of exotic water fleas have accidentally been introduced into the Great Lakes, such as the spiny waterflea, Bythotrephes longimanus, and the fishhook waterflea, Cercopagis pengoi, potentially having an effect on the zooplankton population. Several species of crayfish have also been introduced that may contend with native crayfish populations. More recently an electric fence has been set up across the Chicago Sanitary and Ship Canal in order to keep several species of invasive Asian carp out of the area. These fast-growing planktivorous fish have heavily colonized the Mississippi and Illinois river systems.[82] The sea lamprey, which has been particularly damaging to the native lake trout population, is another example of a marine invasive species in the Great Lakes.[83] Invasive species, particularly zebra and quagga mussels, may be at least partially responsible for the collapse of the deepwater demersal fish community in Lake Huron,[84] as well as drastic unprecedented changes in the zooplankton community of the lake.[85]
### Microbiology
Scientists understand that the micro-aquatic life of the lakes is abundant, but know very little about some of the most plentiful microbes and their environmental effects in the Great Lakes. Although a drop of lake water may contain 1 million bacterial cells and 10 million viruses, only since 2012 has there been a long-term study of the lakes' micro-organisms. Between 2012 and 2019, more than 160 new species were discovered.[86]
### Flora
Native habitats and ecoregions in the Great Lakes region include various forest ecoregions, as well as remnant areas of savanna and prairie.
### Logging
Logging of the extensive forests in the Great Lakes region removed riparian and adjacent tree cover over rivers and streams, which provide shade, moderating water temperatures in fish spawning grounds. Removal of trees also destabilized the soil, with greater volumes washed into stream beds causing siltation of gravel beds, and more frequent flooding.
Running cut logs down the tributary rivers into the Great Lakes also dislocated sediments. In 1884, the New York Fish Commission determined that the dumping of sawmill waste (chips and sawdust) had impacted fish populations.[87]
### Pollution
The first U.S. Clean Water Act, passed by a Congressional override after being vetoed by US President Richard Nixon in 1972, was a key piece of legislation,[88] along with the bi-national Great Lakes Water Quality Agreement signed by Canada and the U.S. A variety of steps taken to process industrial and municipal pollution discharges into the system greatly improved water quality by the 1980s, and Lake Erie in particular is significantly cleaner.[89] Discharge of toxic substances has been sharply reduced. Federal and state regulations control substances like PCBs. The first of 43 "Great Lakes Areas of Concern" to be formally "de-listed" due to successful cleanup was Ontario's Collingwood Harbour in 1994; Ontario's Severn Sound followed in 2003.[90] Presque Isle Bay in Pennsylvania is formally listed as in recovery, as is Ontario's Spanish Harbour. Dozens of other Areas of Concern have received partial cleanups such as the Rouge River (Michigan) and Waukegan Harbor (Illinois).[91]
Phosphate detergents were historically a major source of nutrients feeding algae blooms in the Great Lakes, particularly in the warmer and shallower portions of the system such as Lake Erie, Saginaw Bay, Green Bay, and the southernmost portion of Lake Michigan. By the mid-1980s, most jurisdictions bordering the Great Lakes had controlled phosphate detergents,[92] resulting in sharp reductions in the frequency and extent of the blooms.
Blue-green algae (cyanobacteria) blooms[93] have been problematic on Lake Erie since 2011.[94] "Not enough is being done to stop fertilizer and phosphorus from getting into the lake and causing blooms," said Michael McKay, executive director of the Great Lakes Institute for Environmental Research (GLIER) at the University of Windsor. The largest Lake Erie bloom to date occurred in 2015, reaching 10.5 on the severity index, compared with 10 in 2011.[95] In early August 2019, satellite images depicted a bloom stretching up to 1,300 square kilometres on Lake Erie, with the epicentre near Toledo, Ohio. "A large bloom does not necessarily mean the cyanobacteria ... will produce toxins," said McKay. Water quality testing was underway in August 2019.[96][95]
#### Mercury
Until 1970, mercury was not listed as a harmful chemical, according to the United States Federal Water Quality Administration. In the years since, mercury has become more apparent in water tests. Mercury compounds have been used in paper mills to prevent slime from forming during production, and chemical companies have used mercury to separate chlorine from brine solutions. Studies conducted by the Environmental Protection Agency have shown that when mercury comes in contact with many of the bacteria and compounds in fresh water, it forms methylmercury, a compound with a much greater impact on human health than elemental mercury due to its higher propensity for absorption. This form of mercury is not detrimental to a majority of fish types, but is very detrimental to people and other wildlife that consume the fish. Mercury is known to cause health problems such as birth defects in humans and animals, and contributed to the near extinction of eagles in the Great Lakes region.[97]
#### Sewage
The amount of raw sewage dumped into the waters was the primary focus of both the first Great Lakes Water Quality Agreement and federal laws passed in both countries during the 1970s. Implementation of secondary treatment of municipal sewage by major cities greatly reduced the routine discharge of untreated sewage during the 1970s and 1980s.[98] The International Joint Commission in 2009 summarized the change: "Since the early 1970s, the level of treatment to reduce pollution from waste water discharges to the Great Lakes has improved considerably. This is a result of significant expenditures to date on both infrastructure and technology, and robust regulatory systems that have proven to be, on the whole, quite effective."[99] The commission reported that all urban sewage treatment systems on the U.S. side of the lakes had implemented secondary treatment, as had all on the Canadian side except for five small systems.
Though contrary to federal laws in both countries, those treatment system upgrades have not yet eliminated combined sewer overflow (CSO) events. These occur when older sewerage systems, which combine storm water with sewage into single sewers heading to the treatment plant, are temporarily overwhelmed by heavy rainstorms. Local sewage treatment authorities then must release untreated effluent, a mix of rainwater and sewage, into local water bodies. While enormous public investments such as the Deep Tunnel projects in Chicago and Milwaukee have greatly reduced the frequency and volume of these events, they have not been eliminated. The number of such overflow events in Ontario, for example, is flat, according to the International Joint Commission.[99] Reports about this issue on the U.S. side highlight five large municipal systems (those of Detroit, Cleveland, Buffalo, Milwaukee and Gary) as being the largest current periodic sources of untreated discharges into the Great Lakes.[100]
Diatoms of different sizes seen through the microscope. These minuscule phytoplankton are encased within a silicate cell wall.
### Impacts of climate change on algae
Algae such as diatoms, along with other phytoplankton, are photosynthetic primary producers supporting the food web of the Great Lakes,[101] and have been affected by global warming.[102] Changes in the size or function of the primary producers may have a direct or an indirect impact on the food web. Photosynthesis carried out by diatoms comprises about one fifth of total photosynthesis. By taking CO₂ out of the water to photosynthesize, diatoms help to stabilize the pH of the water; otherwise, the CO₂ would react with water, making it more acidic:

$\mathrm{CO_2 + H_2O \rightleftharpoons HCO_3^- + H^+}$

Diatoms acquire inorganic carbon through passive diffusion of CO₂ and HCO₃⁻, and they also use carbonic-anhydrase-mediated active transport to speed up this process.[103] Large diatoms require more carbon uptake than smaller diatoms.[104] There is a positive correlation between the surface area and the chlorophyll concentration of diatom cells.[105]
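To make the buffering role concrete, a standard way to express this equilibrium (a textbook relation, not taken from the cited studies; the pK value below is an assumed typical value for the first dissociation of carbonic acid in fresh water) is the Henderson–Hasselbalch form:

$\mathrm{pH} = \mathrm{p}K_a + \log_{10}\dfrac{[\mathrm{HCO_3^-}]}{[\mathrm{CO_2}]}, \qquad \mathrm{p}K_a \approx 6.3$

When diatoms draw CO₂ out of the water, the ratio on the right grows and the pH rises; when CO₂ accumulates, the pH falls. This is the stabilizing mechanism described above.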
## History
A woodcut of Le Griffon
Several Native American populations (Paleo-Indians) inhabited the region around 10,000 BC, after the end of the Wisconsin glaciation.[106][107] The peoples of the Great Lakes traded with the Hopewell culture from around 1000 AD, as copper nuggets extracted from the region have been found fashioned into ornaments and weapons in the mounds of Southern Ohio. The brigantine Le Griffon, commissioned by René-Robert Cavelier, Sieur de La Salle, was built at Cayuga Creek, near the southern end of the Niagara River, and became the first known sailing ship to travel the upper Great Lakes on August 7, 1679.[108]
The Rush–Bagot Treaty signed in 1818, after the War of 1812 and the later Treaty of Washington eventually led to a complete disarmament of naval vessels in the Great Lakes. Nonetheless, both nations maintain coast guard vessels in the Great Lakes.
During settlement, the Great Lakes and their rivers were the only practical means of moving people and freight. Barges from middle North America were able to reach the Atlantic Ocean from the Great Lakes when the Welland Canal opened in 1824, followed by the Erie Canal in 1825.[109] By 1848, with the opening of the Illinois and Michigan Canal at Chicago, direct access to the Mississippi River was possible from the lakes.[110] With these two canals, an all-inland water route was provided between New York City and New Orleans.
The main business of many of the passenger lines in the 19th century was transporting immigrants. Many of the larger cities owe their existence to their position on the lakes as a freight destination as well as for being a magnet for immigrants. After railroads and surface roads developed, the freight and passenger businesses dwindled and, except for ferries and a few foreign cruise ships, have now vanished. The immigration routes still have an effect today. Immigrants often formed their own communities and some areas have a pronounced ethnicity, such as Dutch, German, Polish, Finnish, and many others. Since many immigrants settled for a time in New England before moving westward, many areas on the U.S. side of the Great Lakes also have a New England feel, especially in home styles and accent.
The Eastland leaving Chicago, c. 1909
Since general freight these days is transported by railroads and trucks, domestic ships mostly move bulk cargoes, such as iron ore, coal and limestone for the steel industry. The domestic bulk freight developed because of the nearby mines. It was more economical to transport the ingredients for steel to centralized plants rather than try to make steel on the spot. Grain exports are also a major cargo on the lakes.
In the 19th and early 20th centuries, iron and other ores such as copper were shipped south (on downbound ships), and supplies, food, and coal were shipped north (upbound). Because of the location of the coal fields in Pennsylvania and West Virginia, and the general northeast track of the Appalachian Mountains, railroads naturally developed shipping routes that went due north to ports such as Erie, Pennsylvania, and Ashtabula, Ohio.
Because the lake maritime community largely developed independently, it has some distinctive vocabulary. Ships, no matter the size, are called boats. When the sailing ships gave way to steamships, they were called steamboats—the same term used on the Mississippi. The ships also have a distinctive design (see Lake freighter). Ships that primarily trade on the lakes are known as lakers. Foreign boats are known as salties. One of the more common sights on the lakes since about 1950 has been the 1,000‑by‑105-foot (305-by-32-meter), 78,850-long-ton (80,120-metric-ton) self-unloader. This is a laker with a conveyor belt system that can unload itself by swinging a crane over the side.[111] Today, the Great Lakes fleet is much smaller in numbers than it once was because of the increased use of overland freight, and a few larger ships replacing many small ones.
During World War II, the risk of submarine attacks against coastal training facilities motivated the United States Navy to operate two aircraft carriers on the Great Lakes, USS Sable (IX-81) and USS Wolverine (IX-64). Both served as training ships to qualify naval aviators in carrier landing and takeoff.[112] Lake Champlain briefly became the sixth Great Lake of the United States on March 6, 1998, when President Clinton signed Senate Bill 927. This bill, which reauthorized the National Sea Grant Program, contained a line declaring Lake Champlain to be a Great Lake. Not coincidentally, this status allows neighboring states to apply for additional federal research and education funds allocated to these national resources.[113] Following a small uproar, the Senate voted to revoke the designation on March 24 (although New York and Vermont universities would continue to receive funds to monitor and study the lake).[114]
In the early years of the 21st century, water levels in the Great Lakes were a concern.[115] Researchers at the Mowat Centre said that low levels could cost $19 billion by 2050.[116]
## Economy
Photograph of Lakes Ontario, Erie and Huron plus the Finger Lakes of upstate New York, June 14, 2012, taken aboard the International Space Station, with lake names added
### Fishing
Alan B. McCullough has written that the fishing industry of the Great Lakes got its start "on the American side of Lake Ontario in Chaumont Bay, near the Maumee River on Lake Erie, and on the Detroit River at about the time of the War of 1812." Although the region was sparsely populated until the 1830s, so there was not much local demand and transporting fish was still prohibitively costly, economic and infrastructure developments promising for the future of the fishing industry were underway going into that decade, particularly the 1825 opening of the Erie Canal and the opening of the Welland Canal a few years later. The fishing industry expanded particularly in the waters associated with the fur trade that connect Lake Erie and Lake Huron. In fact, two major suppliers of fish in the 1830s were the fur trading companies Hudson's Bay Company and the American Fur Company.[117]
The catch from these waters would be sent to the growing market for salted fish in Detroit, where merchants involved in the fur trade had already gained some experience handling salted fish. One such merchant was John P. Clark, a shipbuilder and merchant who began selling fish in the area of Manitowoc, Wisconsin where whitefish was abundant. Another operation cropped up in Georgian Bay, Canadian waters plentiful with trout as well as whitefish. In 1831, Alexander MacGregor from Goderich, Ontario found whitefish and herring in unusually abundant supply around the Fishing Islands. A contemporary account by Methodist missionary John Evans describes the fish as resembling a "bright cloud moving rapidly through the water".[117]
### Shipping
Except when the water is frozen during winter, more than 100 lake freighters operate continuously on the Great Lakes,[118] which remain a major water transport corridor for bulk goods. The Great Lakes Waterway connects all the lakes; the smaller Saint Lawrence Seaway connects the lakes to the Atlantic Ocean. Some lake freighters are too large to use the Seaway and operate only on the Waterway and lakes.
In 2002, 162 million net tons of dry bulk cargo were moved on the Lakes. This was, in order of volume: iron ore, grain and potash.[119] The iron ore and much of the stone and coal are used in the steel industry. There is also some shipping of liquid and containerized cargo but most container ships cannot pass the locks on the Saint Lawrence Seaway because the ships are too wide.
Other than on Lake Ontario, only four bridges span the Great Lakes, because of the cost of building structures high enough for ships to pass under. The Blue Water Bridge is, for example, more than 150 feet high and more than a mile long.[118]
Major ports on the Great Lakes include Duluth-Superior, Chicago, Detroit, Cleveland, Twin Harbors, Hamilton and Thunder Bay.
### Drinking water and compact
The Great Lakes are used to supply drinking water to tens of millions of people in bordering areas. This valuable resource is collectively administered by the state and provincial governments adjacent to the lakes, who have agreed to the Great Lakes Compact to regulate water supply and use.
### Recreation
Escanaba's Ludington Park in Michigan
Tourism and recreation are major industries on the Great Lakes.[120] A few small cruise ships operate on the Great Lakes, including a couple of sailing ships. Sport fishing, commercial fishing, and Native American fishing represent a US$4 billion-a-year industry, with salmon, whitefish, smelt, lake trout, bass and walleye being major catches. Many other water sports are practiced on the lakes, such as yachting, sea kayaking, diving, kitesurfing, powerboating, and lake surfing.
The Great Lakes Circle Tour is a designated scenic road system connecting all of the Great Lakes and the Saint Lawrence River.[121]
### Great Lakes passenger steamers
From 1844 through 1857, palace steamers carried passengers and cargo around the Great Lakes.[122] In the first half of the 20th century, large luxurious passenger steamers sailed the lakes in opulence.[123] The Detroit and Cleveland Navigation Company had several vessels at the time and hired workers from all walks of life to help operate these vessels.[124] Several ferries currently operate on the Great Lakes to carry passengers to various islands, including Isle Royale, Drummond Island, Pelee Island, Mackinac Island, Beaver Island, Bois Blanc Island (Ontario), Bois Blanc Island (Michigan), Kelleys Island, South Bass Island, North Manitou Island, South Manitou Island, Harsens Island, Manitoulin Island, and the Toronto Islands. As of 2007, four car ferry services cross the Great Lakes: two on Lake Michigan (a steamer from Ludington, Michigan, to Manitowoc, Wisconsin, and a high-speed catamaran from Milwaukee to Muskegon, Michigan); one on Lake Erie (a boat from Kingsville, Ontario, or Leamington, Ontario, to Pelee Island, Ontario, then on to Sandusky, Ohio); and one on Lake Huron (the MS Chi-Cheemaun,[125] which runs between Tobermory and South Baymouth, Manitoulin Island, operated by the Owen Sound Transportation Company). An international ferry across Lake Ontario from Rochester, New York, to Toronto ran during 2004 and 2005, but is no longer in operation.
### Shipwrecks
The large size of the Great Lakes increases the risk of water travel; storms and reefs are common threats. The lakes are prone to sudden and severe storms, in particular in the autumn, from late October until early December. Hundreds of ships have met their end on the lakes. The greatest concentration of shipwrecks lies near Thunder Bay (Michigan), beneath Lake Huron, near the point where eastbound and westbound shipping lanes converge.
The Lake Superior shipwreck coast from Grand Marais, Michigan, to Whitefish Point became known as the "Graveyard of the Great Lakes". More vessels have been lost in the Whitefish Point area than any other part of Lake Superior.[126] The Whitefish Point Underwater Preserve serves as an underwater museum to protect the many shipwrecks in this area.
The first ship to sink in Lake Michigan was Le Griffon, also the first ship to sail the Great Lakes. Caught in a 1679 storm while trading furs between Green Bay and Michilimackinac, she was lost with all hands aboard.[127] Her wreck may have been found in 2004,[128] but a wreck subsequently discovered in a different location was also claimed in 2014 to be Le Griffon.[129]
The largest and last major freighter wrecked on the lakes was the SS Edmund Fitzgerald, which sank on November 10, 1975, just over 17 miles (30 km) offshore from Whitefish Point on Lake Superior. The largest loss of life in a shipwreck out on the lakes may have been that of Lady Elgin, wrecked in 1860 with the loss of around 400 lives on Lake Michigan. In an incident at a Chicago dock in 1915, the SS Eastland rolled over while loading passengers, killing 841.
In August 2007, the Great Lakes Shipwreck Historical Society announced that it had found the wreckage of Cyprus, a 420-foot (130 m) long, century-old ore carrier. Cyprus sank during a Lake Superior storm on October 11, 1907, during its second voyage while hauling iron ore from Superior, Wisconsin, to Buffalo, New York. The entire crew of 23 drowned, except one, Charles Pitz, who floated on a life raft for almost seven hours.[130]
In June 2008, deep sea divers in Lake Ontario found the wreck of the 1780 Royal Navy warship HMS Ontario in what has been described as an "archaeological miracle".[131] There are no plans to raise her as the site is being treated as a war grave.
In June 2010, L.R. Doty was found in Lake Michigan by an exploration diving team led by dive boat Captain Jitka Hanakova from her boat the Molly V.[132] The ship sank in October 1898, probably attempting to rescue a small schooner, Olive Jeanette, during a terrible storm.
Still missing are the last two warships to sink in the Great Lakes, the French minesweepers Inkerman and Cerisoles, which vanished in Lake Superior during a blizzard in 1918. Seventy-eight lives were lost, making it the largest loss of life on Lake Superior and the greatest unexplained loss of life on the Great Lakes.
## Legislation
Various national, state, provincial, and municipal jurisdictions govern the Great Lakes
In 1872, a treaty gave access to the St. Lawrence River to the United States, and access to Lake Michigan to the Dominion of Canada.[133] The International Joint Commission was established in 1909 to help prevent and resolve disputes relating to the use and quality of boundary waters, and to advise Canada and the United States on questions related to water resources. Diversion of lake water is a concern to both Americans and Canadians. Some water is diverted through the Chicago River to operate the Illinois Waterway, but the flow is limited by treaty. Possible schemes for bottled water plants and diversion to dry regions of the continent raise concerns. Under the U.S. Water Resources Development Act,[134] diversion of water from the Great Lakes Basin requires the approval of all eight Great Lakes governors through the Great Lakes Commission, which rarely occurs. International treaties regulate large diversions.
In 1998, the Canadian company Nova Group won approval from the Province of Ontario to withdraw 158,000,000 U.S. gallons (600,000 m³) of Lake Superior water annually to ship by tanker to Asian countries. Public outcry forced the company to abandon the plan before it began. Since that time, the eight Great Lakes Governors and the Premiers of Ontario and Quebec have negotiated the Great Lakes-Saint Lawrence River Basin Sustainable Water Resources Agreement[135] and the Great Lakes-St. Lawrence River Basin Water Resources Compact[136] that would prevent most future diversion proposals and all long-distance ones. The agreements strengthen protection against abusive water withdrawal practices within the Great Lakes basin. On December 13, 2005, the Governors and Premiers signed these two agreements, the first of which is between all ten jurisdictions. It is somewhat more detailed and protective, though its legal strength has not yet been tested in court. The second, the Great Lakes Compact, has been approved by the state legislatures of all eight states that border the Great Lakes as well as the U.S. Congress, and was signed into law by President George W. Bush on October 3, 2008.[137]
The Great Lakes Restoration Initiative, described as "the largest investment in the Great Lakes in two decades",[138] was funded at $475 million in the U.S. federal government's Fiscal Year 2011 budget, and $300 million in the Fiscal Year 2012 budget. Through the program, a coalition of federal agencies is making grants to local and state entities for toxics cleanups, wetlands and coastline restoration projects, and invasive species-related projects.
## References
1. ^ Waples, James T. (2008). "The Laurentian Great Lakes" (PDF). North American Continental Margins: 73–81.
2. ^ "Great Lakes". US Epa.gov. June 28, 2006. Retrieved February 19, 2011.
3. ^ "LUHNA Chapter 6: Historical Landcover Changes in the Great Lakes Region". Biology.usgs.gov. November 20, 2003. Archived from the original on January 11, 2012. Retrieved February 19, 2011.
4. ^ Ghassemi, Fereidoun (2007). Inter-basin water transfer. Cambridge: Cambridge University Press. ISBN 978-0-521-86969-0.
5. ^ a b c "Great Lakes: Basic Information: Physical Facts". United States Environmental Protection Agency (EPA). May 25, 2011. Archived from the original on May 29, 2012. Retrieved November 9, 2011.
6. ^ Williamson, James (2007). The inland seas of North America: and the natural and industrial productions ... John Duff Montreal Hew Ramsay Toronto AH Armour and Co. Retrieved January 5, 2014.
7. ^ "The Top Ten: The Ten Largest Lakes of the World". infoplease.com.
8. ^ Rosenberg, Matt. "Largest Lakes in the World by Area, Volume and Depth". About.com Education.
9. ^ Hough, Jack (1970) [1763]. "Great Lakes". The Encyclopædia Britannica. 10 (Commemorative Edition for Expo'70 ed.). Chicago: William Benton. p. 774. ISBN 978-0-85229-135-1.
10. ^ "Large Lakes of the World". factmonster.com.
11. ^ a b Cordell, Linda S.; Lightfoot, Kent; McManamon, Francis; Milner, George (2008). Archaeology in America: An Encyclopedia: An Encyclopedia. ABC-CLIO. p. 1. ISBN 978-0-313-02189-3.
12. ^ Great Lakes. America 2050. Retrieved on December 7, 2016.
13. ^ a b Tom Bennett (1999). State of the Great Lakes: 1997 Annual Report. Diane Publishing. p. 1991. ISBN 978-0-7881-4358-8.
14. ^ Likens, Gene E. (2010). Lake Ecosystem Ecology: A Global Perspective. Academic Press. p. 326. ISBN 978-0-12-382003-7.
15. Grady, Wayne (2007). The Great Lakes. Vancouver: Greystone Books and David Suzuki Foundation. pp. 13, 21–26, 42–43. ISBN 978-1-55365-197-0.
16. ^ a b "Great Lakes Atlas: Factsheet #1". United States Environmental Protection Agency. March 9, 2006. Retrieved December 3, 2007.
17. ^ a b "Great Lakes Map". Michigan Department of Environmental Quality. Archived from the original on November 14, 2011. Retrieved November 27, 2011.
18. ^ See List of cities on the Great Lakes for a complete list.
19. ^ National Geophysical Data Center (1999). Bathymetry of Lake Erie and Lake Saint Clair. National Geophysical Data Center, NOAA. doi:10.7289/V5KS6PHK
20. ^ National Geophysical Data Center (1999). Bathymetry of Lake Huron. National Geophysical Data Center, NOAA. doi:10.7289/V5G15XS5
21. ^ National Geophysical Data Center (1999). Bathymetry of Lake Michigan. National Geophysical Data Center, NOAA. doi:10.7289/V5B85627
22. ^ National Geophysical Data Center (1999). Bathymetry of Lake Ontario. National Geophysical Data Center, NOAA. doi:10.7289/V56H4FBH
23. ^ National Geophysical Data Center (1999). Bathymetry of Lake Superior. National Geophysical Data Center, NOAA.
(The reference is to NGDC generally because bathymetry for this lake was never published; compilation of Great Lakes bathymetry at NGDC has been suspended.)
24. ^ National Geophysical Data Center (1999). Global Land One-kilometer Base Elevation (GLOBE) v. 1. Hastings, D. and P.K. Dunbar. National Geophysical Data Center, NOAA. doi:10.7289/V52R3PMS
25. ^ W. Bruce Bowlus (2010). Iron Ore Transport on the Great Lakes: The Development of a Delivery System to Feed American Industry. McFarland. p. 215. ISBN 978-0-7864-8655-7.
26. ^ "Michigan and Huron: One Lake or Two?" Pearson Education, Inc: Information Please Database, 2007.
27. ^ Wright, John W., ed. (2006). The New York Times Almanac (2007 ed.). New York: Penguin Books. p. 64. ISBN 978-0-14-303820-7.
28. ^ Home. Peninsula Township. Retrieved on December 7, 2016.
30. ^ Background Geology of the North Bay area. Archived July 24, 2010, at the Wayback Machine Retrieved on September 24, 2007
31. ^ Lake St. Clair summary report Archived April 16, 2016, at the Wayback Machine. Great Lakes.net. Retrieved on December 2, 2007.
32. ^ "Chapter 1:Introduction to Lake St. Clair and the St. Clair River". U.S. government U.S. Army. June 2004. Archived from the original on January 10, 2009. Retrieved June 8, 2008.
33. ^ "Movement Would Thrust Greatness on Lake St. Clair", Los Angeles Times, October 20, 2002
34. ^ https://www.epa.gov/greatlakes
35. ^ Dunn, Gary A (July 1, 1996). Insects of the Great Lakes Region. University of Michigan Press. p. 3. ISBN 978-0-472-06515-8.
36. ^ Huber, Norman King; Geological Survey (U.S.); United States. National Park Service (1975). The geologic story of Isle Royale National Park. Department of the Interior, Geological Survey: for sale by the Superintendent of Documents, U.S. Government Printing Office. p. 41.
37. ^ Manivanan, R. (January 1, 2008). Water Quality Modeling: Rivers, Streams, and Estuaries. New India Publishing. p. 114. ISBN 978-81-89422-93-6.
38. ^ Robert McCalla (January 1, 1994). Water Transportation in Canada. Formac Publishing Company. pp. 159–162. ISBN 978-0-88780-247-8.
39. ^ Coastal Sediments '07. ASCE Publications. January 1, 2007. p. 2215. ISBN 978-0-7844-7194-4. Retrieved April 16, 2013.
40. ^ United States. Bureau of the Census (1908). Transportation by water. 1906. Govt. Print. Off. p. 220.
41. ^ "The Great Lakes". US EPA. August 20, 2015.
42. ^ "State of the Great Lakes 2009 Highlights (PDF)". Environment Canada and USEPA. pp. 7–8. Retrieved July 7, 2013.
43. ^ Thomas Dietz; David Bidwell (December 1, 2011). Climate Change in the Great Lakes Region: Navigating an Uncertain Future. MSU Press. ISBN 978-1-60917-236-7.
44. ^ Robert Watson; Marufu Zinyowera; Richard Moss (1997). "The Regional Impacts of Climate Change". Intergovernmental Panel on Climate Change. United Nations. Archived from the original on June 9, 2019. Retrieved June 9, 2019. the following lake level declines could occur: Lake Superior −0.2 to −0.5 m, Lakes Michigan and Huron −1.0 to −2.5 m, and Lake Erie −0.9 to −1.9 m
45. ^ Bruce Elliott Johansen (2009). The Encyclopedia of Global Warming Science and Technology. p. 299. ISBN 978-0313377020. Retrieved June 9, 2019. A warming climate for inland lakes (notably the Great Lakes of North America) generally will not raise water levels, as in the oceans, but rather decrease water levels.
46. ^ Dan Kraker (April 23, 2013). "Great Lakes water levels reaching record lows". Minnesota Public Radio. Archived from the original on February 5, 2019. Retrieved June 9, 2019. Scientists at the Oceanic and Atmospheric Administration is studying the interplay between low water levels, shrinking ice cover and warm water temperatures, Gronewold said. They have already concluded that climate change is playing a role in determining Great Lakes water levels. "More recently, evaporation over lakes has steadily been increasing, largely due to increases in water surface temperature," Gronewold said. "That's a climate response
47. ^ Bolsenga, Stanley J.; Herdendorf, Charles E. (1993). Lake Erie and Lake Saint Clair Handbook. Wayne State University Press. p. 67. ISBN 978-0-8143-2470-7.
48. ^ a b jsonline.com: Lakes Michigan, Huron hit record low water level February 5, 2013
49. ^ Lisa Borre (November 20, 2012). "Warming Lakes: Climate Change and Variability Drive Low Water Levels on the Great Lakes". National Geographic. Retrieved June 9, 2019. Low water levels are not the only climate-related trend being observed on the Great Lakes.
50. ^ Aliya Haq. "Climate change is lowering Great Lakes water levels. Should Waukesha be allowed to tap into the Lakes?". NRDC. Archived from the original on February 10, 2019. Retrieved June 9, 2019.
51. ^ "Weekly Great Lakes Water Level Update"; Detroit District, Corps of Engineers, Department of the Army (April 17, 2005)
52. ^ Room, A. (2006). Placenames of the World: Origins And Meanings of the Names for 6,600 Countries, Cities, Territories, Natural Features And Historic Sites. McFarland. p. 150. ISBN 978-0-7864-2248-7.
53. ^ Room, A. (2006). Placenames of the World: Origins And Meanings of the Names for 6,600 Countries, Cities, Territories, Natural Features And Historic Sites. McFarland. p. 171. ISBN 978-0-7864-2248-7.
54. ^ Sioui, Georges E. (1999). Huron-Wendat. Jane Brierley. UBC Press. ISBN 978-0-7748-0715-9. Retrieved March 12, 2009.
55. ^ Fonger, Ron (May 3, 2007). "Genesee, Oakland counties adopt historic name for water group". The Flint Journal. Retrieved December 6, 2011.
56. ^ Weiland, Matt; Wilsey, Sean (October 19, 2010). State by State. HarperCollins. p. 226. ISBN 978-0-06-204357-3.
57. ^ Ylvisaker, Anne (2004). Lake Ontario. Capstone. p. 12. ISBN 978-0-7368-2211-4.
58. ^ Cayton, Andrew R.L.; Sisson, Richard; Zacher, Chris (November 8, 2006). The American Midwest: An Interpretive Encyclopedia. Indiana University Press. p. 161. ISBN 978-0-253-00349-2.
59. ^ Taylor, William W.; Schechter, Michael G.; Wolfson, Lois G. (2007). Globalization: Effects on Fisheries Resources. Cambridge University Press. p. 85. ISBN 978-1-139-46834-3.
60. ^ "Shorelines of the Great Lakes". Michigan Department of Environmental Quality. Archived from the original on July 14, 2014. Retrieved July 8, 2014.
61. ^ Van Schmus, W.R.; Hinze, W. J. (May 1985). "The Midcontinent Rift System" (PDF). Annual Review of Earth and Planetary Sciences. 13 (1): 345–83. Bibcode:1985AREPS..13..345V. doi:10.1146/annurev.ea.13.050185.002021. hdl:1808/104. Retrieved October 6, 2008.
62. ^ Larson, Grahame; Schaetzl, R. (2001). "Origin and evolution of the Great Lakes" (PDF). Journal of Great Lakes Research. 27 (4): 518–546. doi:10.1016/S0380-1330(01)70665-X. Archived from the original (PDF) on October 31, 2008. Retrieved March 4, 2009.
63. ^ "Lake levels report weighs Great Lakes basin's glacial legacy". Great Lakes Echo. June 8, 2009. Retrieved February 19, 2011.
64. ^ Fahnestock, R. K.; Crowley, D. J.; Wilson, M.; Schneider, H. (1973). "Ice Volcanoes of the Lake Erie Shore Near Dunkirk, New York, U.S.A." (PDF). Journal of Glaciology. 12 (64): 93–99. doi:10.3189/S0022143000022735. Retrieved May 25, 2018.
65. ^ a b "Natural Processes in the Great Lakes". The Great Lakes: An Environmental Atlas and Resource Book. Environmental Protection Agency. July 24, 2008. Retrieved November 27, 2011.
66. ^ "Great Lakes Water Levels Sensitive To Climate Change". Science Daily. January 14, 2009. Retrieved April 14, 2010.
67. ^ "Glossary". NOAA's National Weather Service.
68. ^ U.S. Fish and Wildlife Service. "Great Lakes Lake Sturgeon Web Site". fws.gov.
69. ^ Beeton, Alfred. "Great Lakes". Encyclopædia Britannica. Retrieved January 31, 2016.
70. ^ a b Anon (1972). The Great Lakes: An Environmental Atlas and Resource Book. Bi-national (U.S. and Canadian) resource book.
71. ^ Margaret Beattie Bogue (2001). Fishing the Great Lakes: An Environmental History, 1783–1933. Univ of Wisconsin Press. p. 180. ISBN 978-0-299-16763-9.
72. ^ Atlantic States Marine Fisheries Commission. Special report ... of the Atlantic States Marine Fisheries Commission. The Commission. p. 23.
73. ^ a b Macdonald, David; Service, Katrina, eds. (2009). Key Topics in Conservation Biology. John Wiley & Sons. p. 188. ISBN 978-1-4443-0906-5.
74. ^ a b c d The lake left me. It's gone., JS Online, August 13, 2011
75. ^ U.S. Environmental Protection Agency (1998). EPA, Great Minds?, Great Lakes!, Lake Guardian, Don't Miss The Boat With Environmental Education, March 1997. s.n. p. 7.
76. ^ "New EPA rules to target invasive species; Invaders have plagued Great Lakes for years". The Blade. ProQuest 380761083.
77. ^ a b "Our Threatened Great Lakes". Inland Seas Education Association. Archived from the original on April 3, 2013. Retrieved November 30, 2007.
78. ^ "Great Lakes Aquatic Nuisance Species". Great Lakes Commission. March 27, 2007. Retrieved November 30, 2007.
79. ^ Smith, Paul (February 24, 2009). "Gobies up, alewives down in Lake Michigan". Journal Sentinel. Retrieved August 6, 2010.
80. ^ "Predicting Invasive Species in the Great Lakes". U.S. Environmental Protection Agency. Retrieved August 6, 2010.
81. ^ Glassner-Shwayder, Katherine (July 2000). "Briefing Paper: Great Lakes Nonindigenous Invasive Species" (PDF). Great Lakes Nonindigenous Invasive Species Workshop. Archived from the original (PDF) on December 27, 2005. Retrieved August 6, 2010.
82. ^ "Asian Carp Risk Assessment for Canada by Fisheries and Oceans Canada" (PDF). CSAS. Retrieved August 6, 2010.
83. ^ "Petromyzon marinus Linnaeus 1758". USGS. Retrieved August 6, 2010.
84. ^ Riley, S.C.; Roseman, Edward F.; Nichols, S. Jerrine; O'Brien, Timothy P.; Kiley, Courtney S.; Schaeffer, Jeffrey S. (2008). "Deepwater demersal fish community collapse in Lake Huron" (PDF). Transactions of the American Fisheries Society. 137 (6): 1879–90. doi:10.1577/T07-141.1. Archived from the original (PDF) on June 3, 2013.
85. ^ Barbiero, R. P.; Barbiero, Richard P.; Balcer, Mary; Rockwell, David C.; Tuchman, Marc L. (2009). "Recent shifts in the crustacean zooplankton community of Lake Huron". Canadian Journal of Fisheries and Aquatic Sciences. 66 (5): 816–828. doi:10.1139/F09-036.
86. ^ Briscoe, Tony (July 5, 2019). "Minuscule microbes wield enormous power over the Great Lakes. But many species remain a mystery". Chicago Tribune. Retrieved July 5, 2019.
87. ^ Dempsey, Dave (2004). On the Brink: The Great Lakes in the 21st Century. Michigan State University Press. p. 48. ISBN 978-0-87013-705-1.
88. ^ "Evolution of the Great Lakes Water Quality Agreement", Paul Muldoon and Lee Botts, Michigan State University Press, 2005
89. ^ Recovery of Lake Erie Walleye a Success Story. Department of Natural Resources State of Michigan, U.S. (June 8, 2006)
90. ^ "Our Great Lakes" (PDF). binational.net. Archived from the original (PDF) on December 27, 2005.
91. ^ Milestone in Waukegan Harbor PCB Cleanup. Illinois Environmental Protection Agency, U.S. (Spring 1997)
92. ^ Knud-Hansen, Chris (February 1994). Historical Perspective of the Phosphate Detergent Conflict Archived May 28, 2010, at the Wayback Machine. Working Paper 94-54. Colorado.edu. Retrieved on December 7, 2016.
93. ^ https://www.weather.gov/cle/LakeErieHAB, Lake Erie Harmful Algal Bloom (HAB)
94. ^ Spring Rain, Then Foul Algae in Ailing Lake Erie March 14, 2013 New York Times
95. ^ a b https://windsorstar.com/news/local-news/large-lake-erie-algal-bloom-nearing-colchester-tested-for-toxicity Archived August 11, 2019, at the Wayback Machine, Large Lake Erie algal bloom nearing Colchester tested for toxicity
96. ^ http://www.uwindsor.ca/dailynews/2019-08-07/uwindsor-researchers-test-waters-harmful-algae-bloom Archived August 12, 2019, at the Wayback Machine, UWindsor researchers test the waters for harmful algae bloom
97. ^ "Mercury Spills". Idph.state.il.us. Retrieved February 19, 2011.
98. ^ "Lake Erie Water Quality Past Present and Future" (PDF). Retrieved December 4, 2013.
99. ^ a b
100. ^ New Report: Solving Region's Sewage Crisis Will Create Jobs, Restore Great Lakes. Healthylakes.org (August 9, 2010). Retrieved on December 7, 2016.
101. ^ "Great Lakes". GLC.
102. ^ Williams, Kurt (February 13, 2019). "Monitoring algal blooms in the Great Lakes Basin". Great Lakes Echo.
103. ^ Burkhardt, Steffen; Amoroso, Gabi; Riebesell, Ulf; Sültemeyer, Dieter (2001). "CO2 and HCO3− uptake in marine diatoms acclimated to different CO2 concentrations". Limnology and Oceanography. 46 (6). doi:10.4319/lo.2001.46.6.1378.
104. ^ Popp, Brian N.; Laws, Edward A.; Bidigare, Robert R.; Dore, John E.; et al. (1998). "Effect of Phytoplankton Cell Geometry on Carbon Isotopic Fractionation". Geochimica et Cosmochimica Acta. 62: 69–77.
105. ^ Durbin, E.G. (1977). "Studies on the Autecology of the Marine Diatom Thalassiosira nordenskioeldii. II. The Influence of Cell Size on Growth Rate, and Carbon, Nitrogen, Chlorophyll a and Silica Content". Journal of Phycology. 13: 150–155.
106. ^ O'Shea, John; Meadows, Guy (June 23, 2009). "Evidence for early hunters beneath the Great Lakes". Proceedings of the National Academy of Sciences. 106(25): 10120–10123. The earliest human occupation in the upper Great Lakes is associated with the regional fluted-point Paleoindian tradition, which conventionally ends with the drop in water level to the Lake Stanley stage
107. ^ "Ancient Land and First Peoples". Wisconsin Historical Society. Retrieved February 13, 2020.
108. ^ Woodford, Arthur M. (1991). Charting the Inland Seas: A History of the U.S. Lake Survey. Wayne State University Press. p. 4. ISBN 978-0-8143-2499-8.
109. ^ Bernstein, Peter L. (2010). Wedding of the Waters: The Erie Canal and the Making of a Great Nation. W.W. Norton. p. 349. ISBN 978-0-393-32795-3.
110. ^ Danzer, Gerald A. (2011). Illinois: A History in Pictures. University of Illinois Press. p. 90. ISBN 978-0-252-03288-2.
111. ^ Wharton, George. "Great Lakes Fleet Page Vessel Feature – Burns Harbor". Boatnerd. Retrieved August 6, 2010.
112. ^ Gonzalez, Therese (2008). Great Lakes Naval Training Station. Arcadia Publishing. p. 71. ISBN 978-0-7385-5193-7.
113. ^ Lake Champlain, The Sixth Great Lake? – Geography – 03/02/98. Geography.about.com (March 6, 1998). Retrieved on July 12, 2013.
114. ^ Seelye, Katharine Q. (March 25, 1998). "Lakes Are Born Great, 5 Sniff, So Upstart Is Ousted". The New York Times. Retrieved November 14, 2013.
115. ^ Julie Bosman. "G+M: "Creeping up on unsuspecting shores: The Great Lakes" 28 Jun 2014". Theglobeandmail.com. Retrieved June 29, 2014.
116. ^ "Great Lakes low water levels could cost \$19B by 2050". cbc.ca. Retrieved June 29, 2014.
117. ^ a b Bogue, Magaret Beattie (2000). Fishing the Great Lakes: An Environmental History, 1783-1933. The University of Wisconsin Press. pp. 29–31.
118. ^ a b "Chapter 4: The Watery Boundary". United Divide: A Linear Portrait of the USA/Canada Border. The Center for Land Use Interpretation. Winter 2015.
119. ^ "Great Lake Seaway Cargoes – American Great Lakes Ports Association". www.greatlakesports.org.
120. ^ Grover, Velma I.; Krantzberg, Gail (2012). Great Lakes: Lessons in Participatory Governance. CRC Press. p. 334. ISBN 978-1-57808-769-3.
121. ^ "Great Lakes Circle Tour". Great-lakes.net. July 5, 2005. Archived from the original on July 25, 2010. Retrieved February 19, 2011.
122. ^ Thompson, Mark L. (1991). Steamboats & Sailors of the Great Lakes. Wayne State University Press. p. 210. ISBN 978-0-8143-2359-5.
123. ^ Strand, Kathryn Koutsky; Koutsky, Linda (2006). Minnesota Vacation Days: An Illustrated History. Minnesota Historical Society. p. 34. ISBN 978-0-87351-526-9.
124. ^ Toast of the Town: The Life and Times of Sunnie Wilson. Wayne State University Press. 2005. p. 30. ISBN 978-0-8143-2696-1.
125. ^ "MS Chi-Cheemaun About Us". Ontario Ferries. Archived from the original on November 29, 2014. Retrieved June 29, 2014.
126. ^ Stonehouse, Frederick (1985, 1998). Lake Superior's Shipwreck Coast, p. 267, Avery Color Studios, Gwinn, MI ISBN 0-932212-43-3
127. ^ Matile, Roger (April 11, 2004) "Has a famed Great Lakes mystery been solved?" Archived January 1, 2016, at the Wayback Machine Ledger-Sentinel, Oswego, Illinois.
128. ^ France claims historic Great Lakes wreck, Randy Boswell, Canwest News Service, February 17, 2009.
129. ^ Explorer says Griffin shipwreck may be found, Associated Press, June 24, 2014.
130. ^ "Century-old shipwreck discovered". Associated Press. September 10, 2007. Retrieved December 3, 2007.
131. ^ "Divers find 1780 British warship". BBC News. June 14, 2008. Retrieved June 15, 2008.
132. ^ "L.R. Doty, ship that sank in Lake Michigan 112 years ago, found largely intact near Milwaukee". Star Tribune. Minneapolis-St. Paul, Minnesota. June 24, 2010. Archived from the original on June 27, 2010. Retrieved June 28, 2010.
133. ^ Bowlus, W. Bruce (2010). Iron Ore Transport on the Great Lakes: The Development of a Delivery System to Feed American Industry. McFarland. p. 227, n.35. ISBN 978-0-7864-8655-7.
134. ^ "Federal Statute on Great Lakes. Water Diversions. Water Resources Development Act". Archived from the original on October 29, 2007. Retrieved October 29, 2007.CS1 maint: BOT: original-url status unknown (link). dnr.state.oh.us
135. ^ "Great Lakes—St" (PDF). Retrieved February 19, 2011.
136. ^ Agreement. Great Lakes-St Lawrence River Basin Water Resources. cglg.org. December 13, 2005
137. ^ Back to Water Conservation. www.greatlakes.org
138. ^ "Great Lakes Restoration Initiative home page". Archived from the original on February 6, 2016. | 2020-02-23 13:33:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29174870252609253, "perplexity": 13144.99728865119}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145774.75/warc/CC-MAIN-20200223123852-20200223153852-00261.warc.gz"} |
https://guides.peruzal.com/xamarin-android/persistence/ | # Persistence in Android
## Shared Preferences
You can use shared preferences to save arbitrary data on the device.
**Saving sensitive information:** Do not save sensitive information in shared preferences, since they are stored as plain text files on the device.
To save preferences you use the ISharedPreferences interface.
### Get the default shared preferences
The default shared preferences are saved in a file that's prefixed with your app's package name. To get the default shared preferences, use the PreferenceManager class as follows:

```csharp
var sharedPreferences = PreferenceManager.GetDefaultSharedPreferences(this);
```

Then you can use the ISharedPreferencesEditor to add the preferences as follows:

```csharp
var editor = sharedPreferences.Edit();
```

You can now use the editor to put the preferences using a key/value pair as follows:

```csharp
editor.PutString("NICKNAME", "joseph");
```

You should call Commit() or Apply() when done to apply the change. Apply() commits the changes asynchronously.
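Putting the pieces together, a minimal save routine might look like this (keys and values are illustrative):

```csharp
var prefs = PreferenceManager.GetDefaultSharedPreferences(this);
var editor = prefs.Edit();
editor.PutString("NICKNAME", "joseph");  // string value
editor.PutInt("LOGIN_COUNT", 3);         // int value
editor.PutBoolean("REMEMBER_ME", true);  // bool value
editor.Apply();                          // writes asynchronously; use Commit() for a synchronous write
```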
### Creating a shared preference file
You can also create a named shared preferences file instead of using the default one. Use the GetSharedPreferences() method, supplying the name of the preferences file and the mode to create it with:

```csharp
var namesSharedPrefs = GetSharedPreferences("colors", FileCreationMode.Private);
```
You can perform the same operations as with the default shared preferences.
### Restoring shared preferences
To read preferences back, first get the shared preferences object and then use the various Get methods to retrieve the values:

```csharp
var nickname = sharedPreferences.GetString("NICKNAME", "defaultValue");
```

You need to specify a default value, which is returned if the preference is not found.
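Matching Get methods exist for the other primitive types; each takes a key and a default. A short sketch with illustrative keys:

```csharp
var loginCount = sharedPreferences.GetInt("LOGIN_COUNT", 0);
var rememberMe = sharedPreferences.GetBoolean("REMEMBER_ME", false);
var lastSync   = sharedPreferences.GetLong("LAST_SYNC", 0L);
```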
### Clearing shared preferences
You can clear all the preferences by calling the Clear() method on the ISharedPreferencesEditor of the shared preferences as follows:

```csharp
sharedPreferences.Edit().Clear().Apply();
```

We also called Apply() to commit the changes.
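To delete a single preference rather than all of them, you can use Remove() on the editor (illustrative key):

```csharp
sharedPreferences.Edit().Remove("NICKNAME").Apply();
```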
### Listening for preference changes
You can register a listener to be notified when a preference changes, using the RegisterOnSharedPreferenceChangeListener() method on the shared preferences. You will need to implement the ISharedPreferencesOnSharedPreferenceChangeListener interface:

```csharp
PreferenceManager.GetDefaultSharedPreferences(this).RegisterOnSharedPreferenceChangeListener(this);
```

and then implement the interface on the activity:

```csharp
public class LoginActivity : AppCompatActivity, ISharedPreferencesOnSharedPreferenceChangeListener
{
    public void OnSharedPreferenceChanged(ISharedPreferences sharedPreferences, string key)
    {
        ...
    }
}
```
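When the activity goes away, unregister the listener so it does not leak. A minimal sketch, assuming the listener was registered in OnCreate or OnResume:

```csharp
protected override void OnPause()
{
    base.OnPause();
    PreferenceManager.GetDefaultSharedPreferences(this)
        .UnregisterOnSharedPreferenceChangeListener(this);
}
```

Note that Android holds these listeners weakly, so keep a strong reference to any listener you register, or it may be garbage collected silently.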
## Adding a Settings Screen Using PreferenceFragmentCompat
There are several ways to add a settings/preference screen to your app:
1. Use the PreferenceActivity
2. Use the PreferenceFragment
3. Use the PreferenceFragmentCompat
## Adding an xml Resource File
For all of these methods you will need an xml resource file added to the Resources folder under the xml directory.
1. Create the xml folder under the Resources directory if it does not yet exist.
2. Create a new xml resource file in the just-created xml resource directory, or use an existing one.
3. Add the preferences. The preference elements have names equivalent to their views with an additional Preference suffix, e.g. for an EditText preference you use EditTextPreference. Also, preferences use keys instead of ids, since they are stored as key/value pairs.
Resources/xml/prefs.xml

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Illustrative example; keys and titles are placeholders. -->
<PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android">
    <PreferenceCategory android:title="Account">
        <EditTextPreference
            android:key="username"
            android:title="Username" />
        <CheckBoxPreference
            android:key="sync_enabled"
            android:title="Enable sync"
            android:defaultValue="true" />
    </PreferenceCategory>
    <PreferenceCategory android:title="Notifications">
        <RingtonePreference
            android:key="notification_ringtone"
            android:title="Notification ringtone"
            android:ringtoneType="notification"
            android:defaultValue="content://settings/system/notification_sound" />
    </PreferenceCategory>
</PreferenceScreen>
```
The settings UI is defined inside the PreferenceScreen tags:

```xml
<PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android">
    ...
</PreferenceScreen>
```
We add categories to the settings UI by using the PreferenceCategory element:

```xml
<PreferenceCategory android:title="Account">
    ...
</PreferenceCategory>
```

Individual preferences are then nested inside a category, for example an EditTextPreference:

```xml
<EditTextPreference
    android:key="username"
    android:title="Username" />
```
The ringtone preference loads the phone's ringtone settings into your settings screen and lets the user pick a ringtone. You can also set a default ringtone:

```xml
<RingtonePreference
    android:key="notification_ringtone"
    android:title="Notification ringtone"
    android:ringtoneType="notification"
    android:defaultValue="content://settings/system/notification_sound" />
```
## Adding a Settings Screen Using PreferenceActivity
The easiest way to add a settings screen is to create an activity that inherits from the PreferenceActivity class. We will use the xml resource defined above.
1. Create a new Activity
2. Inherit from PreferenceActivity instead of Activity or AppCompatActivity
3. In the OnCreate method, instead of calling SetContentView, call the AddPreferencesFromResource method. Note that this method is deprecated, so for new code you will want one of the fragment-based methods below.
PrefsActivity.cs
```csharp
[Activity(Label = "PrefsActivity")]
public class PrefsActivity : PreferenceActivity
{
    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);
        AddPreferencesFromResource(Resource.Xml.prefs);
    }
}
```
You will need to define a menu so you can use it to open the settings screen.
1. Add the menu folder in the Resources folder if one does not yet exist.
2. Create Resources/menu/menu.xml with a Settings item (illustrative attributes; the id is referenced below):

```xml
<menu xmlns:android="http://schemas.android.com/apk/res/android"
      xmlns:app="http://schemas.android.com/apk/res-auto">
    <item
        android:id="@+id/action_menu"
        android:title="Settings"
        app:showAsAction="never" />
</menu>
```
We will use the id defined in the menu to find which menu option has been selected.
Modify the activity from which you want to access the settings, and create the menu.
MainActivity.cs
```csharp
// Create the options menu
public override bool OnCreateOptionsMenu(Android.Views.IMenu menu)
{
    MenuInflater.Inflate(Resource.Menu.menu, menu);
    return base.OnCreateOptionsMenu(menu);
}
```
Handle the menu selection as follows:

```csharp
// Handle selection of the options menu
public override bool OnOptionsItemSelected(Android.Views.IMenuItem item)
{
    var id = item.ItemId;
    if (id == Resource.Id.action_menu)
    {
        StartActivity(new Android.Content.Intent(this, typeof(PrefsActivity)));
    }
    return true;
}
```
Now when you run the app you should see an options item called Settings. When you click it, the settings screen should load.
## Adding a Settings Screen Using a PreferenceFragment
Use the following steps to create a settings screen using the PreferenceFragment:
1. Create a class that derives from PreferenceFragment.
2. In the OnCreate method, use the AddPreferencesFromResource method to inflate the preference xml file.

PrefsFragment.cs
```csharp
public class PrefsFragment : PreferenceFragment
{
    public override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);
        AddPreferencesFromResource(Resource.Xml.prefs);
    }
}
```
3. In the activity where you would like to show the preferences, replace an existing container, usually a FrameLayout, with the PrefsFragment.

Resources/layout/activity_prefs.axml (illustrative; the FrameLayout id is what matters):

```xml
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/content"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```
PrefsActivity.cs
Use the FragmentManager to replace the FrameLayout with the PrefsFragment
```csharp
FragmentManager
    .BeginTransaction()
    .Replace(Resource.Id.content, new PrefsFragment())
    .Commit();
```
PrefsActivity.cs
```csharp
using Android.App;
using Android.OS;
using Android.Support.V4.App;
using Android.Support.V7.App;
using Android.Views;

namespace sharedPreferenceDemo
{
    [Activity(Label = "PrefsActivity", Theme = "@style/AppTheme", ParentActivity = typeof(MainActivity))]
    [MetaData("android.support.PARENT_ACTIVITY", Value = "md51c3958e33f8e72dae9076079df527ba2.MainActivity")]
    public class PrefsActivity : AppCompatActivity
    {
        protected override void OnCreate(Bundle savedInstanceState)
        {
            base.OnCreate(savedInstanceState);
            SetContentView(Resource.Layout.activity_prefs);

            FragmentManager
                .BeginTransaction()
                .Replace(Resource.Id.content, new PrefsFragment())
                .Commit();

            if (SupportActionBar != null)
            {
                SupportActionBar.SetDisplayHomeAsUpEnabled(true);
            }
        }

        public override bool OnOptionsItemSelected(IMenuItem item)
        {
            if (item.ItemId == Android.Resource.Id.Home)
            {
                NavUtils.NavigateUpFromSameTask(this);
            }
            return base.OnOptionsItemSelected(item);
        }
    }
}
```
In the OnCreate method we check whether we have an ActionBar and enable the up arrow:

```csharp
if (SupportActionBar != null)
{
    SupportActionBar.SetDisplayHomeAsUpEnabled(true);
}
```
Then we handle the clicking of the up arrow in the OnOptionsItemSelected method:

```csharp
public override bool OnOptionsItemSelected(IMenuItem item)
{
    if (item.ItemId == Android.Resource.Id.Home)
    {
        NavUtils.NavigateUpFromSameTask(this);
    }
    return base.OnOptionsItemSelected(item);
}
```
The parent activity is declared in the Activity attribute:

```csharp
[Activity(Label = "PrefsActivity", Theme = "@style/AppTheme", ParentActivity = typeof(MainActivity))]
```

For older Android versions it is also declared through the support-library metadata:

```csharp
[MetaData("android.support.PARENT_ACTIVITY", Value = "md51c3958e33f8e72dae9076079df527ba2.MainActivity")]
```
You can find the MD5 prefix of the activity, md51c3958e33f8e72dae9076079df527ba2, by checking the generated AndroidManifest.xml file in the obj folder.
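Values picked in the settings screen end up in the default shared preferences, so any activity can read them back. A sketch, assuming the illustrative keys from the prefs.xml above:

```csharp
var prefs = PreferenceManager.GetDefaultSharedPreferences(this);
var username    = prefs.GetString("username", "");
var syncEnabled = prefs.GetBoolean("sync_enabled", true);
```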
http://observations.rene-grothmann.de/solving-partition-problems-with-linear-programming/ | # Solving Partition Problems with Linear Programming
Yesterday I promised to show how to solve partition problems with integer linear programming. Let us take yesterday's problem: we want to split the numbers from 1 to 30 into 10 triplets (x,y,z) such that x+y+z=0 modulo 31. We will use linear problems of the following type.
$$Ax=b, \quad x_i \in \{0,1\}, \quad c^T x \to \text{max.}$$
There will be one variable in x for each possible triplet. The value of this variable determines if the triplet is in the selection (1) or not (0). So the j-th column of A refers to triplet number j. The i-th row of A is a linear constraint which makes sure that the number i is used in only one triplet. A contains only 0-1 values, and
$$a_{i,j} = 1 \quad\Leftrightarrow\quad i \in T_j = \{n_{1,j},n_{2,j},n_{3,j}\}.$$
The constraints will be
$$\sum_j a_{i,j} x_j = 1.$$
We just need any feasible point and could use any objective function. Alternatively, we could use "less than or equal" in the previous line and maximize the number of triplets used.
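Written out, that relaxed variant reads

$$\sum_j a_{i,j}\, x_j \le 1 \quad \text{for all } i, \qquad \sum_j x_j \to \max.$$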
How do we do that in EMT? We first store all possible triplets in one matrix.
```
>function makeT (n:index=31) ...
$  T=[];
$  for i=1 to n-2;
$    for j=i+1 to n-1;
$      k=n-mod(i+j,n);
$      if k!=i and k!=j then T=T_[i,j,k]; endif;
$    end;
$  end;
$  return T
$endfunction
>makeT(7)
   1   2   4
   1   4   2
   1   6   7
   2   4   1
   2   5   7
   3   4   7
   3   5   6
   3   6   5
   5   6   3
>T=makeT(31);
```

Then we define the matrix A.

```
>function makeA (T) ...
$  v=sort(unique(flatten(T)));
$  A=zeros(cols(v),rows(T));
$  for i=1 to cols(v);
$    for j=1 to rows(T);
$      if any(v[i]==T[j]) then A[i,j]=1; endif;
$    end;
$  end;
$  return A;
$endfunction
```
```
>makeT(5)
   1   4   5
   2   3   5
>fraction makeA(makeT(5))
   1   0
   0   1
   0   1
   1   0
   1   1
```
```
>A=makeA(T);
>size(A)
 [31, 405]
```
The function makeA() can take any matrix with partitions in its rows. The first command simply collects the numbers that appear in any partition. In our case, v is simply the vector [1, ..., 31] (the value 31 occurs as k whenever i+j is divisible by 31).
Now we need to solve the problem with integer linear programming. We use the LPSOLVE library for this (kindly made available to EMT by its developers).
```
>function solveP (A,T) ...
$  x=ilpsolve(A,ones(rows(A))',ones(cols(A)),
$    vlb=zeros(cols(A)),vub=ones(cols(A)),>max);
$  return T[nonzeros(x')];
$endfunction
```
```
>solveP(A,T)
    1    5   25
    2    7   22
    3   11   17
    4    6   21
    8    9   14
   10   23   29
   12   24   26
   13   19   30
   15   20   27
   16   18   28
```
The return value of the function ilpsolve() is a 0-1 vector. We print the triplets which are marked by 1 in this vector. The variables vlb and vub are lower and upper bounds for the variables. Interestingly, the solution also works without these restrictions. Nevertheless, more restrictions usually mean shorter computations.
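As a sanity check, every selected triplet should sum to 0 modulo 31. Assuming EMT's sum() returns the row sums of a matrix, something like this should print a row of zeros:

```
>S=solveP(A,T);
>mod(sum(S),31)'
```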
https://www.physicsforums.com/threads/solution-for-these-differential-equations.97416/ | # Solution for these Differential equations
1. Oct 30, 2005
### Reshma
Hi everyone. I'm trying to solve these differential equations but I could not crack a single nut. I seem to have lost my memory on solving differential equations . Please help me refresh it by providing useful hints.
I'm unable to separate the variables in the following. Perhaps I'm missing out on something important.
1] $$\frac{dy}{dx} + 2y = y^2e^{2x}$$
2] $$2y\frac{dy}{dx} + y^2 = \frac{x}{2}e^{-x}$$
3] $$x^2\frac{dy}{dx} - 2xy = \frac{1}{x}$$
2. Oct 30, 2005
### saltydog
The first one is a Riccati equation. Make the change of variables usually done for such equations and see what happens.
3. Oct 30, 2005
### Physics Monkey
For the second one, can you simplify the derivative term? After a well-chosen substitution, the differential equation becomes linear.
For the third one, the equation is linear first order and there is a general method available.
4. Oct 31, 2005
### Reshma
Thank you so much for the help.
Well, I was able to solve the third one!
Bringing the equation in the general form:
$$\frac{dy}{dx} + P(x)y = Q(x)$$
$$\frac{dy}{dx} - \frac{2}{x} y = \frac{1}{x^3}$$
Setting $$y = u(x)v(x)$$
So,
$$\frac{dy}{dx} = u\frac{dv}{dx} + v\frac{du}{dx}$$
$$u\left(\frac{dv}{dx} - \frac{2}{x}v\right) + v\frac{du}{dx} = \frac{1}{x^3} \qquad (1)$$
Solving for v:
$$\frac{dv}{dx} - \frac{2}{x} v = 0$$
On solving:
$$\ln v = \ln x^2$$
$$v = x^2$$
Again on substituting for v in (1):
$$u = -\frac{1}{4x^4} + C$$
General formula:
$$y = v(x)\int \frac{Q(x)}{v(x)}\, dx + C\, v(x)$$
$$y = -\frac{1}{4x^2} + C{x}^2$$
Hope I'm right.
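A quick check by substituting back into the original equation confirms it:

$$\frac{dy}{dx} - \frac{2}{x}y = \left(\frac{1}{2x^3} + 2Cx\right) - \frac{2}{x}\left(-\frac{1}{4x^2} + Cx^2\right) = \frac{1}{2x^3} + \frac{1}{2x^3} = \frac{1}{x^3}$$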
Sorry, I could not find any suitable substitution for the second one. Please help!
Last edited: Oct 31, 2005
5. Oct 31, 2005
### Reshma
Riccati? I haven't studied any differential equation like that. How do you solve such equations?
6. Oct 31, 2005
### saltydog
First place it into standard form:
$$y^{'}+Q(x)y+R(x)y^2=P(x)$$
Now, make the transformation:
$$y=\frac{u^{'}}{Ru}$$
Can you now substitute this into the ODE? I'll start it for you:
$$y^{'}=\frac{Ruu^{''}-u^{'}(Ru^{'}+uR^{'})}{(Ru)^2}$$
right?
Make the other substitutions to get:
$$\frac{Ruu^{''}-u^{'}(Ru^{'}+uR^{'})}{(Ru)^2}+\frac{Qu^{'}}{Ru}+R\left(\frac{u^{'}}{Ru}\right)^2=P$$
Now simplify and obtain a second order in u. Solve, convert back to y, and I want a plot.
Edit: Suppose that last one looks a bit intimidating. That's just the general expression though. For your equation a lot of stuff just drops out leaving a simple second order to solve. Try it.
Last edited: Oct 31, 2005
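For the first equation specifically, $P = 0$, so it is also a Bernoulli equation with $n = 2$; dividing by $y^2$ and substituting $v = 1/y$ gives a shorter route:

$$v' - 2v = -e^{2x} \;\Rightarrow\; \left(v\,e^{-2x}\right)' = -1 \;\Rightarrow\; v = (C - x)e^{2x} \;\Rightarrow\; y = \frac{e^{-2x}}{C - x}$$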
7. Oct 31, 2005
### Physics Monkey
For the second one, focus on the term $$2 y \frac{dy}{dx}$$; can you write this as something more convenient? Hint: notice that the only other y term you have is $$y^2$$.
8. Nov 1, 2005
### Reshma
I had tried it:
Set,
$$u = y^2$$
So,
$$\frac{du}{dx} = 2y\left(\frac{dy}{dx}\right)$$
So the equation becomes,
$$\frac{du}{dx} + u = \frac{x}{2} e^{-x}$$
But I'm still unable to separate the variables. Should I adopt a different method?
9. Nov 1, 2005
### Benny
I don't think you can separate variables. You could try the integrating factor technique since you have a first order linear differential equation.
10. Nov 1, 2005
### saltydog
Reshma, rearrange the equation to:
$$2ydy+y^2dx=\frac{x}{2}e^{-x}dx$$
or:
$$\left(\frac{x}{2}e^{-x}-y^2\right)dx-2ydy=0$$
Now, we can make this exact right? You know, the partial of M with respect to y, partial of N with respect to x, do that arithmetic, get some function of x or y, then e to the integral of that function is the integrating factor right? You know this makes two plots now.
Last edited: Nov 1, 2005
11. Nov 1, 2005
### Physics Monkey
What Salty suggested is a nice way of proceeding, or as Benny said, you now have a linear first order equation in u and a general approach exists as I said before.
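Spelled out, the linear route finishes quickly: with $u = y^2$, the integrating factor of $u' + u = \frac{x}{2}e^{-x}$ is $e^x$, so

$$\left(u\,e^{x}\right)' = \frac{x}{2} \;\Rightarrow\; u\,e^{x} = \frac{x^2}{4} + C \;\Rightarrow\; y^2 = \left(\frac{x^2}{4} + C\right)e^{-x}$$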
12. Nov 2, 2005
### Reshma
Wow, thanks for your help, Saltydog and PhysicsMonkey. I got all the solutions!! | 2016-12-09 09:50:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8469318151473999, "perplexity": 1354.5120203624836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542693.41/warc/CC-MAIN-20161202170902-00123-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://www.projecteuclid.org/euclid.twjm/1500406239 | ## Taiwanese Journal of Mathematics
### On a Class of Nilpotent Distributions
#### Abstract
This paper presents a sufficient condition for two vector fields $X$ and $Y$ to have the squares noncommutative, i.e. $[X^2, Y^2] \not= 0$. We prove that if the vector fields $X$, $Y$ span a nilpotent distribution with nilpotence class 2, then the squares of the vector fields do not commute.
#### Article information
Source
Taiwanese J. Math., Volume 15, Number 2 (2011), 875-881.
Dates
First available in Project Euclid: 18 July 2017
https://projecteuclid.org/euclid.twjm/1500406239
Digital Object Identifier
doi:10.11650/twjm/1500406239
Mathematical Reviews number (MathSciNet)
MR2810186
Zentralblatt MATH identifier
1236.58007
Subjects
Primary: 53C99: None of the above, but in this section
Secondary: 53D99: None of the above, but in this section
#### Citation
Calin, Ovidiu; Chang, Der-Chen. On a Class of Nilpotent Distributions. Taiwanese J. Math. 15 (2011), no. 2, 875--881. doi:10.11650/twjm/1500406239. https://projecteuclid.org/euclid.twjm/1500406239
#### References
• R. Beals, B. Gaveau and P. C. Greiner, On a geometric formula for the fundamental solution of subelliptic Laplacians, Math. Nachr., 181 (1996), 81-163.
• R. Beals, B. Gaveau and P. C. Greiner, Hamilton-Jacobi theory and the heat kernel on Heisenberg groups, J. Math. Pures Appl., 79(7) (2000), 633-689.
• O. Calin and D. C. Chang, Sub-Riemannian Geometry, General Theory and Examples, Encyclopedia of Mathematics and Its Applications, Cambridge University Press, Vol. 126, 2009.
• O. Calin and D. C. Chang, Geometric Mechanics on Riemannian Manifolds: Applications to Partial Differential Equations, Applied and Numerical Analysis. Birhäuser, Boston, 2004.
• B. Gaveau, Systémes Dynamiques Associés a Certains Opérateurs Hypoelliptiques, Bull. Sc. Math., 102 (1978), 203-229.
• L. Hörmander, Hypo-elliptic second order differential equations, Acta Math., 119 (1967), 147-171.
• A. Hulanicki, The distribution of energy in the Brownian motion in the Gausssian field and analytic hypoellipticity of certain subelliptic operators on the Heisenberg group, Studia Mathematica, 56 (1976), 165-173.
• L. S. Schulman, Techniques and Applications of Path Integration, Dover, 1981. | 2020-01-17 16:35:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5969984531402588, "perplexity": 2210.678661712965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250589861.0/warc/CC-MAIN-20200117152059-20200117180059-00548.warc.gz"} |
https://studydaddy.com/question/linear-regression | QUESTION
# linear regression
1. Management has studied work patterns in the housekeeping department and estimates the number of hours to be worked as follows. Hours worked = (1,500 per month) + (0.50 X RVUs).For the coming month, management expects relative value units (RVU) to be 5,800. What should budgeted labor for the month be?
2. Last year, the price for thermometer covers in a pediatrician’s office was $0.05 each. This year, the covers cost$0.06 each. If the office purchased 10,000 thermometer covers this year, what is the price variance?
• @
• 10 orders completed
Tutor has posted answer for $11.00. See answer's preview$11.00 | 2019-04-21 15:07:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2736615538597107, "perplexity": 10812.042445596937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578531984.10/warc/CC-MAIN-20190421140100-20190421162100-00553.warc.gz"} |