http://www.koreascience.or.kr/article/JAKO201823955285190.page

# ON WEIGHTED AND PSEUDO-WEIGHTED SPECTRA OF BOUNDED OPERATORS
• Athmouni, Nassim (Department of Mathematics, Faculty of Sciences of Gafsa, University of Gafsa) ;
• Baloudi, Hatem (Department of Mathematics, Faculty of Sciences of Gafsa, University of Gafsa) ;
• Jeribi, Aref (Department of Mathematics, Faculty of Sciences of Sfax, University of Sfax) ;
• Kacem, Ghazi (Department of Mathematics, Faculty of Sciences of Sfax, University of Sfax)
• Accepted : 2017.12.21
• Published : 2018.07.31
#### Abstract
In the present paper, we extend the main results of Jeribi in [6] to weighted and pseudo-weighted spectra of operators in a nonseparable Hilbert space ${\mathcal{H}}$. We investigate the characterization, the stability and some properties of these weighted and pseudo-weighted spectra.
#### Keywords
Fredholm operators; $\alpha$-Fredholm operator; pseudo-spectrum; weighted spectrum
#### References
1. A. Ammar, B. Boukettaya, and A. Jeribi, A note on the essential pseudospectra and application, Linear Multilinear Algebra 64 (2016), no. 8, 1474-1483. https://doi.org/10.1080/03081087.2015.1091436
2. A. Ammar, H. Daoud, and A. Jeribi, The stability of pseudospectra and essential pseudospectra of linear relations, J. Pseudo-Differ. Oper. Appl. 7 (2016), no. 4, 473-491. https://doi.org/10.1007/s11868-016-0150-3
3. S. C. Arora and P. Dharmarha, On weighted Weyl spectrum. II, Bull. Korean Math. Soc. 43 (2006), no. 4, 715-722. https://doi.org/10.4134/BKMS.2006.43.4.715
4. H. Baloudi and A. Jeribi, Left-right Fredholm and Weyl spectra of the sum of two bounded operators and applications, Mediterr. J. Math. 11 (2014), no. 3, 939-953. https://doi.org/10.1007/s00009-013-0372-z
5. L. Burlando, Approximation by semi-Fredholm and semi-$\alpha$-Fredholm operators in Hilbert spaces of arbitrary dimension, Acta Sci. Math. (Szeged) 65 (1999), no. 1-2, 217-275.
6. L. A. Coburn and A. Lebow, Components of invertible elements in quotient algebras of operators, Trans. Amer. Math. Soc. 130 (1968), 359-365. https://doi.org/10.1090/S0002-9947-1968-0223901-2
7. E. B. Davies, Spectral Theory and Differential Operators, Cambridge Studies in Advanced Mathematics, 42, Cambridge University Press, Cambridge, 1995.
8. S. V. Djordjevic and F. Hernandez-Diaz, $\alpha$-Fredholm spectrum of Hilbert space operators, J. Indian Math. Soc. 83 (2016), no. 3-4, 241-249.
9. S. V. Djordjevic, On $\alpha$-Weyl operators, Advances in Pure Math. 6 (2016), 138-143. https://doi.org/10.4236/apm.2016.63011
10. G. Edgar, J. Ernest, and S. G. Lee, Weighing operator spectra, Indiana Univ. Math. J. 21 (1971/72), 61-80. https://doi.org/10.1512/iumj.1972.21.21005
11. A. Elleuch and A. Jeribi, New description of the structured essential pseudospectra, Indag. Math. (N.S.) 27 (2016), no. 1, 368-382. https://doi.org/10.1016/j.indag.2015.11.002
12. A. Elleuch, On a characterization of the structured Wolf, Schechter and Browder essential pseudospectra, Indag. Math. (N.S.) 27 (2016), no. 1, 212-224. https://doi.org/10.1016/j.indag.2015.10.003
13. I. C. Gohberg, A. S. Markus, and I. A. Feldman, Normally solvable operators and ideals associated with them, Trans. Amer. Math. Soc. 61 (1967), 63-84.
14. K. Gustafson and J. Weidmann, On the essential spectrum, J. Math. Anal. Appl. 25 (1969), 121-127. https://doi.org/10.1016/0022-247X(69)90217-0
15. A. Jeribi, Quelques remarques sur le spectre de Weyl et applications, C. R. Acad. Sci. Paris Ser. I Math. 327 (1998), no. 5, 485-490. https://doi.org/10.1016/S0764-4442(99)80027-5
16. A. Jeribi, Une nouvelle caracterisation du spectre essentiel et application, C. R. Acad. Sci. Paris Ser. I Math. 331 (2000), no. 7, 525-530. https://doi.org/10.1016/S0764-4442(00)01606-2
17. A. Jeribi, Some remarks on the Schechter essential spectrum and applications to transport equations, J. Math. Anal. Appl. 275 (2002), no. 1, 222-237. https://doi.org/10.1016/S0022-247X(02)00323-2
18. A. Jeribi, A characterization of the Schechter essential spectrum on Banach spaces and applications, J. Math. Anal. Appl. 271 (2002), no. 2, 343-358. https://doi.org/10.1016/S0022-247X(02)00115-4
19. A. Jeribi, On the Schechter essential spectrum on Banach spaces and application, Facta Univ. Ser. Math. Inform. No. 17 (2002), 35-55.
20. A. Jeribi, Fredholm operators and essential spectra, Arch. Inequal. Appl. 2 (2004), no. 2-3, 123-140.
21. A. Jeribi, Spectral Theory and Applications of Linear Operators and Block Operator Matrices, Springer, Cham, 2015.
22. E. Luft, The two-sided closed ideals of the algebra of bounded linear operators of a Hilbert space, Czechoslovak Math. J. 18(93) (1968), 595-605.
23. M. Schechter, Principles of Functional Analysis, Academic Press, New York, 1971.
24. L. N. Trefethen, Pseudospectra of matrices, in Numerical analysis 1991 (Dundee, 1991), 234-266, Pitman Res. Notes Math. Ser., 260, Longman Sci. Tech., Harlow, 1991.
25. F. Wolf, On the invariance of the essential spectrum under a change of boundary conditions of partial differential boundary operators, Nederl. Akad. Wetensch. Proc. Ser. A 62 = Indag. Math. 21 (1959), 142-147.
https://tex.stackexchange.com/questions/334086/glossaries-gls-not-working-properly

# Glossaries \Gls not working properly
As I understand from the documentation of the glossaries package, only the first letter should be capitalized by the \Gls command. I don't understand why this command capitalizes ALL letters in my case, see code below. The problem occurs for \Gls as well as for \Glspl.
I also tried commenting out the hyperref package, but it doesn't help. All other questions I found here didn't really match my problem.
```latex
\documentclass{article}
\usepackage{relsize}% assumption: \textsmaller comes from relsize, loaded so the example compiles standalone
\usepackage[urlbordercolor={1 1 1},citebordercolor={1 1 1},linkbordercolor={1 1 1},backref]{hyperref}
\usepackage[toc]{glossaries}

\newglossary*{acronyms}{Acronyms}

\newglossaryentry{lmn}
{
  type=acronyms,
  name={LMN},
  description={local model network},
  first={\glsentrydesc{lmn} (\glsentrytext{lmn})},
  plural={LMNs},
  firstplural={\glsentrydesc{lmn}s (\glsentryplural{lmn})}
}

\begin{document}
\renewcommand{\acronymfont}[1]{\textsmaller{#1}}

\Glspl{lmn}

\glsresetall
\glspl{lmn}

% What is desired
Local model networks (LMNs)

\end{document}
```
Compiling this code on my machine (Mac OS X El Capitan, MacTeX with the latest updates via "TeX Live Utility") yields output in which all letters of the first use are capitalized (screenshot omitted).
This is a known glossaries issue. Referring to page 113 of the manual, I quote:
If the first thing in the link text is a command followed by a group, the upper casing is performed on the first object of the group. For example, if an entry has been defined as
```latex
\newglossaryentry{sample}{
  name={\emph{sample} phrase},
  sort={sample phrase},
  description={an example}}
```
Then \Gls{sample} will set the link text to:
```latex
\emph{\MakeUppercase sample} phrase
```
which will appear as Sample phrase.
In your case \MakeUppercase is taking the whole description. The simplest solution is to manually define the description for the first and firstplural keys:
```latex
\newglossaryentry{lmn}
{
  type=acronyms,
  name={LMN},
  description={local model network},
  first={local model network (\glsentrytext{lmn})},
  plural={LMNs},
  firstplural={local model networks (\glsentryplural{lmn})}
}
```
https://www.beatthegmat.com/if-a-and-b-are-positive-integers-what-is-the-value-of-the-product-ab-t329090.html

## If $$a$$ and $$b$$ are positive integers, what is the value of the product $$ab?$$
by VJesus12 » Wed Jan 12, 2022 10:42 am
If $$a$$ and $$b$$ are positive integers, what is the value of the product $$ab?$$
(1) The least common multiple of $$a$$ and $$b$$ is $$48.$$
(2) The greatest common factor of $$a$$ and $$b$$ is $$4.$$
https://brilliant.org/problems/27-cubes-3-colors/

# 27 Cubes, 3 Colors
Discrete Mathematics Level pending
You have 27 cubes of the following colors:
• 9 orange
• 9 green
• 9 blue
How many ways can you put them together to form a 3×3×3 cube so that no row in any of the three Cartesian directions ($$x, y$$, and $$z$$) contains the same color more than once?
Assume that all the cubes of one color are indistinguishable, so if you swap two it counts as the same arrangement.
Also, assume that the edges of the 3x3x3 cube are aligned with the x, y and z axes.
Image credit: http://www.paintingandvino.com/
http://mathhelpforum.com/advanced-algebra/116290-simple-question-r-module-homomorphism.html

# Thread: simple question on R module homomorphism
1. ## simple question on R module homomorphism
Let f: M ---> N be an R-module homomorphism.
P and Q are submodules of M and N respectively.
Show that:
(i) f(P) = {f(p) : p belongs to P} is a submodule of N
(ii) f^(-1)(Q) = {m in M : f(m) belongs to Q} is a submodule of M
(iii) What are f(M) and f^(-1)(0)?
I have already proved part (i) and (ii)
My problem is part (iii). I don't know how to do it; it looks easy, but I think it is a trick question. So I have to bother you to help me on part (iii).
2. $f(M)$ is the image of f. $f^{-1}\{0\}$ is the kernel of f.
3. Yeah, I think you are just supposed to note that those are two very special cases of what you proved in (i) and (ii). If f(M) = N then f is surjective (onto), and if $f^{-1}(0)=\{0\}$ then f is injective (one to one).
http://codeforces.com/blog/entry/46031

### DarthKnight's blog
By DarthKnight, history, 4 years ago
Here is a git repository with solutions to the problems of this contest.
### Div.2 A
You should check two cases for YES:
1. x mod s = t mod s and t ≤ x
2. x mod s = (t + 1) mod s and t + 1 < x
Time Complexity: $O(1)$.
Codes
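For concreteness, a hedged Python sketch of the two checks above (the input format, three integers t, s, x on one line, is my assumption):

```python
# Sketch of Div.2 A: the pineapple barks at t, then at t + k*s and
# t + k*s + 1 for every k >= 1; does it bark at time x?
t, s, x = map(int, input().split())

barks = (x % s == t % s and t <= x) or \
        (x % s == (t + 1) % s and t + 1 < x)

print("YES" if barks else "NO")
```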
### Div.2 B
Nothing special, right? Just find the positions of the letters . and e with string searching methods (like .find) and do the rest.
Time Complexity: $O(n)$, where $n$ is the length of the input.
Codes
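As an illustration, here is a hedged Python sketch of that parsing approach; the input is assumed to be a single token of the form a.deb, and Python's arbitrary-precision strings and ints sidestep overflow concerns:

```python
# Sketch of Div.2 B: print a.d * 10^b without the exponent,
# with no leading zeroes and no trailing zeroes after the point.
def barnicle(s: str) -> str:
    mantissa, exp = s.split('e')
    whole, frac = mantissa.split('.')
    b = int(exp)
    digits = whole + frac
    point = len(whole) + b                  # digits that end up before the point
    if point >= len(digits):
        int_part = digits + '0' * (point - len(digits))
        frac_part = ''
    else:
        int_part, frac_part = digits[:point], digits[point:]
    int_part = int_part.lstrip('0') or '0'  # strip leading zeroes, keep a lone 0
    frac_part = frac_part.rstrip('0')       # strip trailing zeroes
    return int_part + ('.' + frac_part if frac_part else '')

print(barnicle(input().strip()))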
### A
Do what the problem wants from you. The only thing is to find the path between the two vertices (or their LCA) in the tree. You can do this in $O(\log \max(u, v))$ steps, since the height of the tree is logarithmic: the parent of vertex $v$ is $\lfloor v/2 \rfloor$, so a vertex with label up to $10^{18}$ is at depth at most about 60. You can keep edge weights in a map and get/set the value whenever you want. Here's a code for LCA:
```
LCA(v, u):
    while v != u:
        if depth[v] < depth[u]:
            swap(v, u)
        v = v / 2    // v/2 (integer division) is the parent of vertex v
```
Time Complexity: $O(q \log q \cdot \log \max(u, v))$: each path has $O(\log u)$ edges and each map access costs $O(\log q)$.
Codes
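A compact Python sketch of the whole solution, with a dict playing the role of the map and the LCA walk from the pseudocode above (variable names and I/O handling are my own):

```python
# Sketch of problem A: edge (v, v//2) is keyed by its deeper endpoint v.
# In this tree a deeper vertex always has a larger label, so comparing
# labels is enough to decide which endpoint to move up.
import sys

def main():
    data = sys.stdin.read().split()
    q = int(data[0]); pos = 1
    cost = {}                                  # cost[v] = weight of edge v -- v//2
    out = []
    for _ in range(q):
        typ = int(data[pos]); pos += 1
        if typ == 1:                           # add w to every edge on path u-v
            u, v, w = map(int, data[pos:pos + 3]); pos += 3
            while u != v:
                if u < v: u, v = v, u          # move the deeper endpoint up
                cost[u] = cost.get(u, 0) + w
                u //= 2
        else:                                  # report the cost of path u-v
            u, v = map(int, data[pos:pos + 2]); pos += 2
            total = 0
            while u != v:
                if u < v: u, v = v, u
                total += cost.get(u, 0)
                u //= 2
            out.append(str(total))
    sys.stdout.write('\n'.join(out) + '\n')

main()
```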
### B
First of all, starting_time of a vertex is the number of dfs calls made before the dfs call of this vertex, plus 1. Now suppose we want to find the answer for vertex v. For any vertex u that is not in the subtree of v and is not an ancestor of v, denote vertices x and y such that:
• x ≠ y
• x is an ancestor of v but not u
• y is an ancestor of u but not v
• x and y share the same direct parent; That is par[x] = par[y].
The probability that y occurs sooner than x in children[par[x]] after shuffling is 0.5. So the probability that starting_time[u] < starting_time[v] is 0.5. Also, we know that if u is an ancestor of v this probability is 1, and if it's in the subtree of v the probability is 0. That's why the answer for v is equal to $depth[v] + \frac{n - sub[v] - h[v]}{2}$ with $h[v] = depth[v] - 1$ (depth is 1-based and sub[v] is the number of vertices in the subtree of v, including v itself), because n - sub[v] - h[v] is the number of vertices like the first u (not in the subtree of v and not an ancestor of v).
Thus answer is always either an integer or an integer and a half.
Time complexity: $O(n)$.
Codes
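Once the formula is known, the implementation is two linear passes; here is a hedged Python sketch (input format assumed: n, then the parents $p_2, \ldots, p_n$ with $p_i < i$):

```python
# Sketch of problem B: expected starting times from the closed formula
# E[v] = depth[v] + (n - sub[v] - (depth[v] - 1)) / 2.
import sys

def main():
    data = sys.stdin.buffer.read().split()
    n = int(data[0])
    par = [0, 0] + [int(x) for x in data[1:n]]   # par[v] for v = 2..n
    depth = [0] * (n + 1)
    sub = [1] * (n + 1)
    depth[1] = 1
    for v in range(2, n + 1):                    # p_i < i, so parents come first
        depth[v] = depth[par[v]] + 1
    for v in range(n, 1, -1):                    # subtree sizes, bottom-up
        sub[par[v]] += sub[v]
    ans = (depth[v] + (n - sub[v] - (depth[v] - 1)) / 2 for v in range(1, n + 1))
    print(' '.join(f'{x:.1f}' for x in ans))

main()
```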
### C
It gets tricky when the problem statement says p and q should be coprime. A wise coder in this situation thinks of a formula to make sure this happens.
First of all let's solve the problem if we only want to find the fraction $\frac{p}{q}$ numerically. Suppose dp[i] is the answer for swapping the cups i times. It's obvious that dp[1] = 0. For i > 1, obviously the desired cup shouldn't be in the middle after the (i - 1)-th swap, and with this condition the probability that it comes to the middle after the i-th swap is 0.5. That's why $dp[i] = \frac{1 - dp[i-1]}{2}$.
Some people may use matrices to find p and q from this dp (keeping a pair of integers instead of a floating point number), but there's a risk that p and q are not coprime; fortunately or unfortunately, they always will be.
Using some algebra you can prove that:
• if n is even then $p = \frac{2^{n-1} + 1}{3}$ and $q = 2^{n-1}$.
• if n is odd then $p = \frac{2^{n-1} - 1}{3}$ and $q = 2^{n-1}$.
You can confirm that in both cases p and q are coprime (since p is odd and q is a power of 2).
The only thing left to handle is to find $2^n$ (or $2^{n-1}$) and the parity of n. Finding the parity is super easy. Also $2^n = 2^{a_1 \times a_2 \times \cdots \times a_k} = (\ldots((2^{a_1})^{a_2})\ldots)^{a_k}$, so it can be calculated using binary exponentiation. Also dividing can be done using Fermat's little theorem.
Time complexity: $O(k \log(\mathrm{MAX\_A}))$.
Codes
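A Python sketch putting the pieces together; the exponent is reduced modulo MOD - 1 (valid by Fermat's little theorem, since gcd(2, MOD) = 1), the parity of n is read off the factors, and the divisions by 2 and 3 use modular inverses:

```python
# Sketch of problem C, assuming input: k, then a_1..a_k with n = a_1*...*a_k.
import sys

MOD = 10**9 + 7

def main():
    data = sys.stdin.read().split()
    k = int(data[0])
    a = [int(x) for x in data[1:1 + k]]
    e = 1                                  # n mod (MOD - 1)
    even = False
    for x in a:
        e = e * (x % (MOD - 1)) % (MOD - 1)
        even = even or x % 2 == 0
    inv2 = pow(2, MOD - 2, MOD)
    inv3 = pow(3, MOD - 2, MOD)
    q = pow(2, e, MOD) * inv2 % MOD        # q = 2^(n-1) mod MOD
    p = (q + 1 if even else q - 1) * inv3 % MOD
    print(f"{p}/{q}")

main()
```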
### D
Build the prefix automaton of these strings (Aho-Corasick). In this automaton every state denotes a string which is prefix of one of given strings (and when we feed characters to it the current state is always the longest of these prefixes that is a suffix of the current string we have fed to it). Building this DFA can be done in various ways (fast and slow).
Suppose this automaton has N states (N is at most the total length of the given strings plus one) and state v has edges outgoing to the states in the vector neigh[v] (viewing our DFA as a directed graph). Suppose state number 1 is the initial state (denoting the empty string).
If l were smaller we could use dp: suppose dp[l][v] is the maximum score over all strings of length exactly l that end in state v of our DFA when fed into it.
It's easy to show that dp[0][1] = 0 and $dp[l+1][u] \ge b_u + dp[l][v]$ for $u \in neigh[v]$ (taking the maximum over all such transitions), and the dp values can be calculated from this (here $b_v$ is the sum of $a_i$ over all given strings that are a suffix of the string related to state v).
Now that l is large, let's use matrix exponentiation to calculate the dp. Now dp is not an array, but a column matrix. Finding a matrix that updates the dp is not hard. Also we need to redefine the + and * operations: in the matrix product we use + in place of * and max in place of +.
Time complexity: $O(N^3 \log l)$.
Codes
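For illustration, a generic (max, +) matrix product and power in Python; this semiring stays associative, which is exactly what makes the exponentiation step legal (see also the associativity question in the comments below):

```python
# Max-plus ("tropical") matrix tools: + becomes max, * becomes +.
NEG = float('-inf')                    # semiring "zero"

def mul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    C = [[NEG] * m for _ in range(n)]
    for i in range(n):
        for t in range(k):
            if A[i][t] == NEG:
                continue
            for j in range(m):
                if B[t][j] != NEG:
                    C[i][j] = max(C[i][j], A[i][t] + B[t][j])
    return C

def mat_pow(M, p):
    n = len(M)
    # identity of the semiring: 0 on the diagonal, -inf elsewhere
    R = [[0 if i == j else NEG for j in range(n)] for i in range(n)]
    while p:
        if p & 1:
            R = mul(R, M)
        M = mul(M, M)
        p >>= 1
    return R
```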
### E
First of all, for each query of 1st type we can assume k = 1 (because we can perform this query k times, it doesn't differ).
Consider this problem: there are only queries of type 1.
For solving this problem we can use heavy-light decomposition. If for a vertex v of the tree we denote $a_v$ as the weight of the lightest girl in it ($a_v = \infty$ in case there is no girl in it), for each chain in HLD we need to perform two types of queries:
1. Get weight of the lightest girl in a substring (consecutive subsequence) of this chain (a subchain).
2. Delete the lightest girl in vertex u. As a result only $a_u$ changes. We can find the new value in $O(1)$ if we have the sorted vector of girls' weights for each vertex (we pop the last element from it; then the current last element is the lightest girl, or $a_u = \infty$ in case the vector becomes empty).
This can be done using a classic segment tree. In each node we only need a pair of integers: weight of lightest girl in interval of this node and the vertex she lives in (a pair<int, int>).
This works for this version of the problem. But for the original version we need an additional query type:
3. Increase weight of girls in a substring (consecutive subsequence) of this chain (a subchain) by k.
This can be done using the previous segment tree plus lazy propagation (an additional value in each node, type 3 queries to pass to children).
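To make the three chain operations concrete, here is a hedged Python sketch of such a segment tree over a single chain (class and method names are mine; a contest solution would be an iterative C++ version of the same thing):

```python
import math

class ChainTree:
    """One HLD chain: range min of (weight, vertex), plus lazy range add."""
    def __init__(self, vals):                      # vals[i] = (a_v, v) along the chain
        self.n = len(vals)
        self.mn = [(math.inf, -1)] * (4 * self.n)  # (lightest weight, her vertex)
        self.lazy = [0] * (4 * self.n)
        self._build(1, 0, self.n - 1, vals)

    def _build(self, x, l, r, vals):
        if l == r:
            self.mn[x] = vals[l]
            return
        m = (l + r) // 2
        self._build(2 * x, l, m, vals)
        self._build(2 * x + 1, m + 1, r, vals)
        self.mn[x] = min(self.mn[2 * x], self.mn[2 * x + 1])

    def _apply(self, x, k):                        # add k to all weights under node x
        self.lazy[x] += k
        w, v = self.mn[x]
        self.mn[x] = (w + k, v)                    # inf + k stays inf

    def _push(self, x):
        if self.lazy[x]:
            self._apply(2 * x, self.lazy[x])
            self._apply(2 * x + 1, self.lazy[x])
            self.lazy[x] = 0

    def add(self, ql, qr, k, x=1, l=0, r=None):    # chain query 3
        if r is None: r = self.n - 1
        if qr < l or r < ql: return
        if ql <= l and r <= qr:
            self._apply(x, k)
            return
        self._push(x)
        m = (l + r) // 2
        self.add(ql, qr, k, 2 * x, l, m)
        self.add(ql, qr, k, 2 * x + 1, m + 1, r)
        self.mn[x] = min(self.mn[2 * x], self.mn[2 * x + 1])

    def lightest(self, ql, qr, x=1, l=0, r=None):  # chain query 1
        if r is None: r = self.n - 1
        if qr < l or r < ql: return (math.inf, -1)
        if ql <= l and r <= qr: return self.mn[x]
        self._push(x)
        m = (l + r) // 2
        return min(self.lightest(ql, qr, 2 * x, l, m),
                   self.lightest(ql, qr, 2 * x + 1, m + 1, r))

    # chain query 2 (deleting the lightest girl in vertex u) is a point
    # update: add(i, i, new_a_u - old_a_u) at u's position i in the chain.
```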
Now consider the original problem. Consider a specific chain: after each query of the second type, one of the following happens to this chain:
1. Nothing.
2. Only an interval (subchain) is affected.
3. The whole chain is affected.
When case 2 happens, v (the query argument) belongs to this chain, and it can be handled with the 3rd chain query while processing the 2nd type query (affect the chain that v belongs to).
When case 3 happens, v is an ancestor of the topmost vertex in this chain. So when processing a 1st type query, we need to know the sum of k over all 2nd type queries whose v is an ancestor of the topmost vertex of the chain we're currently checking. This can be done using a single segment/Fenwick tree (using the starting-finishing time trick to convert the tree to an array).
So for each query of the 1st type, we will check all chains on the path to find the lightest girl and delete her.
Time Complexity: $O((n + m + q)\log^2 n)$.
Codes
### F
At first thought you can see that a binary search is definitely needed (on r). The only problem is checking whether there are two points fulfilling the conditions for a given radius r.
For each edge, we can shift it r units inside the polygon (parallel to this edge). The only points that can see the line coinciding with this edge are inside the half-plane on one side of this shifted line (the side containing this edge). So the problem is to partition these half-planes into two parts such that the intersection of the half-planes in each part with the polygon (another n half-planes) is non-empty. There are several algorithms for this purpose:
Algorithm:
It's obvious that if the intersection of some half-planes is not empty, then there's at least one point inside this intersection that is the intersection of two of the bounding lines. The easiest algorithm is to pick any intersection of these 2n lines (n shifted half-plane lines and n edges of the polygon), say p, that lies inside the polygon, delete every half-plane containing this point (the intersection of the deleted half-planes and the polygon is not empty because it contains at least p), and check whether the intersection of the remaining half-planes and the polygon is non-empty (of course this part needs sorting the half-planes and adds an additional log, but we can sort the lines initially and use something like counting sort in this step).
Well, constant factor in this problem is too big and this algorithm will not fit into time limit. But this algorithm will be used to prove the faster algorithm:
Algorithm:
In the previous algorithm we checked if p can be in intersection of one part. Here's the thing:
Lemma 1: If p is inside the intersection of two half-planes (p is not necessarily the intersection of their lines) related to the l-th and r-th edges (l < r) and the two conditions below are fulfilled, then there's no partitioning in which p is inside the intersection of one part (and the polygon):
1. At least one of the half-planes related to an edge with index between l and r exists that doesn't contain p.
2. At least one of the half-planes related to an edge with index greater than r or less than l exists that doesn't contain p.
Because if these two lines exist, they should be in the other part, the one that doesn't contain p, and if they are, the intersection of them and the polygon will be empty (proof is easy, homework assignment ;)).
This proves that if a partitioning is available in which p is in the intersection of one part, then that part corresponds to an interval of edges (a cyclic interval) and the rest is also an interval (so the intersection of each interval's half-planes with the polygon should be non-empty). Thus, we don't need p anymore. We only need intervals!
Result is, if such partitioning exists, there are integers l and r (1 ≤ l ≤ r ≤ n) such that intersection of half-planes related to l, l + 1, ..., r and polygon and also intersection of half-planes related to r + 1, r + 2, ..., n, 1, 2, ..., l - 1 and polygon are both non-empty.
This still gives an algorithm (checking every interval). But this lemma comes in handy here:
We call an interval(cyclic) good if intersection of lines related to them and polygon is non-empty.
Lemma 2: If an interval is good, then every subinterval of this interval is also good.
Proof is obvious.
That gives an idea:
Denote interval(l, r) is a set of integers such that:
• If l ≤ r, then interval(l, r) = {l, l + 1, ..., r}
• If l > r, then interval(l, r) = {l, l + 1, ..., n, 1, ..., r}
(In other words it's a cyclic interval)
Also MOD(x) is:
• x iff x ≤ n
• MOD(x - n) iff x > n
(In other words it's modulo n for 1-based)
The only thing that matters for us for every l, is maximum len such that interval(l, MOD(l + len)) is good (because then all its subintervals are good).
If $l_i$ is the maximum len such that interval(i, MOD(i + len)) is good, we can use two pointers to find all the values $l_i$.
Lemma 3: $l_{MOD(i+1)} \ge l_i - 1$.
Proof is obvious in result of lemma 2.
Here's a pseudo code:
```
check(r):
    len = 0
    for i = 1 to n:
        while len < n and good(i, MOD(i+len)):   // good(l, r) returns true iff interval(l, r) is good
            len = len + 1
        if len == 0:
            return false                         // obviously
        if len == n:
            return true                          // Barney and Lyanna can both stay in the same position
        l[i] = len
        len = max(len - 1, 0)                    // by Lemma 3 the window shrinks by at most 1 as i advances
    for i = 1 to n:
        if l[i] + l[MOD(i + l[i])] >= n:
            return true
    return false
```
The good function can be implemented to work in $O(n)$ (with the lines sorted once in advance, as said before), and the two-pointer technique makes the total number of calls to good $O(n)$.
So the total complexity to check a specific r is $O(n^2)$.
Time Complexity: $O(n^2)$ per check, multiplied by the number of binary search iterations on r.
Codes
» 4 years ago, # | 0 No matter what number of problems I have solved on probability I won't be able to solve it :DIt was easy to know there was some formula on levels , subtree sizes etc. but yet I failed =(
• » » 4 years ago, # ^ | 0 lol! same here :P
» 4 years ago, # | +5 Fastest editorial ever. BTW, hack test case for Div2 B is "1.0e0".
• » » 4 years ago, # ^ | 0 In problem B, "d contains no trailing zeros" could basically mean that d can have leading 0s. For example, 0.0001e1 should be printed as 0.001; 0.01e2 as 1; 0.01e5 as 1000; and 1.02e5 as 102000.
• » » » 4 years ago, # ^ | 0 d can never be non-zero when a is zero. So 0.0001e1 is an invalid input.
» 4 years ago, # | 0 How fast the tutorial is !
» 4 years ago, # | 0 Wow! Editorial is published so quickly.
» 4 years ago, # | -6 Lightning fast editorial! Fun contest (even though I am horrible in controlling trees so I couldn't solve C and D)For Div 2 E, I was stuck on how to power 2s, (I knew the logarithm time trick but I didn't know how to stack the powers since there is the 2^{n-1} stuff and all that) so I decided to make a inverse table..Fun contest. Thank you!
» 4 years ago, # | ← Rev. 2 → 0 Tried to hack solution A(div.2):int cur = t; while (cur < x) { cur += s; if (check(cur)) { found(); break; } }by test: t = 0, s = 2 x = 1e9 can't get why this solution get AC on this test, not TLE. :\ Result of hacking: Failed
• » » 4 years ago, # ^ | 0 I don't see why this would result in TLE. A solution with O(n) where n = 5e8 (or 1e9) can run in less than 1 second. Why do you think this should be TLE?
• » » » 4 years ago, # ^ | 0 Usually the golden rule is that a computer can handle around $10^7$ operations in around one second. However, this applies to big-O analyses because they usually omit a lot of constants. In this case, it's just a simple loop, so $10^9$ operations run easily in one second.
» 4 years ago, # | 0 Is there a way to solve div1A without hash maps? I got the path part fairly quickly but I was unable to store the costs of the roads and search for them efficiently. Using arraylists to store the paths and search the costs is on the order of 128k^2 and won't run in time.
• » » 4 years ago, # ^ | 0 before every query of type 2 you can sort your edge weight array and then use binary search to look for the edges in path of u-v for the current query. The only thing to take note is that as you are storing in the array there will be multiple instances of the same edges with same/different weights, so after binary search when you find your edge just look to the left and right of the element until you get different edge and keep adding the weights to the answer. I myself code in C so came up with this solution but was too lazy to code so don't know whether it will pass or not :P
• » » » 4 years ago, # ^ | ← Rev. 7 → 0 I tried this with a little modification and it just barely ran under time. My modification is that instead of sorting before every query of type 2, only sort before a query of type 2 when there has been a query of type 1 that could have possibly messed up the sorting pattern. This only requires one bool sorted and two lines of code, but without this, it will not run under time. The worst case for this is completely alternating between queries 1 and 2. For query 1s, each edge can be processed in constant time. For query 2s, the sorting dominates, and we do it for O(q) queries. However, there are edges between two nodes and we need to do a binary search for each edge, for O(q) queries. Finally, we need to account for duplicate edges. Since we will only encounter each edge once, the duplicate edges overall only take up O(q) time for each query, so it's O(q^2) overall. In total, the biggest term comes from the sorting. Using q = 1000 and MAX_V = $10^{18}$, we get about O(9.5*10^7), which explains why this is just under a second, since O(10^8) is usually just under a second because of a constant factor of around 10-30 operations times the number in the O-analysis.
» 4 years ago, # | 0 I thought, I will read the editorial in the morning. But I got surprised. Really fastest editorial..:P
» 4 years ago, # | ← Rev. 4 → 0 About Div 1 C, what I noticed about the numerator after generating a couple of answers and using the power of OEIS, is that . And , so we can still find this in logarithmic time if you (like me) missed the formula given in the editorial.
• » » 4 years ago, # ^ | 0 I tried doing this, but it gave me TLE and was much more complex to code. I guess the formula of the editorial is the simplest way!Also, the editorial's formula is also on the OEIS page for Jacobsthal.
• » » » 4 years ago, # ^ | +5 I got AC with stupid optimisations, the limits are strict for B and C div 1.
• » » 4 years ago, # ^ | ← Rev. 2 → 0 I also solved this problem same way as you. The only difference is I am applying Fermat little theorem, which helps to further reduce time complexity to O(log 10^9 + 7)
• » » » 4 years ago, # ^ | 0 Hello, there! How dou you apply Fermat's little theorem here? Can you explain a little, please?
• » » » » 4 years ago, # ^ | ← Rev. 3 → 0 we have $a^p \equiv a \pmod p$ when p is prime, so with n = k*p + x, we have $a^n = a^{kp} \cdot a^x$. Notice that $a^{kp} = (a^p)^k \equiv a^k \pmod p$, so we can continue to factorize k = h*p + y ... until we get something that is less than p, then we stop. You can take a look at my submission
• » » » » » 4 years ago, # ^ | 0 Ok, but in our problem who are a and p?
» 4 years ago, # | ← Rev. 2 → +13 In Div1C, heavy artillery to find the answer :)
```
sage: m = matrix(QQ, 3, 3, [[1/2, 1/2, 0], [1/2, 0, 1/2], [0, 1/2, 1/2]])
sage: j, p = m.jordan_form(transformation=True)
sage: m == p * j * p.inverse()
True
sage: j
[   1|   0|   0]
[----+----+----]
[   0| 1/2|   0]
[----+----+----]
[   0|   0|-1/2]
sage: mid = vector(QQ, [0, 1, 0])
sage: var("x, y, z")
(x, y, z)
sage: mid * p * diagonal_matrix([x, y, z]) * p.inverse() * mid
1/3*x + 2/3*z
```
=> formula is $\frac{1}{3} \cdot 1^K + \frac{2}{3} \cdot (-\frac{1}{2})^K$
• » » 4 years ago, # ^ | +10 Hah nice :). In fact that was a recurrence of form "x_n and y_n are linearly dependent on x_{n-1} and y_{n-1}" and such recurrence can be transformed to recurrence "x_n is linearly dependent on x_{n-1} and x_{n-2}" and there are known methods for solving them (yours included :P).
» 4 years ago, # | ← Rev. 4 → -8 For problem E, one can find the product modulo 1e9+6 and then use binary exponentiation instead of repeated exponentiation.EDIT: It doesn't seem to work, I thought I made an implementation error during the contest but fixing that doesn't get AC. :/http://codeforces.com/contest/697/submission/19133170EDIT2: Previous code had overflow issues. Here's the fixed AC code:http://codeforces.com/contest/697/submission/19133853
» 4 years ago, # | 0 In Div1C, $x^{mod-1} = 1 \pmod{mod}$ for any numbers or matrices, so you can just calculate the "value" modulo mod - 1 and do something. It will be O(N).
• » » 4 years ago, # ^ | +8 Nice!
• » » 4 years ago, # ^ | ← Rev. 2 → +10 The "for any matrices" part is not true. For example we may consider the Fibonacci numbers matrix F = [1,1;1,0] and mod = 13. $F^{12} \pmod{13}$ = [233,144;144,89] (mod 13) = [12,1;1,11]. And the least power for which $F^k \pmod{13} = [1,0;0,1]$ is k = 28. And it's not even divisible by 12. It may work in this particular problem, but won't work in general.
• » » » 4 years ago, # ^ | ← Rev. 2 → 0 Wow! Then, is it just a coincidence or are there some conditions for this? I was pretty sure when I was writing this.
• » » » » 4 years ago, # ^ | +5 I don't know. Just guessed that there may be problems, when there is no square root of 5 for Fibonacci numbers. And 13 was a suitable one. If we have a sequence x[n]=A x[n-1], all eigenvalues of A exist and distinct mod P and none of them equals to 0 mod P, then it will definitely be periodic with period that is a divisor of p-1. Thus Ap - 1 = I (mod p). Can't say anything more general.
• » » » » 4 years ago, # ^ | 0 You might find interesting part about Pisano period here.
• » » » » 4 years ago, # ^ | +20 The reason is that the roots of the characteristic polynomial exist in the field modulo the prime; in this problem they are in fact integers. In other cases such a statement is wrong.
» 4 years ago, # | +3 Any problems similar to div2D / div1B ?
» 4 years ago, # | ← Rev. 2 → +1 LCA I use for D2.C / D1.A:
```cpp
#define D(X) (64 - __builtin_clzll((X)))
LL LCA(LL u, LL v) {
    if (D(u) < D(v)) swap(u, v);
    u >>= (D(u) - D(v));
    return u ^ v ? u >> D(u ^ v) : u;
}
```
» 4 years ago, # | ← Rev. 2 → 0 What is int(math.log2(2 ** 49 - 1)) in Python? It's 49... Then look at your exploded problem A.
• » » 4 years ago, # ^ | +25 it indeed is 49 my friend !!
• » » 4 years ago, # ^ | 0 Yes, for some weird reason, (int)(log2((1LL<<49)-1)) is 49 in C++ as well. :(
• » » » 4 years ago, # ^ | 0 C++ likely has the same rounding error as Python. Do not cast the int and then print it. You should get 49.000000.
• » » 4 years ago, # ^ | ← Rev. 3 → +3 This is called "rounding error." Leave out the int() and you'll get math.log2(2 ** 49-1) == 49 because Python rounds the floating point number to 49. Instead, you should do a binary search on the array [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4194304, 8388608, 16777216, 33554432, 67108864, 134217728, 268435456, 536870912, 1073741824, 2147483648, 4294967296, 8589934592, 17179869184, 34359738368, 68719476736, 137438953472, 274877906944, 549755813888, 1099511627776, 2199023255552, 4398046511104, 8796093022208, 17592186044416, 35184372088832, 70368744177664, 140737488355328, 281474976710656, 562949953421312, 1125899906842624, 2251799813685248, 4503599627370496, 9007199254740992, 18014398509481984, 36028797018963968, 72057594037927936, 144115188075855872, 288230376151711744, 576460752303423488, 1152921504606846976].
» 4 years ago, # | 0 That's very weird: the same code in my compiler (Sublime Text) shows correct output but not on Codeforces. Is that a problem with the Codeforces judging system or my IDE?
• » » 4 years ago, # ^ | 0 I had similar problems a week ago. I'm not sure it downcasts right. Try using floor instead.
• » » 4 years ago, # ^ | 0 I had this same problem during the contest, but I realized that the problem was because I was using long instead of long long which was OK on my computer, but not on CodeForces' servers. Make sure you are not having any integer overflow and all of your types are correct. CodeForce's custom test is helpful for debugging this.
» 4 years ago, # | +9 I need some help with Div.2 D... Let v, u, x, y be vertices as the editorial says. I understand that the probability of starting_time[u] < starting_time[v] is 0.5. And I get that, if u is an ancestor of v, then this probability is 1, and if u is in the subtree of v, then this probability is 0. But I can't understand why these facts imply that the answer for v is $depth[v] + \frac{n - sub[v] - h[v]}{2}$.
• » » 4 years ago, # ^ | +1 Check this, the approach is a bit different: http://codeforces.com/blog/entry/45991?#comment-305083
• » » 4 years ago, # ^ | ← Rev. 2 → +5 We will denote E(v) as the expected time value of vertex v (the answer for v). Then, we will compute the answer based on the probability that each vertex u ≠ v is visited before v. So $E(v) = 1 + \sum_{u \neq v} p(u)$, where p(u) is the probability that u is visited before v. Then we will have p(u) = 1 in case u is an ancestor, p(u) = 0 in case u is in the subtree of v, and else we will have p(u) = 1/2. We have depth[v] - 1 nodes with p(u) = 1 and n - sub[v] - (depth[v] - 1) nodes with p(u) = 1/2, which leads us to the formula. Edit: Actually the formula in the editorial was wrong; it should be n - sub[v] - (depth[v] - 1), where depth begins at 1.
• » » » 4 years ago, # ^ | 0 Thanks, really cool idea and implementation!
• » » » 4 years ago, # ^ | +1 sorry to bother. can you explain more on how did you get formula E[v] = sigma p[u] * 1 : u != v ?? all i know is that E[X] = sigma x p(x) such that X is a random variable. what i think is that here for a fixed v, let X be a random variable of the value of starting_time[v]. now we should calculate the probabilities. p(1) = the probability that starting_time[v] = 1 , p(2) = the probability that starting_time[v] = 2, ... then we will have E[v] = sigma p(x) * x : x >= 1. so how did you get your formula? did you use another random variable or what?
• » » » » 4 years ago, # ^ | ← Rev. 4 → +9 Suppose we generate all possible arrays for the dfs in-order permutation, all of them equiprobable. Let's denote $t_v$ the time of vertex v and N the number of permutations we computed. You agree that $E[t_v] = \frac{1}{N}\sum_{p} t_v(p)$? Where $t_v(p)$ is the value a given vertex would have in permutation p. $\frac{1}{N}$ is outside the summation because all permutations are equiprobable, so all of them have probability $\frac{1}{N}$. However you may notice this is the same as $E[t_v] = 1 + \sum_{u \neq v} \frac{\#\{p : u \text{ comes before } v \text{ in } p\}}{N}$. For every vertex u, u will be visited before v:
• in 0 of the permutations, if u is inside the subtree of v.
• in N of the permutations, if v is inside the subtree of u.
• in N / 2 of the permutations, otherwise.
Then, we get to the equation mentioned.
• » » » » » 4 years ago, # ^ | +1 thanks a lot. your explanation is great!
• » » » » » 4 years ago, # ^ | 0 Nice explanation. The one in the editorial is a little confusing because of introducing the terms 'x' and 'y', but this one is more lucid and descriptive. Thanks. :)
• » » » » » 4 years ago, # ^ | 0 I too was stuck in the explanation mentioned in the tutorial ....really very beautifully explained
• » » » » » 4 years ago, # ^ | 0 Very nice explanation! But why exactly u will be visited before v in N/2 of the permutation (in the third case)? Could you detail a little, please?
• » » » » » » 4 years ago, # ^ | 0 Look at a set of siblings, the array of direct children of some vertex. Then, after shuffling this array, there is no reason for a vertex u to appear before v in more than half of the permutations, as all permutations are equiprobable. That is, by symmetry, half of the permutations will have u < v and the other half will have v < u. You can extend this to the subtrees of vertices u and v: a child x of vertex u will have 1/2 probability of happening before vertex v (think of it as inheriting the probability from its parent) and also x will have 1/2 probability of happening after a vertex y that is a child of vertex v.
• » » » » » » » 4 years ago, # ^ | 0 I got it now. Thank you!
• » » » » » » » 3 years ago, # ^ | 0 Thanks for your explanation. Not many people explain this way.
» 4 years ago, # | +29 Really weak tests for D :(Kostroma and zeliboba hacked my solution in several minutes after contest.Correct answer (Arterm's solution)Wrong answer (my solution)Runtime exception (adamant's solution)Could you add at least this test for upsolving if you don't mind?
• » » 4 years ago, # ^ | +8 My solution is ok. It's just stack overflow. Works in custom test.
» 4 years ago, # | 0 Div2B question C: Some algebra? Please help. Can't understand how to get the values of p and q.
• » » 4 years ago, # ^ | -10 Use reduction to prove those formulas.
• » » 4 years ago, # ^ | +4
» 4 years ago, # | +4 Div1-C: What's the proof for the even/odd formula?
• » » 4 years ago, # ^ | +13 maybe my solution comments helps you : http://codeforces.com/contest/697/submission/19143565numerator is a geometric sequence and you can find it's summation with formula a*(p^n-1) / (p-1).if you simplify my formulas you will reach editorial formulas.
• » » 4 years ago, # ^ | +12 Express the dp[i] = (1 - dp[i-1])*0.5 in terms of a fraction.. then you can see that powers of 2 alternate in sign in the numerator and the denominator is 2^(n-1)... you can divide the numerator in two halves.... (2^0 + 2^2 + ... + 2^(n-2)) - (2^1 + 2^3 + ... + 2^(n-3)) for evens and (2^1 + 2^3 + ... + 2^(n-2)) - (2^0 + 2^2 + ... + 2^(n-3)) for odds.. then you can use the property of geometric progressions in the numerator to get to the formula in the editorial
• » » 4 years ago, # ^ | ← Rev. 4 → +7 Thanks for the help! I found a more intuitive way (for me) to find the formulas: expand $dp_i = \frac{1 - dp_{i-1}}{2}$ repeatedly and so on. When n is odd, the base case will be $dp_1 = 0$, so we have a geometric progression $2^0, 2^2, 2^4, \ldots$ with $\frac{n-1}{2}$ terms. So $p = \frac{2^{n-1} - 1}{3}$ and $q = 2^{n-1}$. When n is even, the base case will be $dp_0 = 1$, so in this case $p = \frac{2^n + 2}{3}$ and $q = 2^n$. After simplifying by 2 we get the formulas from the editorial.
» 4 years ago, # | 0 Could someone explain how the formula for Div 2D was derived in a more detailed way?
• » » 4 years ago, # ^ | 0 Check my comment above.
» 4 years ago, # | ← Rev. 2 → +18 In 696F - ...Dary! I used the fact that there is some optimal placement with the two points on sides of the polygon. The editorial doesn't explain it, does it? Am I missing something? Can anybody prove it?I got accepted with the following approach (19136466). For every of O(n2) intervals of sides I ternary search the optimal point (a point on one of sides, in order to minimize the maximum distance to lines in the fixed interval). There is no halfplanes intersection, only calculating the distance between a point and a line.It's a pity that during the contest I had a bug in my ternary search. Though after the contest I didn't get AC at the first try anyway (precision).
• » » 4 years ago, # ^ | +18 After a contest I thought a bit more and realized many new things. For a fixed interval the distance to only the first line and the last line matters. So, for each interval it's enough to find a point on some side with equal distance to the first and the last line in the interval. It makes my solution instead of .Furthermore, what I tried to ternary search is an angle bisector. It gives us O(n2) solution! Instead of coding it, I improved my first submission and now I find an agle bisector by binary search. Thanks to it, my code is short and it still doesn't use anything more complicated that the distance between line and point. The complexity is , my submission: 19140242.
• » » » 4 years ago, # ^ | ← Rev. 2 → +18 There is also one additional observation that helps.If either i = j or the angle between vectors Pi Pi + 1 and Pj Pj + 1 is positive (as can be checked with cross product), call pair (i, j) good. The optimal solution can only be reached for intervals with both pairs (i, j) and (j + 1, i - 1) being good (with obvious interpretation modulo n). Now there are only O(n) candidate intervals to check.What I did was checking for every interval if it can produce the optimal solution, and for every candidate interval I did a linear search for the segment intersected by the angle bisector on the interval. The total complexity is O(n2). It is also possible to improve this to by using two pointers to find the candidate intervals and binary searching the intersection segment, but O(n2) was reasonably fast and easier to code :P
• » » » » 4 years ago, # ^ | ← Rev. 3 → +11 During the contest I assumed that only good pairs (i, j) matter, as you described. It easily implies a proof that the two points should be on sides of the polygon and this is why my solution works. How to prove what you wrote?Oh, you are right about O(n) candidates. I had this thought at some moment but I noticed that for fixed i there may be many j that both (i, j) and (j + 1, i - 1) are good. So I thought there are O(n2) candidates but yeah, it's easy to show that it's O(n) in total (because: two pointers). EDIT. I implemented approach with two pointers (well, with three pointers). Submission: 19150097. Of course, it can be improved to O(n) by replacing binary search with some O(1) formulas.amd, you invented an awesome problem.
• » » » » » 4 years ago, # ^ | ← Rev. 2 → 0 My sketch of the proof is as follows:For a minimal circle covering a non-good pair (i, j) (it has bigger radius than the circle covering [j + 1, i - 1]), we find a maximal good sub-pair (i', j') (meaning i' and j' lie on interval [i, j] mod n). Having a circle cover a smaller interval decreases the ray.When considering the circle covering [j' + 1, i' - 1], we see that this is also formed by a good pair (otherwise we could either increase j' or decrease i').As one of j' + 1 and i' - 1 necessarily belongs to the interval [i, j], we get immediately that the minimal ray of the circle covering [j' + 1, i' - 1] is smaller than that of a minimal circle covering [i, j].From this, we see that any non-candidate interval is dominated by a candidate interval.I second that this problem was very good, thank you amd for this.EDIT: Also, I don't think I understand how checking the intersection point of the polygon and angle bisector can be done in O(1), unless it is amortized? Even if we calculate the line bisecting the angle in constant time, don't we need to search for the side it intersects somehow?
• » » » » » » 4 years ago, # ^ | +1 a non-good pair (i,j) (it has bigger radius than the circle covering [j+1, i-1]) Why does it have bigger radius? we get immediately that the minimal ray of the circle covering [j'+1, i'-1] is smaller that [i, j] How do we get it immediately? Sorry, I don't see it. how checking the intersection point of the polygon and angle bisector can be done in O(1)? I use three pointers: start, whereMid and end. An interval is [start, end] and I binary search on side whereMid. Check my code for details.
• » » » » » 11 months ago, # ^ | 0 Amazing!I'm preparing a training for my classmates(we each collect some amazing problems and make up an exam),and I found this interesting problem. I have already implemented O(n3logn) (code) , O(n2logn) (code) , O(n2) (code) , O(nlogn) (code) and O(n) (code) ,they all have passed the data of this problem.(Because my submissions can be seen as my classmates all follow me,I used my test account.)But now there is a problem:I try to make a big convex(with n > = 20000 , random the points on a big circle) and change my array to map,the result is there is no pairs (i, j) that both (i, j) and (j + 1, i - 1) are calculated. And all of the O(n) submissions couldn’t pass the data . Your code couldn’t neither (but it can pass 696f)(code).I don't know why and I believe that my data maker is correct(all the code can pass the data that n < = 1000).The data.You passed the problem long time ago. Hope you can remember the solution and I’m looking forward to you reply. Thank you (And sorry for my poor English), I'm your big fan!
• » » » » » » 11 months ago, # ^ | +8 I can't download your input file. Can you upload it in some different website? Can be pastebin.You should write a verificator program that will check if the input file is correct. Then let's think about the correctness of solutions.
• » » » » » » » 11 months ago, # ^ | 0 Ubuntu PastebinEmm....I made a convex among the points and it's 20000 points on it as my expectation.Sorry I couldn't reply you soon because there was 1 am seven hours ago.
• » » » » » » » » 11 months ago, # ^ | 0 And it was 1am for me seven hours ago, when you posted your comment ;pYour input doesn't satisfy the problem constraints at all. N is too big and coordinates aren't integers.
• » » » » » » » » » 11 months ago, # ^ | 0 Sorry I may ignored something: I want to change the problem constraints (so that the participants who use O(n) could get 100 points and who use O(n2logn) could get some points less than 100 , many important contests in China like give points in this way). Now the problem is: I made bigger input and all of the codes (O(n)) couldn't pass the data. Of course I change T best_r[][] into (unordered_)map best_r[] and scanf("%d%d",&x,&y) into scanf("%lf%lf",&x,&y).
• » » » » » » » » » 11 months ago, # ^ | 0 What you're talking about is called subtasks (subtasks for partial points) and is indeed common.Either input is invalid or the O(n) solutions are incorrect. Try to find a small countertest and analyse it. And write an input verifier, as I told you.I'm not surprised that my solution doesn't work because it was made for different constraints. You would have to go through it carefully to catch all places where something should be changed because of bigger N or non-integer coordinates.
» 4 years ago, # | +13 Hi, I think solution to div1 D (div2 F) has a small typo. The dp calculation should be:dp[l + 1][u] >= b[u] + dp[l][v] ?
» 4 years ago, # | 0 In Div 1. D, I was not sure if matrix multiplication would work well if I change the definitions of + and *. In particular, I was not sure if associativity of matrix multiplication stands true if I change those.Associativity means A·(B·C) = (A·B)·CCan anyone help me proving this?
• » » 4 years ago, # ^ | +20 It can be done directly by writing what (i, j) cell of ABC equals to.You can find a proof in this PDF.
» 4 years ago, # | ← Rev. 3 → 0 Oh my, maths maths and maths, that's the moment when it has to feel bad to be a high school student... Can somebody tell me what should I search to learn about the proof behind the formula of Div2E? It looks similar to GS but it isn't it. =/Edit: Whoops just found a link above in the comments, here in case if someone missed it too.
» 4 years ago, # | -10 Whom I've hacked hacked me....
» 4 years ago, # | ← Rev. 2 → +4 Scan in one line using scanf
» 4 years ago, # | 0 For Div2B what should be the output for 0.798e1 ?07.98 or 7.98 ?
• » » 4 years ago, # ^ | ← Rev. 3 → +3 7.98 as it is mentioned in the question that Output doesn't contain leading zeroes.
• » » » 4 years ago, # ^ | 0 Then the cases are weak . Because my code gets AC and i print 07.98 :P
• » » » » 4 years ago, # ^ | 0 From the task : Also, b can not be non-zero if a is zero. This test is not correct :)
• » » » » 4 years ago, # ^ | 0 Yes, look like that they haven't included such cases.
• » » 4 years ago, # ^ | 0 7.98
» 4 years ago, # | 0 what is h[v] and depth[v] in div2D editorial?
• » » 4 years ago, # ^ | ← Rev. 2 → 0 I wrote down the complete before before arriving at ans of Div 2 'D' , if anyone wants to have a look u can message me
• » » » 4 years ago, # ^ | 0 it is not what I asked
• » » 4 years ago, # ^ | ← Rev. 2 → +4
depth[v] = level of node v counted from the root (the root has level 1)
h[v] = depth[v] - 1
n - sub[v] - h[v] = number of nodes that are neither in the subtree of v nor an ancestor of v
» 4 years ago, # | 0 In div1C , how can we solve the recurance without using wolfram?
• » » 4 years ago, # ^ | ← Rev. 3 → +9 it can turn into the form: f(x) - k = -1/2 * (f(x-1) - k), so f(x) - k is a geometric sequence. Edit: I don't know why the minus sign becomes "&mdash;" in codeforces
» 4 years ago, # | ← Rev. 2 → 0 Can someone please tell me what is wrong with my solution to Div2 C? It got WA for test case 12.I used the same idea as mentioned in the editorial. I just simulated the procedures (namely update and findCost) using the concept of Lowest Common Ancestor(LCA).Thanks in advance.
• » » 4 years ago, # ^ | +1 #define pi pair<int, int>
• » » » 4 years ago, # ^ | 0 You are absolutely right! Thanks. That was so stupid of me! Very much disappointed. :(
• » » 4 years ago, # ^ | +1 u and v are too large to store in an int, and your costs map's keys are pair<int, int>, so it overflows.
• » » » 4 years ago, # ^ | 0 Yeah, thanks. That was so stupid of me to overlook that!:(
» 4 years ago, # | 0 How to solve div2C ? I read editorial and saw some codes , but I really don't understand // If there hadn't been (so) infinite number of intersections it would have been much easier :D
• » » 4 years ago, # ^ | ← Rev. 2 → 0 For any pair of vertices, there will be a root (famously known as the Lowest Common Ancestor) such that you can reach one of the nodes by traversing left from the root and the other node by traversing right from the root. Such a traversal gives the shortest path. Let's make it clearer below (the illustrating tree image is omitted).
For nodes 8 and 10, LCA is 2.
For nodes 9 and 3, LCA is 1.
For nodes 13 and 14, LCA is 3.
How to calculate the LCA? The parent of a node x is x/2; you can confirm this on the tree: parent[5] = 2, parent[12] = 6.
Insert all parents of one node into a set. Then search for the parents of the other node in this set; the first node you find in the set will be the LCA. Then you can easily traverse from each node to the LCA and add a value to each edge or calculate the cost of the path.
Code
• » » » 4 years ago, # ^ | 0 oh, thanks , i got it!
» 4 years ago, # | 0 Great Editorial & Contest :))
» 4 years ago, # | 0 Why is this solution to problem B wrong: double d; cin >> d; cout.setf(ios_base::fixed); dtoa(d,str); cout<
» 4 years ago, # | 0 In div2 D, I know depth[v] will be added to our answer, but I cannot understand how we got the first term. Can we solve it by calculating starting_time for each vertex and dividing by the total number of paths possible (which may be large)? For example in the first testcase the total number of paths is 12.
• » » 4 years ago, # ^ | +1 The first term is actually the delay term (the delay here means the gap in starting time between a node and its parent). Consider such a node u and its parent v; then for the other nodes that also have v as their parent, the total number of these nodes together with their subtrees is actually n - sub[v] - h[v]. As each of them has 0.5 probability of contributing to the delay, the delay term would be $\frac{n - sub[v] - h[v]}{2}$.
• » » 4 years ago, # ^ | 0 Can someone explain why depth[v] is added, using linearity of expectation?
• » » » 4 years ago, # ^ | 0 The depth is actually the time spent running from the root to the target node u. That is, the expected value is all the delay shifted by this value, as no matter what the search sequence is, you at least have to go through the shortest path from the root to the node.
» 4 years ago, # | 0 In Div2 C, I am getting WA at Test Case-39. Can't find any bug in the code.Link -> Solution
• » » 4 years ago, # ^ | +4 Instead of log2(u) , use log2l(u) Here is the submission after modification: http://codeforces.com/contest/697/submission/19171319 Thank you
• » » » 4 years ago, # ^ | 0 Thanks a lot! :)
» 4 years ago, # | 0 For div2 D can anyone explain how to calculate the expected Value ?
• » » 4 years ago, # ^ | ← Rev. 4 → +8 Suppose the children of vertex v are $c_1, c_2, \ldots, c_k$. Let's denote by $p_{i,j}$ ($i \ne j$) the probability that in the permutation the position of $c_i$ is smaller than the position of $c_j$; then $p_{i,j} = \frac{1}{2}$, because you can fix the position of $c_i$: let it be t, then the position of $c_j$ must be greater than t; if t = 1 we have k - 1 ways, t = 2 we have k - 2 ways and so on, but $\sum_{t=1}^{k-1}(k - t) = \frac{k(k-1)}{2}$, and the remaining numbers we can add in (k - 2)! ways, so $p_{i,j} = \frac{k(k-1)/2 \cdot (k-2)!}{k!} = \frac{1}{2}$. Let $S(c_i)$ be the number of vertices in the subtree of $c_i$. Now the answer for the $c_i$-th vertex is $E[c_i] = E[v] + 1 + \sum_{j \ne i} p_{j,i} \cdot S(c_j)$, because E[x + y] = E[x] + E[y].
• » » » 2 years ago, # ^ | 0 Hi, can you please explain why . I can understand that E[ci] = E[v] + 1 but I could not understand the other part. Why are we taking the summation of number of vertices in subtree times the probability that Ci comes earlier than Cj. According to E[x] formula, shouldnt we do: E[c] = E[v] + 1 + (1/n!)*sigma(delay caused by i-th permutaion), where v is parent of c having n children.
» 4 years ago, # | +5 Can anyone explain the matrix exponentiation approach for solving the recurrence for Div2E ?
» 4 years ago, # | 0 Can someone explain Div 1 D in more detail?
» 4 years ago, # | 0 Hello! Regarding Div 2 E, you need to compute 2^(n-1) % MOD. I know how to compute 2^n % MOD using binary exponentiation, but how do you get 2^(n-1) % MOD , as n may be huge?
• » » 3 years ago, # ^ | 0 Just compute x=2^n and then do x=x/2 using fermat's little theorem.
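A small Python sketch of that idea (assuming the usual prime modulus 10^9+7; for a huge n given as a decimal string, the exponent is first reduced modulo MOD-1 by Fermat's little theorem, and the extra -1 in the exponent is exactly the x = x/2 step):

```python
MOD = 10**9 + 7  # prime, and gcd(2, MOD) = 1, so Fermat's little theorem applies

def pow2_n_minus_1(n_str):
    """Compute 2^(n-1) mod MOD where n >= 1 is a huge decimal string."""
    e = 0
    for ch in n_str:                 # reduce n modulo MOD - 1, digit by digit
        e = (e * 10 + int(ch)) % (MOD - 1)
    e = (e - 1) % (MOD - 1)          # subtracting 1 in the exponent divides by 2
    return pow(2, e, MOD)

print(pow2_n_minus_1("5"))           # 2^4 = 16
```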
» 3 years ago, # | 0 I made a nice explanation for Div2 D / Div1 B: Here
» 3 years ago, # | 0 If anybody needs a more detailed solution of problem 'Div1 C. Please', you can read it here in my post.
» 3 years ago, # | ← Rev. 3 → 0 Actually, for div2 E there's another solution without even using dp. If you come up with the transition matrix, namely:
1 1 0
1 0 1
0 1 1
then you can simply use fast exponentiation with the following formula: A^B = A^(B % phi(C) + phi(C)) (mod C), where phi(x) is the Euler function. 22299208
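For reference, a minimal Python sketch of modular matrix exponentiation with that transition matrix (the helper is generic; reducing the huge exponent with the Euler formula above happens before calling mat_pow):

```python
MOD = 10**9 + 7

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % MOD
             for j in range(n)] for i in range(n)]

def mat_pow(A, e):
    """Binary exponentiation of a square matrix modulo MOD."""
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity matrix
    while e > 0:
        if e & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        e >>= 1
    return R

T = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1]]  # the transition matrix described above
```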
» 2 years ago, # | 0 In problem Div-1 C, let x = 2^(n-1) - 1 and y = 2^(n-1) + 1. How can you ensure x, y are always divisible by 3? Is it really necessary? If x is not divisible by 3, then wouldn't (x/3) be a non-integer? I'm a bit confused
» 11 months ago, # | +8 Oh...the monster is so scary!!!!I have to F12.... | 2020-01-20 19:35:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5978860855102539, "perplexity": 1133.4449626486255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250599718.13/warc/CC-MAIN-20200120165335-20200120194335-00344.warc.gz"} |
https://www.cvxpy.org/_modules/cvxpy/expressions/constants/parameter.html | # Source code for cvxpy.expressions.constants.parameter
"""
This file is part of CVXPY.
CVXPY is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
CVXPY is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with CVXPY. If not, see <http://www.gnu.org/licenses/>.
"""
from cvxpy import settings as s
from cvxpy.expressions.leaf import Leaf
import cvxpy.lin_ops.lin_utils as lu
class Parameter(Leaf):
    """Parameters in optimization problems.

    Parameters are constant expressions whose value may be specified
    after problem creation. The only way to modify a problem after its
    creation is through parameters. For example, you might choose to declare
    the hyper-parameters of a machine learning model to be Parameter objects;
    more generally, Parameters are useful for computing trade-off curves.
    """
    PARAM_COUNT = 0

    def __init__(self, shape=(), name=None, value=None, **kwargs):
        self.id = lu.get_id()
        if name is None:
            self._name = "%s%d" % (s.PARAM_PREFIX, self.id)
        else:
            self._name = name
        # Initialize with value if provided.
        self._value = None
        super(Parameter, self).__init__(shape, value, **kwargs)

    def get_data(self):
        """Returns info needed to reconstruct the expression besides the args.
        """
        return [self.shape, self._name, self.value, self.attributes]

    def name(self):
        return self._name

    # Getter and setter for parameter value.
    @property
    def value(self):
        """NumPy.ndarray or None: The numeric value of the parameter.
        """
        return self._value

    @value.setter
    def value(self, val):
        self._value = self._validate_value(val)

    @property
    def grad(self):
        """Gives the (sub/super)gradient of the expression w.r.t. each variable.

        Matrix expressions are vectorized, so the gradient is a matrix.

        Returns:
            A map of variable to SciPy CSC sparse matrix or None.
        """
        return {}

    def parameters(self):
        """Returns itself as a parameter.
        """
        return [self]

    def canonicalize(self):
        """Returns the graph implementation of the object.

        Returns:
            A tuple of (affine expression, [constraints]).
        """
        obj = lu.create_param(self, self.shape)
        return (obj, [])

    def __repr__(self):
        """String to recreate the object.
        """
        attr_str = self._get_attr_str()
        if len(attr_str) > 0:
            return "Parameter(%s%s)" % (self.shape, attr_str)
        else:
return "Parameter(%s)" % (self.shape,) | 2019-01-22 06:15:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6078005433082581, "perplexity": 13504.574755646192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583829665.84/warc/CC-MAIN-20190122054634-20190122080634-00218.warc.gz"} |
https://www.lmfdb.org/EllipticCurve/Q/435344/r/ | # Properties
Label: 435344.r
Number of curves: $1$
Conductor: $435344$
CM: no
Rank: $0$
# Related objects
Show commands: SageMath
sage: E = EllipticCurve("435344.r1")
sage: E.isogeny_class()
## Elliptic curves in class 435344.r
sage: E.isogeny_class().curves
LMFDB label | Cremona label | Weierstrass coefficients | j-invariant | Discriminant | Torsion structure | Modular degree | Faltings height | Optimality
435344.r1 | 435344r1 | $$[0, -1, 0, 958, 98383]$$ | $$1257728/55223$$ | $$-4264813974512$$ | $$[]$$ | $$449280$$ | $$1.1032$$ | $$\Gamma_0(N)$$-optimal
## Rank
sage: E.rank()
The elliptic curve 435344.r1 has rank $$0$$.
## Complex multiplication
The elliptic curves in class 435344.r do not have complex multiplication.
## Modular form 435344.2.a.r
sage: E.q_eigenform(10)
$$q - q^{3} + 2q^{5} - q^{7} - 2q^{9} - 2q^{11} - 2q^{15} - 6q^{19} + O(q^{20})$$ | 2021-07-26 17:05:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.180668443441391, "perplexity": 10810.558173546431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152144.81/warc/CC-MAIN-20210726152107-20210726182107-00658.warc.gz"} |
https://stats.stackexchange.com/questions/270620/what-is-the-benefit-of-only-performing-a-random-split-once-at-the-beginning-duri | # What is the benefit of only performing a random split once at the beginning during Kfold cross-validation?
I had a discussion with my colleague about Kfold and StratifiedKfold validation in scikit learn.
My understanding of these two ways of validation is that at the beginning of the validation process, the data set is randomly divided in k-folds. k-1 of those folds end up in the train dataset. 1 of those folds ends up in the validation data set. Validations are performed k times. With each repetition, what is in the training or validation dataset is shifted.
My colleague argued that it would be better to decide anew, for each repetition, what is and is not in the train dataset or validation dataset. Edit: Scikit-learn also has selectors for this sort of validation, named "ShuffleSplit" and "StratifiedShuffleSplit".
I had no mathematical intuition for why one way is superior to another way.
The benefit of only doing the initial split is that what ends up in the validation dataset is non-redundant. But is that really a benefit?
So my question is this:
Is he right? What is the benefit (intuitively or mathematically) to the actual implementation of Kfold or StratifiedKfold?
Edit: Can someone provide me with an intuition for what to pick when?
• The benefit of only 1 split is that all samples are used both for training ($k-1$) times and exactly once for validation. Whereas the advantage of random sub-sampling is that the ratio of training to test size is independent of number of folds. – Łukasz Grad Mar 29 '17 at 18:49
• @Lukasz Grad: Which is better for what cases? Is there a sweet-spot for doing only 1 split at the beginning? – Thornhale Mar 29 '17 at 19:00
• Another option for smaller datasets is iterated K-fold validation with shuffling. Just repeat K-fold multiple times, shuffling only at the beginning of each. So (# iterations)*K trained models in total. – Austin Jul 7 '18 at 1:07
## 1 Answer
The benefit of determining the contents of each fold at the beginning rather than re-sampling is that you avoid bias by possibly selecting a single observation for more than one training or testing set. This is only a benefit if you care that no records are over-represented in your validation.
If the argument is that new folds should be generated without replacement after each fold has been tested, it can be shown that the likelihood of a single observation to appear in each fold is unaffected and they are therefore equivalent approaches.
Stratified methods most commonly used in cases where classes are unbalanced - to avoid, for example, folds where there are no (or very few, or just inefficiently few) positive examples in a sparse binary classification problem. This applies both to StratifiedShuffleSplit and StratifiedKFold methods.
In order to determine which to use between K-Fold and ShuffleSplit, though, we'll have to understand some key differences between the methods.
1. In K-fold, the model is trained at each iteration on a proportion of the data set equal to $\frac{k-1}{k}$, i.e. for $k=5$ the model is trained on 80% of the data at each iteration and for $k=10$ the model is trained on 90% of the data. There are $k$ training iterations in the algorithm.
In ShuffleSplit, the model is trained at each iteration on a defined train_size. The default size of the training set for the scikit-learn implementation is 90%. The number of iterations is parameterized (n_splits).
You can therefore configure two validation strategies that have the same number of training runs and are trained on the same proportion of the data, i.e. K-fold validation with K=10 and ShuffleSplit with train_size = .9 and n_splits = 10.
No difference here.
2. K-Fold is guaranteed to both train and test on every observation of the dataset an equal number of times ($k-1$ and 1 times, respectively).
ShuffleSplit does not guarantee this proportion- by re-sampling at each iteration, duplicate members of the test set can be selected twice, or several times. Some observations may not be represented in the test set.
This is a point for using K-Fold.
Some other discussion has brought up training time as a point in favor of ShuffleSplit: that is, you can configure ShuffleSplit to run with less demanding coverage (i.e. train_size = .8, n_splits = 5). This is only a strong argument for configurations that can't be approximated by adjusting the parameter $k$, since $k=5$ matches the ShuffleSplit parameters mentioned above.
For example, K-fold can't approximate train_size = .8, n_splits = 3. The tradeoff is that each configuration of ShuffleSplit that would be more computationally efficient than K-Fold also provides less complete validation coverage- in this example (train_size = .8, n_splits = 3), the model could never be tested on more than 60% of the data, and would almost always be tested on less than that.
TL;DR-
K-Fold is generally better except in edge cases where the computational demand of K-Fold isn't supported by a need for comprehensive cross-validation coverage.
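A short scikit-learn sketch of the equivalent-coverage configurations discussed above (the synthetic data and the estimator choice are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, ShuffleSplit, cross_val_score

rng = np.random.RandomState(0)
X = rng.rand(100, 5)
y = rng.randint(0, 2, size=100)
model = LogisticRegression()

# Same number of runs (10) and same training proportion (90%) in both strategies:
kf = KFold(n_splits=10, shuffle=True, random_state=0)
ss = ShuffleSplit(n_splits=10, train_size=0.9, random_state=0)

print(cross_val_score(model, X, y, cv=kf).mean())
print(cross_val_score(model, X, y, cv=ss).mean())
```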
• Just for clarification then, going by what you said above. Is K-Fold equivalent to ShuffleSplit validation if for the latter it is shown that we are are generating new folds without replacement after each fold has been tested? (Is Shuffle-Split implemented that way?) – Thornhale Mar 29 '17 at 23:45
• No, these two are not equivalent- ShuffleSplit is basically just iterations of train/test splitting. I.e. in iteration 1, you train/test split on the whole set, in iteration 2 you re-split the set and test again etc. What I was trying to communicate was unrelated to ShuffleSplit directly- basically, if you form the k folds sequentially rather than all at once, it's equivalent to doing it all at once. That statement just covers one possible interpretation of the question. – Thomas Cleberg Mar 30 '17 at 13:20
• Ah, ok. That part is clear. The only part that is not clear to me then is: When should I prefer ShuffleSplit over Kfold? And when should I prefer Kfold over SHuffleSplit? – Thornhale Mar 30 '17 at 13:27
• Added more to clarify the tradeoffs of this decision. – Thomas Cleberg Mar 30 '17 at 15:45 | 2020-04-06 09:57:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5223259925842285, "perplexity": 1007.6159722021864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371620338.63/warc/CC-MAIN-20200406070848-20200406101348-00146.warc.gz"} |
https://jp.maplesoft.com/support/help/maplesim/view.aspx?path=ArrayTools%2FAppend | ArrayTools - Maple Programming Help
ArrayTools
Append
append element to Array
Calling Sequence
Append(A, expr, opts)
Parameters
A - Array or Vector
expr - expression to append
opts - (optional) equation of the form inplace=true or inplace=false
Options
• inplace = true or false
Specifies whether the append operation should reuse the existing Array or generate a new Array. The default value is true.
Description
• The Append(A,expr) command appends the expression expr to the end of the Vector or the 1-D Array, A.
• When inplace=false is specified, the append operation allocates a new Array and does not modify the existing one. Otherwise, the existing object is modified.
• This function is part of the ArrayTools package, so it can be used in the short form Append(..) only after executing the command with(ArrayTools). However, it can always be accessed through the long form of the command by using ArrayTools[Append](..).
Examples
> with(ArrayTools):
> A := Array([2, 3, 5]);
A := [2  3  5]   (1)
> Append(A, 1/x);
[2  3  5  1/x]   (2)
> Append(A, sin(x));
[2  3  5  1/x  sin(x)]   (3)
> v := <1, 2*x>;
v := [1, 2 x] (column Vector)   (4)
> Append(v, 3*x^2, inplace=false);
[1, 2 x, 3 x^2] (column Vector)   (5)
> v;
[1, 2 x] (column Vector)   (6)
Compatibility
• The ArrayTools[Append] command was introduced in Maple 18.
• For more information on Maple 18 changes, see Updates in Maple 18. | 2021-05-13 03:57:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 13, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8312721848487854, "perplexity": 2543.835917700511}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992721.31/warc/CC-MAIN-20210513014954-20210513044954-00073.warc.gz"} |
https://www.techwhiff.com/issue/how-do-properties-of-water-influence-this-relationship--631580 | # How do properties of water influence this relationship
###### Question:
How do properties of water influence this relationship
### Which of the following diseases is the result of mosaicism in specific cells?
### How does having three branches of government make our lawmaking process more fair? Make sure to use the phrase “checks and balances” in your answer.
### The trans-Saharan trade increased a result of
### In damselflies a basal quadrangular cell in the wing venation is called
### Which philosopher would start with a tabula rasa and then develop ethical standards? a. Plato b. John Rawls c. Aristotle d. Robert Nozick
### 30 POINTS PLEASE I NEED HELP!!!!!! DONT GIVE CAP DONT GIVE CAP AND PLEASE BE HONEST!!! 30 POINTS ON THE LINE IT WOULD MEAN A LOT!! :-( 1. I am currently $13$ times as old as my granddaughter. Next year, I will be $11$ times as old as my granddaughter. How old am I now? 2. Gina has $15$ dollars more than twice as much money as her sister Maria. If Gina gives Maria $30$ dollars, then Gina will have half as much money as her sister. How many dollars does Gina have? 3. A rectangular plot of land ha
### Rewrite the point-slope equation in slope-intercept form: y - 3 = 2(x + 4) A) y = 2x + 5 B) y = 2x + 7 C) y = 2x + 9 D) y = 2x + 11
### Directions: Match the person with the appropriate investments. Explain your thinking: Desmond: Desmond, an 85-year-old retireeCarmen: Carmen, a 30-year-old attorney Indira: Indira, a 55-year-old electrician who expects to retire in 10 years InvestmentsMatch them with an explanation a. 100% invested in a mutual fund investing in new high-tech companies b. Mutual funds with 80% stocks, 20% bonds c. Mutual funds with 50% stocks, 50% bonds d. Mutual funds with 30% stocks, 70% bonds (I'm mainly doing
### Evaluate the following expression when x = 3 and y = 5: x^2 + y^3 2+x
### Find the volume of this rectangular prism
### Explain the importance of hydrogen compounds
### (-2t(t+1), t<6 g(t) = { ,6<<< 12 | 2+2 + 3+ .+> 12
### How did the continents come to exist
### HELPPPPPPPPPPPPPPPPPPPPPPPPPPP
### Three times two less than a number is greater that or equal to five times the number.find all of the numbers that satisfy the given conditions let n=a number.choose the inequality that represents the given relationship
### 5-37. Using a Karnaugh map, reduce the following equations to a minimum form. (a) X = ABC + AB + AB (b) Y = BC + ABC + BC (c) Z - ABC + AB C + ABC + ABT
5-37. Using a Karnaugh map, reduce the following equations to a min mum form. (a) X = ABC + AB + AB (b) Y = BC + ABC + BC (c) Z - ABC + AB C + ABC + ABT... | 2022-12-08 17:14:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47151756286621094, "perplexity": 2971.3194449843445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711344.13/warc/CC-MAIN-20221208150643-20221208180643-00680.warc.gz"} |
http://math-mprf.org/journal/articles/2013/1/ | Issue 1 (2013)
ARTICLES
Persistent Random Walks, Variable Length Markov Chains and Piecewise Deterministic Markov Processes, by P. Cenac, B. Chauvin, S. Herrmann, P. Vallois (pages 1-50)
Markov Approximation of Chains of Infinite Order in the $\bar{d}$-metric, by S. Gallo, M. Lerasle, D.Y. Takahashi (pages 51-82)
A Non-Linear Model of Trading Mechanism on a Financial Market, by V. Belitsky, Yu.M. Suhov, N.D. Vvedenskaya (pages 83-98)
The Geometry of Markov Chain Limit Theorems, by J. Bernhard (pages 99-124)
Convergence in Total Variation of an Affine Random Recursion in $[0,p)^{k}$ to a Uniform Random Vector, by C. Asci (pages 125-140)
A Note on the Convergence of Probability Functions of Markov Chains, by J. Baczynski, M.D. Fragoso (pages 141-148)
Duality and Cones of Markov Processes and Their Semigroups, by M. Moehle (pages 149-162)
http://vm.udsu.ru/issues/archive/issue/2017-2-9 | +7 (3412) 91 60 92
## Archive of Issues
Izhevsk, Russia
Year: 2017
Volume: 27
Issue: 2
Pages: 257-266
Section: Mathematics
Title: Scattering and quasilevels in the SSH model
Author(s): Tinyukova T.S.
Affiliation: Udmurt State University
Abstract: A topological insulator is a special type of material that is an insulator in the interior ("in bulk") and conducts electricity on the surface. The simplest topological insulator is a finite chain of atoms in polyacetylene. In the last decade, topological insulators have been actively studied in the physics literature. The great interest in topological insulators (and also in topologically similar superconducting systems) is due to the presence of a link between "bulk" and "boundary". In this article, we study the discrete SSH (Su-Schrieffer-Heeger) model for polyacetylene. This model describes an electron in a one-dimensional chain of atoms with two alternating amplitudes of transition to a neighboring atom. We find the spectrum and resolvent of this operator. The quasilevels (eigenvalues and resonances) in the case of a small potential are investigated. In addition, we obtain a solution of the Lippmann-Schwinger equation and asymptotic formulas for the probabilities of transmission and reflection in the case of a small perturbation.
Keywords: resolvent, spectrum, eigenvalue, resonance, Lippmann-Schwinger equation, probability of reflection
UDC: 517.958, 530.145.6
MSC: 81Q10, 81Q15
DOI: 10.20537/vm170209
Received: 1 February 2017
Language: Russian
Citation: Tinyukova T.S. Scattering and quasilevels in the SSH model, Vestnik Udmurtskogo Universiteta. Matematika. Mekhanika. Komp'yuternye Nauki, 2017, vol. 27, issue 2, pp. 257-266.
References:
1. Hasan M.Z., Kane C.L. Colloquium: topological insulators, Reviews of Modern Physics, 2010, vol. 82, issue 4, pp. 3045-3067. DOI: 10.1103/RevModPhys.82.3045
2. Bardarson J.H., Moore J.E. Quantum interference and Aharonov-Bohm oscillations in topological insulators, Rep. Progr. Phys., 2013, vol. 76, no. 5, 056501. DOI: 10.1088/0034-4885/76/5/056501
3. Asbóth J.K., Oroszlány L., Pályi A. A short course on topological insulators: band-structure topology and edge states in one and two dimensions, Lecture Notes in Physics, 2016, vol. 919. DOI: 10.1007/978-3-319-25607-8
4. Ruzicka F. Hilbert space inner products for $\mathcal{PT}$-symmetric Su-Schrieffer-Heeger models, International Journal of Theoretical Physics, 2015, vol. 54, issue 11, pp. 4154-4163. DOI: 10.1007/s10773-015-2531-4
5. Leijnse M., Flensberg K. Introduction to topological superconductivity and Majorana fermions, Semiconductor Science and Technology, 2012, vol. 27, no. 12, 124003. DOI: 10.1088/0268-1242/27/12/124003
6. Tinyukova T.S. Two-dimensional difference Dirac operator in the strip, Vestn. Udmurt. Univ. Mat. Mekh. Komp'yut. Nauki, 2015, vol. 25, issue 1, pp. 93-100 (in Russian). DOI: 10.20537/vm150110
7. Tinyukova T.S. Scattering in the case of the discrete Schrödinger operator for intersected quantum wires, Vestn. Udmurt. Univ. Mat. Mekh. Komp'yut. Nauki, 2012, issue 3, pp. 74-84 (in Russian).
DOI: 10.20537/vm120308 Full text | 2018-05-26 19:35:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28345486521720886, "perplexity": 2638.7210651463197}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867859.88/warc/CC-MAIN-20180526190648-20180526210648-00152.warc.gz"} |
https://mcbachmann.de/post/file-management-fdupes-with-hardlinks/ | Fdupes is a very useful tool - it searches your harddisk for duplicated files and helps you to keep it structured. Currently, it only supports deleting duplicated files, which is not suitable for my music collection, because I want to store the same music in different places (eg. to make a collection for different reasons like music for a wedding and so on). One possibility would be to add tags to the music. But this would take very long to search through all and you can’t just copy the preselected files with just a filemanager. The solution is: fdupes should hardlink instead of deleting.
Warning
Use this only if you have a backup. The patch is new and not tested on a wide basis. This means it can destroy all your files! It also has no interactive mode. Use it at your own risk.
Thanks to Javier Fernández-Sanguino Peña it's now possible to hardlink instead of deleting duplicates (currently this is only available for deb package managers).
First you need to get these 3 files from a Debian FTP:
• fdupes_1.50-PR2-2.diff.gz
• fdupes_1.50-PR2-2.dsc
• fdupes_1.50-PR2.orig.tar.gz
Then download the whole patch from this website: Bug#284274: Patch for the hardlink replacement bug request and save it as fdupes.patch.
If your system is not already prepared to compile software, you’ll need at least the following packages:
• build-essential
• devscripts
• debhelper
• dpatch
After that run this command to extract the source:
dpkg-source -x fdupes_1.50-PR2-2.dsc
Then change to the source directory and apply the patch:
cd fdupes-1.50-PR2
patch -p1 <../fdupes.patch
Now compile the source and make a debian package with the following command:
debuild -us -uc
After that you can install/upgrade the package easily with dpkg:
cd ..
sudo dpkg -i fdupes*deb
Happy hardlinking - the new switch is "-L".
Sven | 2022-06-29 16:29:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17476557195186615, "perplexity": 3298.81278913156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103640328.37/warc/CC-MAIN-20220629150145-20220629180145-00716.warc.gz"} |
https://www.physicsforums.com/threads/inverse-square-law-in-1-dimensional-space.417874/ | # Inverse Square Law in 1 dimensional space
1. Jul 23, 2010
### Izhaki
Sorry for what could be a rather very stupid question. I'm not an expert in physics nor maths.
The inverse square law says that strength is inversely proportional to the square of the distance from the source. This is justified by working out the surface area of a sphere, which involves a squared radius (distance).
Continuing this line of thought (based on the circumference of a circle), in a 2-dimensional space I'd assume we get the inverse proportional law (/r or /d).
My question is: what law governs a 1-dimensional field? It appears sensible to me that the rule would be the inverse square root (/sqrt(d)). But, for the life of me, I can't work out why.
Izhaki
2. Jul 23, 2010
### K^2
No. It's a constant. The series is algebraic, not geometrical. So for any N-dimensional space, the law is thus.
$$1/R^{N-1}$$
For N=1, that comes out to just 1. No R-dependence.
3. Jul 23, 2010
### Izhaki
Thanks,
/1 was my second option after /d, but it implies that in a one-dimensional space a particle has the same force effect on a particle 3 places to its right as on a particle 5 places to its right. Which doesn't make much sense.
I take your answer as the correct one, but was wondering if you can point me to its source / proof? I don't need an explanation, just keywords to google.
Thanks again,
Izhaki
4. Jul 23, 2010
### Born2bwire
You should think more about why the inverse square law comes about.
In three dimensions, let's say I have an isotropic point source. This means that it emits a spherical wavefront. In a lossless medium, the energy density of this wavefront on the spherical shell must remain constant. However, the area of the shell grows as the wave propagates out. The surface area of the shell is proportional to r^2. Thus, we would expect the energy density to drop off by 1/r^2 to keep the energy across the entire surface constant.
In two-dimensions, the point source (or a line source in 3D) now emits circular wavefronts that have a circumference that is proportional to r. Thus the energy must drop off by 1/r.
In one-dimension, the point source simply emits a wavefront that flows along a 1D line. The energy density does not decrease as the wavefront travels because there is no geometrical spreading of the wavefront. Hence, it is constant.
You could also look at the point source solutions for wave equations. In 3D, it's something like
$$\frac{e^{ikr}}{r}$$
In 2D:
$$H_\alpha^{(1)}(k\rho)$$
In 1D:
$$A\cos(kx)+B\sin(kx)$$
The asymptotic behavior of the above is 1/r^2, 1/r and 1 for the power/energy (square of the amplitudes).
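To state the geometric-spreading argument compactly (a sketch; here $A_{N-1}$ is the area of the unit $(N-1)$-sphere and $P$ the constant power of the source): conservation of flux through a sphere of radius $r$ gives
$$I(r)\cdot A_{N-1}\,r^{N-1}=P=\text{const}\quad\Longrightarrow\quad I(r)=\frac{P}{A_{N-1}}\,r^{-(N-1)}$$
so $N=3$ gives $1/r^2$, $N=2$ gives $1/r$, and $N=1$ gives a constant.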
5. Jul 23, 2010
### Izhaki
Brilliant,
The word 'wavefront' pretty much explained it all for me. On 3 dimensions the wavefront expands sphere-like, in 2D circle-like, and in 1D it doesn't expand at all (by way of loose visual analogy, it's always same-size two points getting away from the source).
Also, thanks for the wave equations reference.
If I'm taking your explanation a step further, would it be right to say that (strength-like) physical quantities are 'transmitted'. In other words, the earth's gravity is not really a static force that pulls the moon to earth, but rather a dynamic travelling force (wavefront-like) that pulls the moon to earth? (If I'm not mistaken, gravity travels at the speed of light, which kind of supports this assumption.) By the way, I'm talking of Newtonian gravity, not Einstein.
Sorry for using everyday ambiguous language for what is really mathematical concepts...
Last edited: Jul 23, 2010
6. Jul 23, 2010
### Born2bwire
No, in Newtonian gravity there are no waves and the gravitational force is instantaneous. When we talk about the inverse square law it should be in the context of something like a wave equation. The fact that the gravitational force between bodies is similar in respect to the energy of a wave from a point source is coincidental here. The idea of waves and finite speed of gravitation is only something that comes about in terms of quantum field theory and general relativity.
7. Jul 23, 2010
### Izhaki
OK,
So last (rather historical) question then:
Was the inverse square law an accepted, yet unexplained, law of nature until General Relativity and quantum field theory came about?
In other words, did all previous attempts to put this concept into a coherent proof fail? Newton used it, but could never explain it?
8. Jul 23, 2010
### K^2
Actually, even with instantaneous gravity/electric forces you can understand the inverse square law. Basically, it's just Gauss' Law applied to whatever dimensionality you happen to have. In N dimensional space, Gauss Surface will always be N-1 dimensional. And that gives you the correct power on the inverse law.
9. Jul 23, 2010
### Izhaki
Thanks,
I'm not 100% fluent on Gauss' law, but definitely will look into it.
Thanks again,
Izhaki | 2018-06-22 17:50:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6765633225440979, "perplexity": 674.5039678052464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864740.48/warc/CC-MAIN-20180622162604-20180622182604-00570.warc.gz"} |
http://mathoverflow.net/questions/99932/cartesian-product-of-graphs | # Cartesian product of graphs
Let $G, H$ be two infinite connected graphs. Suppose that we can color $G$ and $H$ in $m$ and $n$ colors respectively so that monochromatic clusters (i.e. monochromatic connected components) are of uniformly bounded diameters. Is it possible to color the Cartesian product $G\times H$ in $m+n-1$ colors so that the diameters of monochromatic clusters are bounded. Here $G \times H$ is the graph with vertices $(x,y), x\in G, y\in H$ and $(x,y)$ connected with $(p,q)$ iff either $x=p$ and $y,q$ are adjacent in $H$ or $y=q$ and $x,p$ are adjacent in $G$.
Example. Let $Z_3$ be the graph with vertices $i\in \mathbb{Z}$ and edges $(i,j)$ where $|i-j|\le 3$. It can be colored in 2 colors as follows: color intervals $[10k+1, 10k+5]$, $k\in \mathbb{Z}$, in black and intervals $[10k+6,10(k+1)]$ in white. All monochromatic clusters have diameters 2. Then $Z_3\times Z_3$ can be colored in 3=2+2-1 colors so that all monochromatic clusters have uniformly bounded diameters. The clusters are bricks.
Update The question above turned out to be too easy and not what I wanted to ask. Here is the real question.
Let $G, H$ be two infinite connected graphs. Pick a number $\lambda\ge 1$. Suppose that we can color $G$ and $H$ in $m$ and $n$ colors respectively so that monochromatic $\lambda$-clusters (i.e. maximal subsets $X$ where any two vertices $u,v$ are connected by a monochromatic sequence $u=x_0, x_1, ..., x_k=v$ where the distance between $x_i, x_{i+1}$ is at most $\lambda$) are of uniformly bounded diameters. Is it possible to color the Cartesian product $G\times H$ in $m+n-1$ colors so that the diameters of monochromatic $\lambda$-clusters are bounded. Here $G \times H$ is the graph with vertices $(x,y), x\in G, y\in H$ and $(x,y)$ connected with $(p,q)$ iff either $x=p$ and $y,q$ are adjacent in $H$ or $y=q$ and $x,p$ are adjacent in $G$.
In the first version of the question $\lambda=1$. For an example, consider the graph $\mathbb{Z}$ (a line), and $\lambda=3$.
If David's answer worked before (of which I am unsure), should it not also work here, as to get to a different cluster of the same color one needs lambda + 1 steps "in both directions"? Gerhard "Still Worried At First Version" Paseman, 2012.06.18 – Gerhard Paseman Jun 18 '12 at 22:55
David's proof does not work for $\lambda=2$ even for $G=H=\mathbb{Z}$ because you can switch directions. A fancier reason is that the asymptotic dimension of the plane is 2, so you need 3 colors for almost all $\lambda$. The question asks if the dimension of direct product at scale $\lambda$ is the sum of dimensions of the factors. – Mark Sapir Jun 19 '12 at 4:01
Yes. In fact you only need max$(m, n)$ colors. Let's assume $m \leq n$ (switching $G$ and $H$ if necessary.) Number the colors from 0 to $m - 1$ and 0 to $n - 1$. If $x$ has color $i$ and $y$ has color $j$, then give $(x, y)$ color $i + j$ mod $n$. Then the monochromatic connected components of the cartesian product are the cartesian products of the monochromatic connected components. | 2014-03-12 15:22:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8973796963691711, "perplexity": 184.8440723813258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021900438/warc/CC-MAIN-20140305121820-00062-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://deepai.org/publication/escaping-the-gradient-vanishing-periodic-alternatives-of-softmax-in-attention-mechanism | # Escaping the Gradient Vanishing: Periodic Alternatives of Softmax in Attention Mechanism
Softmax is widely used in neural networks for multiclass classification, gate structures and attention mechanisms. The statistical assumption that the input is normally distributed supports the gradient stability of Softmax. However, when it is used in attention mechanisms such as transformers, since the correlation scores between embeddings are often not normally distributed, the gradient vanishing problem appears, and we confirm this point experimentally. In this work, we suggest replacing the exponential function with periodic functions, and we delve into some potential periodic alternatives to Softmax from the view of value and gradient. Through experiments on a simply designed demo based on LeViT, our method is proved to be able to alleviate the gradient problem and yields substantial improvements compared to Softmax and its variants. Further, we analyze the impact of pre-normalization for Softmax and our methods through mathematics and experiments. Lastly, we increase the depth of the demo and prove the applicability of our method in deep structures.
## 1 Introduction
Gradient Vanishing in the Attention block: Since the transformer is a pixel-wise process, the distribution of the input matters more than it does for a CNN. For a CNN, deviations caused by values exceeding the expected range are diluted within patches. The statistical assumption that the input is normally distributed supports the gradient stability of Softmax. However, there are always some values that exceed the expected range, causing gradient vanishing. This situation becomes worse in attention mechanisms. The input of Softmax represents the relationships between embeddings (patches), and the distribution of the input varies across images. That means that in attention mechanisms, a part of the input is always stuck in the saturation area, as shown in Figure 2, leading to gradient vanishing and long training.
Motivations: We are interested in the observation from transformer-based models that the formation of attention corresponding to objects often seems to lag behind that corresponding to boundaries, as shown in the first row of Figure 1. Attention can form on the boundary in the early stages of training, but only slowly appears on objects in the mid-late stages. However, the object should be a preferred position to put attention on, since there is no inductive bias such as translation equivariance and locality [1], and objects should be more important than boundaries for the transformer. By investigating the attention scores $x$ (the input of Softmax), we find that the values corresponding to the object are larger than those of the boundary, and are more likely to fall into the saturation area of Softmax. Therefore, we speculate that the object is indeed more important, but it is difficult to form attention on the object, since the value of $x$ is too large and falls into the saturation area. In contrast, the values of the boundary are moderate, so attention can form smoothly. We believe that this situation is one of the reasons why transformers need long training.
In this work, we suggest replacing the exponential function with periodic functions, and we delve into some potential periodic alternatives to Softmax from the view of value and gradient. Through experiments on a simply designed demo based on LeViT, our method is proved able to alleviate the gradient problem and to perform better than Softmax and its variants.
To summarize, the main contributions of this paper are:
• Explore the gradient performance of Softmax in the transformer block, and prove that the input of Softmax is not normally distributed and that gradient vanishing does happen;
• Introduce a series of periodic alternatives to Softmax in attention mechanisms, which compress the input into the unsaturated area through periodic functions to escape gradient vanishing;
• Explore the impact of pre-normalization for Softmax and our methods, and observe that pre-normalization is only a conditional solution and not always a good choice;
## 2 Related works
There are few studies on alternatives to Softmax, since Softmax is mostly used to output classification results, and the gradient vanishing can be avoided there by using the joint analytical gradient of Softmax and Cross-Entropy. But in the attention mechanism, Softmax is used alone, so the gradient vanishing problem appears. Other works are devoted to enhancing the input features of Softmax by normalization [3; 4; 5; 6; 7]. However, they all focus on the representation of features and do not address the gradient vanishing problem.
Taylor softmax: Vincent et al. [8] used a second-order Taylor series approximation of $e^x$, $f_2(x)=1+x+0.5\,x^2$, and derived the Taylor softmax as follows:

$$S_j=\frac{1+x_j+0.5\,x_j^2}{\sum_i^d\left(1+x_i+0.5\,x_i^2\right)},\qquad x_j\subseteq(x_1,x_2,\ldots,x_j,\ldots,x_d)$$

where $d$ is the dimension of the input of the Taylor softmax. Since the quadratic function changes smoothly, Taylor softmax can generate softer classification results to alleviate overfitting. However, when used without Cross-Entropy, Taylor softmax causes gradient vanishing too, because of its saturation area.
Soft-margin softmax: Liang et al. [9] introduced a distance margin into Softmax to strengthen the intra-class compactness and inter-class separability of the learned features. The Soft-margin (SM) softmax can be described as follows:

$$S_j=\frac{e^{x_j-m}}{e^{x_j-m}+\sum_{i\neq j}^d e^{x_i}},\qquad x_j\subseteq(x_1,x_2,\ldots,x_j,\ldots,x_d)$$

where the margin $m$ is manually set; when $m$ is set to zero, the SM-Softmax becomes identical to the original Softmax. SM-Softmax can be considered a shifted version of Softmax. Similar to Taylor softmax, SM-Softmax and its variant, Ensemble Soft-Margin Softmax [10], are proposed to encourage the discriminability of features, and the gradient vanishing problem is still not addressed.
SM-Taylor softmax: Kunal et al. [11] explored higher-order Taylor series approximations of $e^x$ to come up with an order-$n$ Taylor softmax, where:

$$f_n(x)=\sum_{i=0}^n\frac{x^i}{i!}$$

They proved that $f_n(x)$ is always positive definite if $n$ is even. Additionally, they combined the strengths of Taylor softmax and Soft-margin softmax, and proposed the SM-Taylor softmax as follows:

$$S_j=\frac{f_n(x_j)}{\sum_i^d f_n(x_i)},\qquad x_j\subseteq(x_1,x_2,\ldots,x_j,\ldots,x_d)$$

However, it is still a method to enhance features, not a solution to the gradient problem.
## 3 Formulation
For convenience, we denote the input, inter-value and scores by $x$, $v$ and $S$. In this section, we try to build some periodic functions as alternatives to Softmax. In addition, there are five aspects that determine whether a function is a favorable alternative: (1) value stability; (2) gradient stability; (3) saturation area; (4) zero-region gradient; (5) information submerging. Furthermore, when judging the gradient-related properties of the functions, we only consider the aspects related to $x_j$ instead of the other elements contained in the input. Since the scores are normalized over all elements of $v$, the correlation between the gradient and the other elements is unavoidable for the periodic functions. The plots of $\partial S_j/\partial x_j$ and more discussion on how the other elements of the input influence the gradient of $S_j$ are provided in A.1.
According to the research on the Taylor Softmax function proposed by Vincent et al. [8] and the higher-order Taylor Softmax proposed by Kunal et al. [11], it is reasonable to map the input with a monotonic function, since the input can adapt to a suitable value range as the parameters update. Therefore, we suggest using a periodic function to compress the input, so as to avoid approximating small inputs by a fixed value (to keep them positive), and also to avoid the output being too large to have an appropriate gradient.
### 3.1 Softmax
What Softmax does is map the input $x$ to an inter-value $v$, $v_j=e^{x_j}$, and map the inter-value to scores $S$, $S_j=v_j/\sum_i^d v_i$. The exponential function keeps negative inputs positive, but also makes positive inputs extremely large. For a large input $x_j$, $e^{x_j}$ is too large and dominates $\sum_i e^{x_i}$, which means $S_j\to 1$, $\partial S_j/\partial x_j\to 0$; and for a small input $x_j$, $e^{x_j}\to 0$, and since $S_j\to 0$, $\partial S_j/\partial x_j\to 0$. Therefore, the major cause of gradient vanishing in Softmax is that we need to compress values with unknown upper and lower bounds into $(0,1)$. To do this, there has to be a saturation area for large and small values.
Before the discussion of the alternatives to Softmax, it is necessary to clarify the advantages of Softmax. First of all, the output of Softmax is positive definite. And due to the exponential function, the differences between inputs are magnified, which means Softmax can show the differences between inputs well, a good characteristic for attention mechanisms. Besides, according to the definition:

$$S^{\mathrm{softmax}}_j=\frac{e^{x_j}}{\sum_i^d e^{x_i}},\qquad x_j\subseteq(x_1,x_2,\ldots,x_j,\ldots,x_d)$$

$$\frac{\partial S_j}{\partial x_j}=\frac{M\cdot e^{x_j}}{(M+e^{x_j})^2},\qquad M=\sum_{i\neq j}^d e^{x_i}$$

$$\frac{\partial^2 S_j}{\partial x_j^2}=M\cdot e^{x_j}\cdot(e^{x_j}+M)^{-2}-2\cdot M\cdot e^{2x_j}\cdot(e^{x_j}+M)^{-3}$$

Let $\partial^2 S_j/\partial x_j^2=0$; we have:

$$M\cdot e^{x_j}\cdot(e^{x_j}+M)^{-2}-2\cdot M\cdot e^{2x_j}\cdot(e^{x_j}+M)^{-3}=0\;\Rightarrow\;e^{x_j}=M$$

$$\mathrm{Extre}\left(\frac{\partial S_j}{\partial x_j}\right)=\frac{M\cdot M}{(M+M)^2}=\frac{1}{4}$$

which means the max gradient of Softmax is 0.25, so it will not cause gradient explosions. Additionally, in spite of the saturation problem, no matter how large the other elements of the input are, a sufficiently large value can always get an appropriate gradient.
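A quick NumPy check of the two claims above (a sketch; the diagonal of the Softmax Jacobian can be rewritten as $S_j(1-S_j)$, which is the same quantity as $M\cdot e^{x_j}/(M+e^{x_j})^2$):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

x = np.array([-8.0, 0.0, 1.0, 8.0])   # extreme values sit in the saturation area
S = softmax(x)
diag_grad = S * (1 - S)               # dS_j/dx_j = S_j * (1 - S_j)
print(diag_grad)                      # near-zero entries for x = -8 and x = 8
print(diag_grad.max() <= 0.25)        # True: the maximum gradient is 1/4
```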
### 3.2 Sin-max-constant / Cos-max:
When it comes to periodic functions, there is a good reason to use the sine function, since it is widely used and differentiable. Therefore, we set the inter-value to $v_j=1+\sin(x_j)$ to keep the function positive definite, following the suggestion in [12], and Sin-max-constant is defined as follows:

$$S^{\mathrm{sinmax}}_j=\frac{1+\sin(x_j)}{d+M+\sin(x_j)},\qquad M=\sum_{i\neq j}^d\sin(x_i)$$

where $d$ represents the dimension of the input. Sin-max-constant compresses the input into $[0,2]$ through $1+\sin(x_j)$, and for sine there is no saturation area.
However, let $E(\sin(x_i))=0$, then $E(M)=0$, and we have:

$$E(S^{\mathrm{sinmax}}_j)=\frac{1+\sin(x_j)}{d}=\frac{1}{d}+\frac{\sin(x_j)}{d},\qquad d\gg\sin(x_j)$$

$$E(S^{\mathrm{sinmax}}_j)\approx\frac{1}{d}$$

which means that as the dimension of the input increases, the influence of $\sin(x_j)$ is weakened by the constant term, causing $x_j$ to be overwhelmed and no longer mapped to $S_j$ correctly. Besides, consider it from the view of gradient:

$$\frac{\partial S_j}{\partial x_j}=\frac{(M+d-1)\cdot\cos(x_j)}{(M+d+\sin(x_j))^2}$$

$$E\left(\frac{\partial S_j}{\partial x_j}\right)=\frac{(-\sin(x_j)+d-1)\cdot\cos(x_j)}{d^2}$$

Similar to the value, as the dimension of the input increases, the gradient of Sin-max drops to zero, and gradient vanishing occurs over the entire value range. The main reason for these defects is the constant term 1 in the inter-value $1+\sin(x_j)$.
We try to remove the constant term in the inter-value, and the expression becomes:

$$S^{\mathrm{sinmax}}_j=\frac{\sin(x_j)}{M+\sin(x_j)},\qquad M=\sum_{i\neq j}^d\sin(x_i)$$

Now let $\sum_i^d\sin(x_i)\to 0$, i.e., $M\to-\sin(x_j)$; we have:

$$E(S^{\mathrm{sinmax}}_j)=\frac{\sin(x_j)}{0}\to\pm\infty$$

which means the value of $S_j$ is unstable, causing the network to have a great risk of breaking down. And for the gradient, since $M\to-\sin(x_j)$ and $(M+\sin(x_j))^2\to 0$, we have:

$$\frac{\partial S_j}{\partial x_j}=\frac{M\cdot\cos(x_j)}{(M+\sin(x_j))^2}$$

$$E\left(\frac{\partial S_j}{\partial x_j}\right)=\frac{-0.5\cdot\sin(2x_j)}{0}\to\pm\infty$$

which is also unstable, because $\sin(x)$ is not positive definite and the denominator can vanish.
The reason why re-adding the constant term is not good is similar to Sin-max-constant, where the differences within the inter-values are submerged by the constant term as the dimension grows. Let $\cos(x_j)=\sin(x_j+0.5\pi)$ be the inter-value instead; we can define Cos-max as:

$$S^{\mathrm{cosmax}}_j=\frac{\cos(x_j)}{M+\cos(x_j)},\qquad M=\sum_{i\neq j}^d\cos(x_i)$$

$$\frac{\partial S_j}{\partial x_j}=\frac{-M\cdot\sin(x_j)}{(M+\cos(x_j))^2}$$

Assume that $x_j$ strictly belongs to $(-0.5\pi,+0.5\pi)$; then Cos-max can be considered as Sin-max shifted to a positive definite range. However, a gradient vanishing problem will appear when the input clusters in the zero-region. Besides, from the view of gradient stability, we have:

$$\frac{\partial^2 S_j}{\partial x_j^2}=-M\cdot\cos(x_j)\cdot(M+\cos(x_j))^{-2}+2\cdot M\cdot\sin(x_j)\cdot(M+\cos(x_j))^{-3}$$

Let $\partial^2 S_j/\partial x_j^2=0$; we have:

$$\cos(x_j)-2\cdot\sin(x_j)\cdot(M+\cos(x_j))^{-1}=0$$

$$-4+(M^2+1)\cdot\cos^2(x_j)+2\cdot M\cdot\cos^3(x_j)+\cos^4(x_j)=0$$

According to the solution provided in A.5, we can have:

$$\mathrm{Extre}\left(\frac{\partial S_j}{\partial x_j}\right),\qquad M\in(-\infty,+\infty)$$

Outside the assumed range, $\cos(x_i)$ is not positive definite, so as $M$ approaches $-\cos(x_j)$ the extreme value of $\partial S_j/\partial x_j$ approaches $\pm\infty$, which means Cos-max is gradient unstable, causing the network to have a great risk of breaking down like Sin-max.
### 3.3 Sin2-max-shifted
To ensure that is positive definite, and no extra constant term is introduced, is a reasonable choice. So we can define Sin2-max as:
Ssin2maxj=sin2(xj)M+sin2(xj),M=d∑i≠jsin2(xi)
Note that although , Sin2-max is not just a scaled double-frequency version of Cos-max owing to the constant terms. Therefore, the numerical and gradient characteristics of Sin2-max and Cos-max are different.
As shown in Figure 4, the possible problem of Sin2-max is that, assuming the input follows a zero-centered distribution, most of the inputs cluster in the region close to 0, so most of the gradients are close to 0, which makes the parameters difficult to update. To solve this 'conditional' problem, we can shift the inputs to the non-zero region by adding a phase to $x_j$.
Let $\partial^2 S_j/\partial x_j^2 = 0$. We have:
$$\cos(2\cdot x_j)=-\frac{1}{2}\left(2M+1\pm\sqrt{8+(2M+1)^2}\right)$$

$$x_j \text{ for } \max\left(\frac{\partial S_j}{\partial x_j}\right)=\frac{1}{2}\arccos\left(-\frac{1}{2}\left(2M+1\pm\sqrt{8+(2M+1)^2}\right)\right)$$
Unfortunately, the $x_j$ for the maximum gradient will change with $M$, so we have to find an approximate solution. Besides, as $M$ changes, the location of the maximum gradient will oscillate, causing gradient explosion or vanishing over the entire value range. Since $\sin^2(x)$ and its gradient share the same cycle of period $\pi$, the period of the extreme point is also $\pi$. We set the phase shift to $0.25\pi$ to make the gradient stable. So we get Sin2-max-shifted as follows:
$$S^{\text{sin2max-shifted}}_j=\frac{\sin^2(x_j+0.25\cdot\pi)}{M+\sin^2(x_j+0.25\cdot\pi)},\quad M=\sum_{i\neq j}^{d}\sin^2(x_i+0.25\cdot\pi)$$

$$\frac{\partial S_j}{\partial x_j}=\frac{M\cdot\sin(2\cdot x_j+0.5\cdot\pi)}{(M+\sin^2(x_j+0.25\cdot\pi))^2}$$
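A minimal NumPy sketch of Sin2-max-shifted, assembled from the definitions above (this is our reading of the formulas, not the authors' released code), illustrates that the gradient no longer vanishes for inputs clustered at zero:

```python
import numpy as np

def sin2_max_shifted(x):
    num = np.sin(x + 0.25 * np.pi) ** 2
    return num / num.sum()

def sin2_max_shifted_grad_diag(x):
    # dS_j/dx_j = M * sin(2*x_j + 0.5*pi) / (M + sin^2(x_j + 0.25*pi))^2
    num = np.sin(x + 0.25 * np.pi) ** 2
    M = num.sum() - num                  # leave-one-out sums
    return M * np.sin(2 * x + 0.5 * np.pi) / (M + num) ** 2

x = np.zeros(8)                          # inputs clustered at zero
print(sin2_max_shifted(x))               # uniform scores
print(sin2_max_shifted_grad_diag(x))     # non-zero gradients, unlike unshifted Sin2-max
```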
### 3.4 Sin-Softmax
From another point of view, instead of replacing the exponential function, compressing the input into the unsaturated area is also a reasonable choice. To keep the gradient of inputs in the zero region away from 0, we choose sine rather than cosine. We define Sin-Softmax as follows:
$$S^{\text{sin-softmax}}_j=\frac{e^{\sin(x_j)}}{M+e^{\sin(x_j)}},\quad M=\sum_{i\neq j}^{d}e^{\sin(x_i)}$$

$$\frac{\partial S_j}{\partial x_j}=\frac{M\cdot e^{\sin(x_j)}\cdot\cos(x_j)}{(M+e^{\sin(x_j)})^2}$$
Sin-Softmax can be considered a periodically normalized version of Softmax, and is also similar to the periodic activation function proposed in SirenNet [13]. The best part of Sin-Softmax is that the input is compressed by the periodic function into the well-performing region of Softmax, so the value and gradient are both stable, as shown in Figure 4 and A.1. Additionally, the output is positive definite owing to the exponential function, and the gradient in the near-zero region is also well behaved. However, the possible defect of Sin-Softmax is that the largest score can only be $e^2$ times the smallest, since
$$\sin(x_j)\in(-1,+1),\quad e^{\sin(x_j)}\in\left(\frac{1}{e},\,e\right)$$
which might cause the most contributing value to be drowned out among a large number of low-contributing values as the dimension of the score maps (or the number of embeddings) increases.
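The bounded score ratio is easy to see in code. The sketch below (our own illustration) applies Sin-Softmax to extreme inputs and confirms that the largest-to-smallest score ratio stays at or below $e^2 \approx 7.39$:

```python
import numpy as np

def sin_softmax(x):
    e = np.exp(np.sin(x))
    return e / e.sum()

x = np.array([-100.0, 0.0, 1.5 * np.pi, 0.5 * np.pi])  # includes sin = -1 and sin = +1
s = sin_softmax(x)
print(s, s.max() / s.min())              # ratio <= e**2 ~ 7.389 for any input
```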
### 3.5 Siren-max
Inspired by SirenNet [13], where sine is used as the activation function, we define a mapping function with beneficial gradient properties by:
$$f(x)=\frac{\sin(x)}{1-\sin(x)}$$
Since $f(x)\in[-\frac{1}{2},+\infty)$, to make the function positive definite we add $\frac{1}{2}$ to $f(x)$ and define Siren-max as follows:
$$S^{\text{siren-max}}_j=\frac{\frac{1+\sin(x_j)}{2-2\sin(x_j)}}{M+\frac{1+\sin(x_j)}{2-2\sin(x_j)}},\quad M=\sum_{i\neq j}^{d}\frac{1+\sin(x_i)}{2-2\sin(x_i)}$$
Note that the upper bound of $f(x)$ is infinity, so adding a constant term to it will not cause the differences between inputs to be submerged as in Sin-max-constant. As shown in Figure 4, there is no saturation area in Siren-max, and it performs well in the near-zero region. The possible defect is that the gradient has periodic jump points, which might make training unstable.
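A short sketch of Siren-max under the formulas above (our illustration; the small epsilon guarding the pole at $\sin(x)=1$ is our addition) shows both the unbounded numerator and the behaviour near the jump point:

```python
import numpy as np

def siren_max(x, eps=1e-8):
    # f(x) + 1/2 = (1 + sin(x)) / (2 - 2*sin(x)); eps guards the pole at sin(x) = 1
    num = (1.0 + np.sin(x)) / (2.0 - 2.0 * np.sin(x) + eps)
    return num / num.sum()

x = np.array([0.0, 0.5, 1.0, 1.57])      # 1.57 ~ pi/2, just below the jump point
print(siren_max(x))                      # the near-pole element dominates the scores
```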
### 3.6 Pre-normalization
Note that the pre-normalization discussed here is not parameterized like Batch-norm [14], Layer-norm [15] or Group-norm [16]; we denote by pre-normalization the row-wise normalization of the elements of the score map. Considering the saturation problem, normalizing the input is also a reasonable operation. However, since the distribution of attention score maps differs across images, normalization can hardly compress the maps into a specified value range precisely. Besides, the normalized function might also saturate, and the saturation area shifts with the mean and variance of the input. Therefore, the gradient behaviour of pre-normalized Softmax is similar to that of Softmax, which may also cause gradient vanishing. As a result, although pre-normalization can roughly gather the values into a specified range, it may bring new gradient problems. More discussion and the gradient plots of normalization, the pre-normalized version of Softmax, and the periodic alternatives are provided in A.2.
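For concreteness, here is a minimal sketch of our reading of pre-normalization (row-wise standardization before the mapping function, illustrated with Softmax; the implementation details are our assumptions, not quoted code):

```python
import numpy as np

def pre_norm_softmax(scores, eps=1e-6):
    mu = scores.mean(axis=-1, keepdims=True)
    sigma = scores.std(axis=-1, keepdims=True)
    z = (scores - mu) / (sigma + eps)    # non-parametric row-wise standardization
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = np.random.default_rng(0).normal(loc=30.0, scale=10.0, size=(2, 6))
print(pre_norm_softmax(scores))          # rows sum to 1; the large mean no longer saturates
```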
## 4 Experiments
To eliminate the unexpected effects of various tricks, the experiments are run on a simply designed demo network based on LeViT [1], as shown in Figure 5.
In the experiments, we observe that most of the gradients of Softmax are very small, and only a small part of the updates can be successfully back-propagated, even in the early stages of training, as shown in Figure 6. This phenomenon supports our point that the values used to generate the attention scores are related to the content of the input images and are not strictly normally distributed. Therefore, even when divided by $\sqrt{d}$ as in the originally designed transformer block, the values might still fall into the saturation area of Softmax, making updates difficult. In our method, there is no saturation area in the functions, so the gradient is satisfactory at each training stage, which promotes the updating of parameters. More 3D graphs of gradients extracted from the experiments are provided in A.3.
As shown in Figure 1 and Figure 7, due to the gradient vanishing problem, Softmax might hinder the formation of attention, especially in the early stages of training. We observe that attention forms more smoothly on object boundaries; in contrast, the attention corresponding to the objects themselves only forms in the later stages of training. A possible reason is that the scores on the boundary between object and background are moderate, so the gradient flows smoothly, while the scores on the objects are larger and might fall into the saturation area of Softmax, causing gradient vanishing and locking the formation of attention. Under the periodic alternatives, however, the attention is updated without restriction across the image, which strengthens our argument.
The gradient performance in the zero region is crucial for training, and the early breakdown of training under Cos-max and Sin2-max can be ascribed to this. Besides, the stability of the gradient is also very important: since there are jump points within the range of the input, training under Siren-max breaks down too. In addition, since the input is submerged in the constant terms, training under Sin-max-constant diverges.
Encouragingly, Sin2-max-shifted and Sin-Softmax exceed Softmax in the results, just as we speculated, and norm-Siren-max also performs surprisingly well. The results are shown in Table 2. The major drawbacks of Cos-max and Sin-max-constant are the gradient performance in the zero region and information submergence, respectively, which cannot be remedied by pre-normalization. As for Siren-max, pre-normalization optimizes the distribution of the input and helps Siren-max avoid the gradient jump points, resulting in satisfactory performance. Softmax can also be improved, since pre-normalization helps the input escape from the saturation area to some degree. However, Sin2-max-shifted and Sin-Softmax are not sensitive to the input distribution, so they gain no benefit from pre-normalization; on the contrary, since pre-normalization brings unexpected gradient problems, the performance of norm-Sin2-max-shifted and norm-Sin-Softmax decreases slightly. The plots and complete results of the experiment are provided in A.4.
## 5 Conclusion
Through the visualization of attention and gradients extracted from transformer blocks, we show that in the attention mechanism Softmax does lead to the gradient vanishing problem and makes training difficult. To address the problem, we propose a series of periodic alternatives to Softmax, and the experimental results show that Sin-Softmax, Sin2-max-shifted, and norm-Siren-max perform better than Softmax in the attention mechanism. Additionally, we observe that pre-normalization is only a conditional solution, not always a good choice.
In the periodic alternatives, an embedding requiring more attention does not necessarily require a larger value, which makes the generation of the queries and keys freer, and it is hard to say whether this will lead to unexpected problems. This change might affect the representation ability of the model, and we will explore how it arises in future work.
# What is the angular momentum of the bullet
1. Apr 14, 2008
### rasgar
1. The problem statement, all variables and given/known data
A wooden block of mass M resting on a frictionless, horizontal surface is attached to a rigid rod of length L and of negligible mass. The rod is pivoted at the other end. A bullet of mass m traveling parallel to the horizontal surface and perpendicular to the rod with speed v hits the block and becomes embedded in it.
What is the angular momentum of the bullet- block system? (For the following answers, use M for the mass M, m for the mass m, and L for the length .)
2. Relevant equations
L = Iω = rp
3. The attempt at a solution
Well I easily got the solution with the known v (m*v*l), but the question asks for the solution with respect to only the mass of the bullet and the block, and the length of the rod. Since angular momentum has time as a part of its unit, I don't see how I'm supposed to do it. I thought I could find the torque and take the integral with respect to time, but the force of the system isn't known, so I'm completely lost. I think the physics teacher solved this, but I missed class, and there's a test in two days (I can't make it to his office hours either).
Attached: p11-39.gif (problem diagram)
2. Apr 14, 2008
### cepheid
Staff Emeritus
I'm not sure what your issue is, but here's my take on how to approach the problem. Using conservation of momentum, you can figure out the momentum of the system at the very instant of collision. This is the linear momentum with which the bullet+block would move off in a straight line if it weren't attached to the rod. The rod provides centripetal force, so the momentum does change, but only in direction. Its magnitude remains the same as it was at the instant of collision. So, by conservation of momentum:
mv = (m+M)v'
where v' is the velocity of the bullet + block after collision.
From the definition of angular momentum, the magnitude of the angular momentum is given by the the length of the rod times the linear momentum. Since the letter L is being used for the length of the rod, I'll just use J for angular momentum:
J = L*p = L*(M+m)*v' = L*(M+m)*(m/(M+m))*v = L*m*v
3. Apr 14, 2008
### rasgar
"J...=Lmv"
Yea I actually got that
"Well I easily got the solution with the known v (m*v*l)"
The problem is I have to do the problem solely with M, m, and L (I accidentally put lower case L there sorry). Thanks for your response though.
4. Apr 14, 2008
### Staff: Mentor
Your answer is correct--and the answer must incorporate the speed of the bullet. I suspect you're misinterpreting the instructions. (Where does it say that your answer can only have M, m, & L? Did you leave out part of the problem statement?)
5. Apr 14, 2008
### rasgar
Actually, I just realized what my mistake was after the second post I made. The homework I had was online and the response checker was case-sensitive. I made the L capital and got it right. Sorry for the trouble, guys.
# Global patterns and drivers of tree diversity integrated across a continuum of spatial grains
## Abstract
Controversy remains over what drives patterns in the variation of biodiversity across the planet. The resolution is obscured by lack of data and mismatches in their spatial grain (scale), and by grain-dependent effects of the drivers. Here we introduce cross-scale models integrating global data on tree-species richness from 1,336 local forest surveys and 282 regional checklists, enabling the estimation of drivers and patterns of biodiversity across spatial grains. We uncover grain-dependent effects of both environment and biogeographic regions on species richness, with a striking positive effect of Southeast Asia at coarse grain that disappears at fine grains. We show that, globally, biodiversity cannot be attributed purely to environmental or regional drivers, as the regions are environmentally distinct even within a single latitudinal band. Finally, we predict global maps of biodiversity at local (plot-based) and regional grains, identifying areas of exceptional beta-diversity in China, East Africa and North America. By allowing the importance of drivers of diversity to vary with grain in a single model, our approach unifies disparate results from previous studies regarding environmental versus biogeographic predictors of biodiversity, and enables efficient integration of heterogeneous data.
## Main
Why are there fewer than 100 species of trees that live in the millions of km2 of boreal forests in Eurasia and North America1,2, while there can be hundreds of species co-occurring in as little as 50 ha in tropical forests of South America and Asia3? What drives global variation in the numbers of species that live in different places, and where exactly are the places of highest biodiversity? The fundamental scientific appeal of these questions can be traced back at least to Humboldt4, yet understanding biological diversity has taken on new urgency as it faces threats from increasing human pressure. However, despite decades of research and hypotheses proposed5,6,7,8,9, there has been lack of consensus on the determinants of global variations in diversity, and for most taxa the global map of biodiversity is still largely incomplete.
The most important obstacle to answering the fundamental questions about drivers and patterns of biodiversity is a lack of data, especially in places where diversity is thought to be highest10,11. But even in regions and taxa that have been well sampled, the data are a heterogeneous mixture of point observations, survey plots, and regional checklists, all with varying area and sampling protocol11. For example, for trees, there are hundreds of 0.1 ha Gentry forest plots mostly in the New World12, hundreds of 1 ha ForestPlots.net plots throughout tropical forests13, dozens of CTFS-ForestGEO plots of more than 2 ha (www.forestgeo.si.edu), hundreds of published regional checklists14, and hundreds to thousands of other published surveys and checklists scattered throughout the published and grey literature. These together hold key information on the global distribution of tree biodiversity, and there are initiatives that mobilize this information over large scale15, yet the lack of methods to address differences in sampling has so far prevented their integration for the purpose of model-based prediction and inference.
Further, as could be said for many problems in ecology, attempts to map global biodiversity and to assess its potential drivers are severely complicated by the issues of spatial scale16,17,18,19: The most straightforward, but fundamental, issue is that the number of species (S) increases nonlinearly with area20. This is why patterns in the variation of biodiversity from place to place cannot be readily inferred from sampling locations of varying area. However, even when sets of sampling locations do have a constant area (hereafter grain), a spatial pattern of S observed at small grains will usually differ from a pattern observed at large grains21,22,23. Examples include the grain dependence of altitudinal21 and latitudinal24,25 diversity gradients. The reason for this is that beta-diversity (the ratio between fine-grain alpha-diversity and coarse-grain gamma-diversity) typically varies over large geographic extents26. Finally, drivers and predictors of diversity have different associations with S at different grains27,28,29,30. For example, at global and continental extents, the association of S with topography increases with grain in Neotropical birds30 and the association with temperature increases with grain in global vertebrates29 and eastern Asian and North American trees31. Thus, biodiversity should ideally be studied, mapped, and explained at multiple grains22.
Although the abovementioned scaling issues are well known21,27,32,33, methods that explicitly incorporate grain-dependence within statistical models of biodiversity, which would allow cross-grain inference and predictions, are lacking. Furthermore, it has been common to report patterns and drivers of biodiversity at a single grain, resulting in pronounced mismatches of spatial grain among studies, but also offering an opportunity for synthesis. An example is the debate over whether biodiversity is more associated with regional proxy variables for macroevolutionary diversification and historical dispersal limitation, or with ecological drivers that include climatic and other environmental drivers, as well as biotic interactions9,33,34,35,36. Although climate and other ecological factors usually play a strong role (but see ref. 37), studies differ in whether they view residual effects of biogeographic regions on diversity (after accounting for climate and environment), as being weak38,39,40 or strong41,42,43. Even within the same growth form of organisms—trees—there is debate regarding whether environment6,31,44,45,46 or regional history47,48,49,50 are more important in driving global patterns. And yet, these studies are rarely done at a comparable spatial grain, and perhaps not surprisingly, studies from smaller plot-scale analyses6,46 typically conclude a strong role for environmental variation in driving patterns of biodiversity, whereas large-grain analyses49,51 demonstrate a strong role of historical biogeographic processes.
Here we propose a cross-grain approach that allows estimation of the role of contemporary environmental and regional predictors of, and prediction of global patterns in, tree species richness across a continuum of spatial grains, from small forest plots (for example, 0.01 or 0.1 ha) up to entire continents. Our study has three main goals: (1) by explicitly considering spatial grain as a modifier of the influence of ecology versus regional biogeography, we aim to synthesize results among studies, and illustrate how the importance of these processes varies with grain. Apart from the well-known grain-dependent effects of environment, we also focus on the so-far-overlooked grain-dependent effects of biogeographic regions. (2) The novelty of the approach is to model grain-dependence of every predictor (spatial, regional, or ecological) within a single model as having a statistical interaction with area, which enables the integration of an unprecedented volume of heterogeneous data from local surveys and country-wide checklists. Although such interaction has been tested occasionally24,43,52, it has not been applied to both spatial and environmental effects, nor for data integration and cross-grain predictions. (3) Finally, we take the advantage of being able to predict biodiversity patterns at any desired grain and we map the estimates of alpha-, beta-, and gamma-diversity of trees across Earth.
## Results and Discussion
### Macroecological patterns
To explain the observed global variation of tree diversity (Fig. 1), we specified two models that predict S by grain-dependent effects of environmental variables, but differ in the way they model the grain-dependent regional component of biodiversity: model REALM attributes residual variation of S to locations’ membership within a pre-defined biogeographic realm (as in ref. 7), while model SMOOTH estimates the regional imprints in S directly from the data using smooth autocorrelated surfaces. Both models explain more than 90% of the deviance of the data (Supplementary Table 1), both predict a value for S that matches the observed S (Supplementary Fig. 1), and they give good out-of-sample predictive performance (Supplementary Figs. 2 and 3). This is in line with other studies from large geographical extents, where 70–90% model fits are common even for relatively simple climate-based models7,31,46,53,54.
Next, we used model SMOOTH to predict patterns of S and beta-diversity over the entire mainland, on a regular grid of large hexagons of 209,903 km2 (Fig. 2a) and on a grid of local plots of 1 ha (Fig. 2b). On average, for a given 1 ha plot or hexagon, the 95% prediction interval spans 1 order of magnitude around the median predicted S (Supplementary Fig. 4h,j), with highest prediction uncertainty in areas with extreme environments or with no plot data, such as deserts and arctic regions (Supplementary Fig. 4k,l). We predict a pronounced latitudinal gradient of S at both grains (Fig. 2a,b and Supplementary Fig. 5), which matches other empirical studies of trees55, all vascular plants56, and other groups (ref. 57, pages 662–667). However, there are also differences between the patterns at the two grains, particularly in China, East Africa, and southern North America (Fig. 2c). These are regions with exceptionally high beta-diversity and are in the dry tropics and sub-tropics with high topographic heterogeneity: examples include the Ethiopian Highlands and Mexican Sierra Madre ranges, which have sharp environmental gradients and patchy forests, resulting in relatively low local alpha-diversity but high regional gamma-diversity. The exception is the predicted high beta-diversity in China, where the historical component of beta-diversity dominates the effect of environmental gradients (compare Fig. 2c and Fig. 2f). This exception has also been suggested previously31,47,50,58, and is discussed below.
### Grain-dependent effects of region
Any geographic pattern (for example, a gradient or a regionally elevated richness) of S that remains after accounting for the effect of environmental drivers can be seen as a ‘region effect’, potentially reflecting unique diversification history and dispersal limitations of a given region. Although model REALM treats the region effects on S as discrete, while model SMOOTH treats them as continuous, both models reveal similar grain-dependence of these regional effects. At coarse grains (that is area >100 km2), model REALM shows that the regional anomaly of S that is independent of environment is highest in the Indo-Malay region, followed by parts of the Neotropics, Australasia, and Eastern Palaearctic (Fig. 3 and Supplementary Fig. 6). A similar pattern emerges at coarse grains from model SMOOTH, in which particularly China and Central America are hotspots of environmentally independent S (that is, there are strong effects of biogeographic regions) (Fig. 2d). This follows the existing narrative7,50 where tree diversity is typically highest, and anomalous from the climate-driven expectation, in eastern Asia. However, at the smaller plot grain, the regional biogeographic effects are present, but weaker in both the REALM (Fig. 3) and SMOOTH (Fig. 2e) models. Further, the regional effects shift away from the Indo-Malay and the Neotropical regions (REALM model) or China and Central America (SMOOTH model) at the coarse grains towards the Equator, particularly to Australasia, at the plot grain (Figs. 2e and 3).
These results can be viewed through the logic of the species–area relationship (SAR), and its link to alpha-, beta-, and gamma-diversity20,59: if environmental conditions are constant (or statistically controlled for), then S depends only on area and on specific regional history. Since these interact, what emerge are region-dependent SARs in model REALM (Fig. 3), which are equivalent to grain-dependent effects of regions in model SMOOTH (Fig. 2). In both models, what geographically varies is the environmentally independent local S (Fig. 2e) and regional S (Fig. 2d), as well as their ratio (Δ in Fig. 2f), which directly links to the slopes of the relationships in Fig. 3. One way to explain this is through different range dynamics in different parts of the world. Areas with high levels of environmentally independent S at large grains, such as China and Central America, could have historically accumulated species that are spatially segregated with relatively small ranges, for example, by being climate refuges (as in Europe60), or owing to dispersal barriers and/or large-scale habitat heterogeneity50. This would lead to increased regional richness but contribute less to local richness, leading to stronger regional effects at larger grains than at smaller grains, as we observed. An alternative explanation of the pattern would be elevated diversification rates at large grains in China and Central America; however, we think this is unlikely, given that these areas do not exhibit elevated diversification rates in other groups42,61.
We also found pronounced autocorrelation in the residuals of the REALM model at the country grain, but low autocorrelation at both grains in the residuals of model SMOOTH (Supplementary Fig. 8). Residual autocorrelation in S is the spatial structure that was not accounted for by environmental predictors; it can emerge as a result of dispersal barriers or a particular evolutionary history in a given location or region62,63. The autocorrelation in REALM residuals thus indicates that the discrete biogeographical regions (Fig. 3) fail to delineate areas with unique effects on S. These are better derived directly from the data, for example, using the splines in model SMOOTH (Fig. 2d,e). As such, the smoothing not only addresses a prevalent nuisance (that is, biased parameter estimates due to autocorrelation64), but can also be used to delineate the regions relevant for biodiversity more accurately than the use of a priori defined regions.
### Grain-dependent effects of environment
Generally, the signs and magnitudes of the coefficients of environmental predictors (Fig. 4) at the plot grain are in line with those observed elsewhere7. However, as far as we are aware, only Kreft and Jetz43 modelled richness–environment associations as grain-dependent by using the statistical interactions between an environment and area. In our analyses, several of these interaction terms were significant in both models REALM and SMOOTH (Fig. 4). This is in agreement with some previous work29,30,31, but contrasts with Kreft and Jetz43 who detected no interaction between area and environment at the global extent in plants. However, the lack of area by environment interaction in their study might have been due to a limited range of areas (grains) examined. We detected clear grain dependence, supported by both models, in the effects of tree density and gross primary productivity (GPP, a proxy for energy input); both effects decrease with area (Fig. 4). The reason for this is that, as area increases, large parts of barren, arid, and forest-free land are included in the large countries such as Russia, Mongolia, Saudi Arabia, or Sudan, diluting the importance of the total tree density at large grains.
We failed to detect an effect of elevation span at fine grains (probably because the elevation data themselves were coarse-grained; see Supplementary Discussion), but it emerged at coarse grains (Fig. 4), in line with other studies29,30. This suggests that topographic heterogeneity is important over large areas in which clear barriers (mountain ranges and deep valleys) limit colonization and promote diversification65, or that it creates refuges in which species can persist during adverse environmental conditions66. Also note the high uncertainty around the effects of the climate-related variables across grains (Fig. 4). A probable source of this uncertainty is the co-linearity between environmental and regional predictors (see below in 'Regions versus environment'). This prevented us from detecting the grain-dependency of the effect of temperature, although we expected it on the basis of previous studies29,31. Finally, we detected a consistent positive effect of mainland as compared to islands, which is expected67. However, the effect had broad credible intervals across all grains (Fig. 4); this uncertainty is likely caused by our binary definition of islands, by the lack of consideration of distance from mainland, and by the classification of some countries as mainland even though they also overlap islands (see Supplementary Methods and Supplementary Discussion).
### Regions versus environment
We used deviance partitioning68,69 to assess the relative importance of biogeographic regions versus environmental conditions in explaining the variation of S across grains. At the global extent, the independent effects of biogeographic realms strengthened towards coarse grain, from 5% at the plot grain to 20% at the country grain in model REALM (Fig. 5a). In contrast, the variation of S explained uniquely by environmental conditions (around 14%, Fig. 5a) showed little grain dependence. However, and importantly, at both grains, roughly half of the variation of S is explained by an overlap between biogeographic realms and environment, and it is impossible to tease these apart owing to the co-linearity between them. In other words, biogeographic realms also tend to be environmentally distinct (Supplementary Figs. 9 and 10); that is, they are not environmentally similar replicates in different parts of the world (see also ref. 7 for a similar conclusion). The same problem prevails when the Earth is split into two halves and the partitioning is done in each half separately (Fig. 5b,c). This climate–realm co-linearity at the global extent weakens our ability to draw conclusions about the relative importance of contemporary environment versus historical biogeography, as by accounting for environment, we inevitably throw away a large portion of the regional signal, and vice versa. Thus, we caution against interpretations of analyses such as ours and others7,37,38,40,70 that infer the relative magnitude of biogeographic versus environmental effects merely from contemporary observational data.
Given this covariation, we cannot clearly say whether environment or regional effects are more important in driving patterns of richness. We can, however, make statements about the grain dependence of both environment and region, as above. The climate–realm co-linearity is likely responsible for the inflated uncertainty71 around the effects of environmental predictors (Fig. 4) and biogeographic realms (Fig. 3), but there remains enough certainty about the effects of some predictors, such as tree density or GPP (Fig. 4), which are more orthogonal to climate and regions.
To overcome the global co-linearity problem and to better answer the classical question of whether diversity is more influenced by historical or contemporary processes, we suggest the following alternative strategies: (1) analyse smaller subsets of data in which environmental and regional data are less collinear, such as across islands72 or biogeographic boundaries50,73 with similar environments, but different history; (2) use historical data from fossil or pollen records74; (3) use long-term range dynamics or other patterns reconstructed from phylogenies75,76; (4) use predictors reflecting past environmental conditions77,78 or predictors that statistically interact with time79; finally, (5) we see a promise in the emerging use of process-based and mechanistic models in macroecology80,81, which can predict multiple patterns, ideally at multiple grains, and as such can offer a strong test82 of the relative importance of historical biogeography versus contemporary environment in generating biodiversity.
### Conclusions
We have compiled a global dataset on tree species richness, and used it to integrate highly heterogeneous data in a model that contains grain-dependence as well as spatial autocorrelation, and predicts patterns of biodiversity across grains that span 11 orders of magnitude, from local plots to entire continents. This is an improvement of data, methods, and concepts, and importantly, we reveal a critical grain-dependence in both regional and environmental predictors. We propose that this grain-dependence, together with the confounding co-linearity between environment and geography, is the reason why studies comparing the importance of environmental versus historical biogeographic predictors of global diversity patterns have come to disparate conclusions. Studies using smaller-grained data tend to find a strong influence of environment6,46, whereas those that use larger-grained data find a strong effect of historical biogeography49,51. We reconcile this with a grain-explicit analysis and show that smaller-grain (alpha-diversity) patterns are less influenced by regional biogeography than larger-grained (gamma-diversity) patterns. Finally, we suggest that the advantages of having a formal statistical way to directly embrace grain dependence are twofold: not only will it allow ecologists to test grain-explicit theories, but it is precisely the same grain dependence that will allow integration of heterogeneous, messy, and haphazard data from various taxonomic groups, especially the data-deficient ones. This is desperately needed in a field that has restricted its global focus to a small number of well-surveyed taxa.
## Methods
### Data on S at the plot grain
We compiled a global database of tree species richness from 1,933 forest plots; these were taken from published database compilations7,12,13,83,84,85,86 and from national forest inventory surveys87,88,89, and others were extracted manually from primary sources90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132. From this set of plots we then selected only those with unique geographic coordinates, and with data on the number of individual trees, minimum diameter at breast height (DBH), and area of the plot. We made the effort to include only plots that spanned a contiguous area and in which all trees within the plot above the minimum DBH were determined. In cases where there were several plots with the exact same geographic coordinates, we chose the plot with the largest area. If areas were the same, we chose one plot randomly. This left us with 1,336 forest plots for our main analyses. Although all of these plots are in forests, the authors of the primary studies still differ in which individuals are actually determined. For instance, authors may include or exclude lianas. Thus, in the main analyses we included all plots that have the following morphological scope: 'trees', 'woody species', 'trees and palms', 'trees and shrubs', 'trees and lianas', 'all living stems'. In a parallel sensitivity analysis we used a more stringent selection criteria to create a subset of the data (see below).
### Data on S at the country grain
We compiled data on tree species richness of 282 countries and other administrative units (US and Brazilian states, Chinese provinces). We downloaded the data from BONAP taxonomic data center at http://bonap.net/tdc for the United States133, from ref. 134 for the provinces of China, from Flora do Brasil 2020 at http://floradobrasil.jbrj.gov.br135, and from Botanic Gardens Conservation International database GlobalTreeSearch14 (accessed 18 August 2017) for the rest of the world. To download the data from GlobalTreeSearch, we used the Selenium software interfaced through a custom R script. We note that there are more potential data sources that could have been further leveraged to make our dataset even larger, both at the country and the plot grain, and perhaps also at the intermediate grain. However, our priority has been to make the data for this paper open, and thus we use only the easily available open databases and primary published sources.
### Sensitivity to data sources and tree definition
Data sources vary in their definition of what a tree is, and for the main analyses presented here we used an inclusive and broad definition, which gave us the advantage of larger N. To be sure that our results are robust to this definition, and also robust to a potential co-linearity between data sources and biogeographic regions, we performed an analysis on a subset of the data, selected using the following rules: (1) at the plot grain, we used only plots with trees defined as ‘trees' or with DBH ≥1 cm, (2) at the country grain we used USA, Brazil and China as complete spatial units and we did not ‘disaggregate’ them to the smaller administrative regions. This gave us a subset of 1,166 plots and 183 countries. The results obtained from this subset were similar to the results obtained from the full dataset (see figures in https://github.com/petrkeil/global_tree_S/tree/master/Figures/Subset_data_sensitivity_analysis), and thus we consider our results to be robust to data source and tree definition.
### Predictors of species richness
For each plot and each country, we extracted characteristics that are proxies for environmental heterogeneity, energy availability, productivity, climatic limits, climatic stability, insularity, and regional or spatial variables. These are known, or have been hypothesized, to be associated with plant species richness7,43,45,46,136 (Supplementary Methods). Specifically, we calculated the following predictors of species richness: area, latitude and longitude of its centroid, membership in a discrete biogeographical realm, its location on mainland or island, difference between highest and lowest altitude, mean gross primary productivity, mean annual temperature, mean isothermality, precipitation in the driest quarter of the year, and mean precipitation seasonality. For each plot we also noted minimum DBH that was used as a criterion to include tree individuals in a study. All continuous predictors were standardized to 0 mean and unit variance before statistical modelling. Area and tree density were log transformed. See Supplementary Methods for a detailed description of each predictor, its source reference, hypothesized effect on S, and original spatial grain.
### Cross-grain models and grain-dependent effects
Our core approach is that ‘grain dependence’ of an effect of a predictor can be modelled using a statistical interaction between the predictor and area. Specifically, imagine a log–linear relationship between expected mean species richness $$\hat S_i$$ and an environmental predictor xi at site i, defined as $$\log \hat S_i = \alpha + \beta _ix_i$$, where α is the intercept and βi is the slope (effect) that linearly depends on logarithm of area Ai of site i as $$\beta _i = \gamma + \delta \log (A_i)$$. By substitution we get
$$\log \hat S_i = \alpha + x_i\gamma + x_i\log (A_i)\delta$$
(1)
where γ is the grain-independent effect of predictor xi and δ is the effect of the statistical interaction between xi and log(Ai). By estimating the γ and δ coefficients we can then plot the overall effect βi as a function of area (for example, in Fig. 4). Extending this logic, we built statistical models that treat environmental and regional predictors of species richness as grain-dependent. Specifically, we built two models (REALM and SMOOTH) representing the same general idea of grain-dependency, each implementing it in a somewhat different way. These models are not mutually exclusive, but are complementary approaches to the same problem.
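For illustration, equation (1) can be fitted with any log-link negative binomial GLM once the design matrix contains the interaction column $x_i \log(A_i)$. The sketch below is our own Python/statsmodels analogue on simulated data (the authors' analyses used R with the 'mgcv' and 'brms' packages); it is meant only to show how grain dependence enters the model, and all variable names and coefficient values are ours:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
log_A = rng.uniform(-4, 14, n)           # log area, from small plots to whole countries
x = rng.normal(size=n)                   # one standardized environmental predictor

# simulate S with a grain-dependent slope beta_i = gamma + delta * log(A_i)
eta = 1.0 + 0.35 * log_A + (0.5 - 0.03 * log_A) * x
S = rng.negative_binomial(n=5, p=5.0 / (5.0 + np.exp(eta)))

# design matrix: intercept, log(A), x, and the grain interaction x * log(A)
X = sm.add_constant(np.column_stack([log_A, x, x * log_A]))
fit = sm.GLM(S, X, family=sm.families.NegativeBinomial()).fit()
print(fit.params)                        # approximately recovers the simulated coefficients
```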
### Model REALM
This model follows the traditional approach to assessing regional effects on S; that is, variation of S that is not accounted for by environmental predictors can be accounted for by membership in pre-defined discrete geographic regions (as in ref. 7), also known as realms. We extend this idea by assuming that the effect of biogeographic regions interacts with area (that is, grain). That is, there are different SARs at work in each region. These SARs set the mean richness at a given grain, and the environmental variables then predict variation around that mean. Formally, observed species richness Si in the ith plot or country is a negative binomial random variable $$S_i \sim \mathrm{NegBin}(\hat S_i, \theta)$$, where
$$\log \hat S_i = \alpha_j + \sum_{k=1}^{3} \log(A_i)^k \beta_{j,k} + X_i \boldsymbol{\gamma} + X_i \log(A_i) \boldsymbol{\delta}$$
(2)
and where αj are the area-independent effects of the jth region (one of them is the intercept), $$\sum_{k=1}^{3} \log(A_i)^k \beta_{j,k}$$ is the interaction between a third-order polynomial of area A and the jth region; we chose the third-order polynomial to ensure the ability to produce the well-known tri-phasic effect of area20. Xiγ is the term for the area-independent effects of environmental predictors in a matrix X, and $$X_i \log(A_i) \boldsymbol{\delta}$$ is the interaction term between area A and X. Parameters to be estimated are the vectors α, β, γ, δ, and the dispersion parameter θ. If we only had a single predictor x, the model would be specified in R package 'mgcv'137 as gam(S ~ REALM + poly(A,3):REALM + x + x:A, family = 'nb'), where REALM is a factor identifying the regions. We use the negative binomial distribution (specifically, its mean and dispersion parametrization) since it can deal with over-dispersion of the response, it was used in a key single-grain study7 that we wish to contrast with ours, and it allows calculation of Akaike information criterion (AIC) and Bayesian information criterion (BIC). Also note that the interaction terms with log(Ai) are linear; this is an intentional simplification to make the idea presented here clearer, but we suggest that future studies may consider non-linear interaction terms.
### Model SMOOTH
In this model we avoid using discrete biogeographic regions; instead, we use thin-plate spline functions (hereafter, splines)137 of geographic coordinates. This allows us (1) to identify the areas of historically accumulated S directly from the data, freeing us from the need to use pre-defined geographic realms, and (2) to account for spatial autocorrelation in model residuals at the same time64. As above, $$S_i \sim \mathrm{NegBin}(\hat S_i, \theta)$$, but now
$$\log \hat S_i = \alpha + \sum_{k=1}^{3} \log(A_i)^k \boldsymbol{\beta}_k + X_i \boldsymbol{\gamma} + X_i \log(A_i) \boldsymbol{\delta} + s_1(\mathrm{Lat},\mathrm{Lon})\,\mathrm{Plt}_i + s_2(\mathrm{Lat},\mathrm{Lon})\,\mathrm{Cntr}_i$$
(3)
The first difference from the REALM model is that α and β do not vary geographically, but there is a single global species–area relationship (see also Supplementary Fig. 7). The second difference is the spline functions s1 and s2 (each with 14 spline bases), with Plti and Cntri as binary (0 or 1) variables specifying whether an observation i is a plot or a country. If we only had a single predictor x, the model would be specified in R package 'mgcv' as gam(S ~ s(Lat, Lon, by = Plt.or.Cntr, bs = 'sos', k = 14) + poly(A, 3) + x + x:A, family = 'nb'), where Plt.or.Cntr is a factor identifying whether an observation is a plot or a country.
### Null model
To set a baseline for the performance of models REALM and SMOOTH, we also fitted a 'null' model with only the intercept α and the dispersion parameter θ. The model is written as $$S_i \sim \mathrm{NegBin}(\alpha, \theta)$$. The performance (R2, AIC, BIC) of models REALM and SMOOTH was then judged relative to this null model.
### Model fit, diagnostics, and inference
For the initial model assessment, optimizing the number of spline nodes, extraction of the splines, extraction of residual autocorrelation, and for AIC and BIC calculations, we fitted the models using maximum likelihood (gam function in R package ‘mgcv’137). For Bayesian inference and for assessment of uncertainty about model parameters and predictions, we fitted the models using Hamiltonian Monte Carlo (HMC) sampler Stan138, interfaced through R function ‘brm’ (package ‘brms’139) with 3 chains, 3,000 iterations with 1,000 as a warmup, and every 10th iteration kept for inference. For all parameters we used uninformative prior distributions that are the default setting in the ‘brm’ function. Visual check of the HMC chains showed excellent convergence. To measure model fit, we used plots of observed versus predicted values of S, and we also calculated AIC and BIC, which we additionally compared with AIC and BIC of the ‘null’ model with only the constant intercept α (Supplementary Table 1). To assess spatial autocorrelation in species richness and in the residuals of both models, we used spatial correlograms with Moran’s I as a function of geographic distance (Supplementary Fig. 8), with distance bins of 200 km, using correlog function in R package ‘ncf’.
### Global predictions
To demonstrate the ability of our statistical approach to predict patterns of S at any arbitrarily chosen grain, we used model SMOOTH to make predictions in a set of artificially generated plots (each with an area of 1 ha) and hexagons (each with an area of 209,903 km2) distributed at regular distances across the global mainland. We used the R package 'dggridR' to generate both. We used hexagons since they suffer almost no geometrical distortion of their shape due to the geographic projection of Earth. We further eliminated all plots for which at least one environmental variable was unavailable, and hexagons with less than 50% of mainland area, which left us with 9,761 local plots and 620 hexagons (Fig. 2). For each plot and hexagon we extracted the same predictors as for the empirical data, using exactly the same procedures. We then plugged these predictors into the SMOOTH model, generated the expected $$\hat S$$ (see equation (3)), and mapped it across the 1 ha plots (hereafter $$\hat{S}_{\mathrm{plot}}$$ or alpha-diversity) and hexagons ($$\hat{S}_{\mathrm{hex}}$$ or gamma-diversity); we also mapped the ratio gamma/alpha, which is beta-diversity. Finally, we extracted the smooth region effects s2(Lat,Lon) and s1(Lat,Lon) in the hexagons and 1 ha plots, respectively; these are the spline functions from equation (3) evaluated at the geographic coordinates of the centroids of the hexagons or the 1 ha plots.
### Cross-validation and external validation of the predictions
To assess predictive performance of the models, we employed two approaches: first, we used fourfold cross-validation in which the original dataset was split into 4 folds (fractions) with approximately equal N; each of these folds then served as a test dataset, which was compared with predictions of a model fitted using the other 3/4 of the data (the training dataset). Instead of doing a computationally intensive Bayesian cross-validation, we performed the cross-validation using maximum likelihood model fitting, and thus we report no prediction intervals; we report results of this exercise as plots of observed versus mean predicted richness (Supplementary Fig. 2).
Second, we performed an external validation of the coarse-grained predictions against an independently assembled dataset that was not used in model training, and comes as a fundamentally different data type: point observations. Specifically, we amassed data on point observations from three databases: (1) the RAINBIO database (http://rainbio.cesab.org/) of African vascular plant distributions140, (2) the BIEN 3+ database87,140,141,142,143,144,145,146,147,148,149 (http://bien.nceas.ucsb.edu/bien/) for the New World plant observations, accessed through the 'BIEN' R package150, and (3) the high-resolution EU-Forest database of tree occurrences in Europe151, although records in the latter come from standardized surveys, rather than haphazard observations. We restricted the RAINBIO records to only those with habit = 'tree', and BIEN records to those with whole plant woodiness = 'woody'. Based on these records, we then calculated the number of observations (records) and species richness in the 209,903 km2 hexagons. We then excluded all under-sampled grid cells, keeping only those with at least 4,000, 10,000, and 1,000 records per hexagon in RAINBIO, BIEN, and EU-Forest, respectively. The observed richness in these hexagons was then plotted against the predictions (and their full Bayesian prediction intervals) of model SMOOTH.
As requested by the BIEN data use policy, we also acknowledge the herbaria that contributed data to this work: FCO, UNEX, LPB, AD, CVRD, FURB, IAC, IB, INPA, IPA, MBML, UBC, UESC, UFMA, UFRJ, UFRN, UFS, ULS, US, USP, RB, TRH, ZMT, BRIT, MO, NCU, NY, TEX, U, UNCC, A, AAU, GH, AS, ASU, BAI, B, BA, BAA, BAB, BACP, BAF, BC, BCRU, BG, BH, SEV, BM, MJG, BOUM, BR, C, CANB, CAS, CAY, CEN, CHR, CICY, CIMI, COA, COAH, CP, COL, CONC, CORD, CRAI, CU, CS, CTES, CTESN, DAO, DAV, DS, E, ENCB, ESA, F, UVIC, FLAS, FR, FTG, FUEL, G, GB, GLM, K, GZU, HAL, HAMAB, HAST, HBG, HBR, HO, HRP, HSS, HU, HUSA, IBUG, ICN, IEB, ILL, FCQ, ABH, INEGI, UCSB, ISU, SD, JUA, ECON, USF, TALL, CATA, KSTC, LAGU, KU, LA, GMDRC, LD, LEB, LI, LIL, CNH, MACF, LL, LOJA, LP, LPAG, MGC, LPS, IRVC, JOTR, LSU, DBG, HSC, MELU, NZFRI, M, MA, CSUSB, MB, MBM, UCSC, UCS, JBGP, OBI, MCNS, ICESI, MEL, MEN, TUB, MERL, MEXU, FSU, MG, MICH, BABY, SCFS, SACT, JROH, SBBG, SJSU, MNHM, MNHN, SDSU, MOR, MSC, SFV, CNS, JEPS, CIB, VIT, MU, PGM, MVM, PASA, BOON, ND, NE, NHM, NMB, NMSU, NSW, O, CHSC, CHAS, CDA, OSC, P, UPS, SGO, PH, SI, POM, PY, QMEX, TROM, RM, RSA, S, SALA, SANT, SNM, SP, SRFA, TAIF, TU, UADY, UAM, UAS, UB, UC, UCR, UEC, UFG, UFMT, UJAT, ULM, UNM, UNR, UT, UTEP, VAL, VEN, W, WAG, WELT, WIS, WTU, WU, ZT, CUVC, AAS, BHCB, PERTH.
### Partitioning of deviance
To estimate the relative effects of contemporary environment versus biogeographic regions, we used partitioning of deviance68,69, an approach related to variance partitioning152. Specifically, the deviance from the null model with no predictors is partitioned into (1) a fraction explained by environmental variables and their interaction with area, (2) a fraction explained by region effects represented by biogeographic realms and their interaction with area, (3) their overlap (caused by co-linearity between environment and realms), and (4) their independent (non-overlapping) effects. We used only model REALM to do the partitioning, since it does not contain area (grain) as a standalone term, which makes the partitioning easier to interpret in terms of the purely environmental versus regional fraction. We did the partitioning at the global extent (using data from all biogeographic realms), but also for two hemispheric subsets in an attempt to reduce the co-linearity between realms and environment: (1) the Nearctic and Palaearctic realms, which represent the boreal, temperate and sub-tropical realms of the northern hemisphere, and (2) the Neotropic, Afrotropic, Indo-Malay and Australasian realms, which represent the sub-tropics and tropics around the equator.
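Schematically, the partitioning reduces to a few arithmetic identities on explained deviance. The following sketch (our simplification with hypothetical deviance values; the actual analysis used the fitted REALM models) shows how the unique and overlapping fractions are computed:

```python
def partition_deviance(D0, D_env, D_realm, D_full):
    def explained(D):                    # fraction of the null deviance explained
        return 1.0 - D / D0
    unique_env = explained(D_full) - explained(D_realm)    # environment after realms
    unique_realm = explained(D_full) - explained(D_env)    # realms after environment
    overlap = explained(D_full) - unique_env - unique_realm
    return unique_env, unique_realm, overlap

# hypothetical residual deviances, for illustration only
print(partition_deviance(D0=1000.0, D_env=400.0, D_realm=450.0, D_full=300.0))
```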
### Reporting Summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
## Data availability
All data and R codes used for the analyses are available under CC-BY 4.0 license in a GitHub repository at https://github.com/petrkeil/global_tree_S, which is also mirrored at figshare at https://figshare.com/articles/global_tree_S/7461509. Please note that if the data on species richness are reused, the original data sources should be credited.
## References
1. Fine, P. V. A. & Ree, R. H. Evidence for a time-integrated species-area effect on the latitudinal gradient in tree diversity. Am. Nat. 168, 796–804 (2006).
2. Frodin, D. G. Guide to Standard Floras of the World (Cambridge Univ. Press, Cambridge, 2001).
3. Losos, E. & Leigh, E. G. Tropical Forest Diversity and Dynamism (Univ. of Chicago Press, Chicago, 2004).
4. Hawkins, B. A. Ecology's oldest pattern? Trends Ecol. Evol. 16, 470 (2001).
5. Storch, D., Bohdalková, E. & Okie, J. The more-individuals hypothesis revisited: the role of community abundance in species richness regulation and the productivity-diversity relationship. Ecol. Lett. 21, 920–937 (2018).
6. Currie, D. J. et al. Predictions and tests of climate-based hypotheses of broad-scale variation in taxonomic richness. Ecol. Lett. 7, 1121–1134 (2004).
7. Ricklefs, R. E. & He, F. Region effects influence local tree species diversity. Proc. Natl Acad. Sci. USA 113, 674–679 (2016).
8. Wiens, J. J. et al. Niche conservatism as an emerging principle in ecology and conservation biology. Ecol. Lett. 13, 1310–1324 (2010).
9. Rabosky, D. L. & Hurlbert, A. H. Species richness at continental scales is dominated by ecological limits. Am. Nat. 185, 572–583 (2015).
10. Meyer, C., Kreft, H., Guralnick, R. & Jetz, W. Global priorities for an effective information basis of biodiversity distributions. Nat. Commun. 6, 8221 (2015).
11. Jetz, W., McPherson, J. M. & Guralnick, R. P. Integrating biodiversity distribution knowledge: toward a global map of life. Trends Ecol. Evol. 27, 151–159 (2012).
12. Phillips, O. L. & Miller, J. S. Global Patterns of Plant Diversity: Alwyn H. Gentry's Forest Transect Data Set (Missouri Botanical Garden Press, St. Louis, 2002).
13. Sullivan, M. et al. Diversity and carbon storage across the tropical forest biome. Sci. Rep. 7, 39102 (2017).
14. GlobalTreeSearch Online Database (BCGI, 2017); https://www.bgci.org/global_tree_search.php
15. Enquist, B. J., Condit, R., Peet, R. K., Schildhauer, M. & Thiers, B. M. Cyberinfrastructure for an integrated botanical information network to investigate the ecological impacts of global climate change on plant biodiversity. PeerJ Preprints 4, e2615v2 (2016).
16. Levin, S. A. Multiple scales and the maintenance of biodiversity. Ecosystems 3, 498–506 (2000).
17. Chave, J. The problem of pattern and scale in ecology: what have we learned in 20 years? Ecol. Lett. 16, 4–16 (2013).
18. Chase, J. M. Spatial scale resolves the niche versus neutral theory debate. J. Veg. Sci. 25, 319–322 (2014).
19. Leibold, M. A. & Chase, J. M. Metacommunity Ecology (Princeton Univ. Press, Princeton, 2017).
20. Storch, D. The theory of the nested species–area relationship: geometric foundations of biodiversity scaling. J. Veg. Sci. 27, 880–891 (2016).
21. Rahbek, C. The role of spatial scale and the perception of large-scale species-richness patterns. Ecol. Lett. 8, 224 (2005).
22. Rahbek, C. & Graves, G. R. Detection of macro-ecological patterns in South American hummingbirds is affected by spatial scale. Proc. R. Soc. B 267, 2259–2265 (2000).
23. Chase, J. M. & Knight, T. M. Scale-dependent effect sizes of ecological drivers on biodiversity: why standardised sampling is not enough. Ecol. Lett. 16, 17–26 (2013).
24. Blowes, S. A., Belmaker, J. & Chase, J. M. Global reef fish richness gradients emerge from divergent and scale-dependent component changes. Proc. R. Soc. B 284, 20170947 (2017).
25. Kraft, N. J. B. et al. Disentangling the drivers of β diversity along latitudinal and elevational gradients. Science 333, 1755–1758 (2011).
26. Buckley, L. B. & Jetz, W. Linking global turnover of species and environments. Proc. Natl Acad. Sci. USA 105, 17836–17841 (2008).
27. Shmida, A. & Wilson, M. V. Biological determinants of species diversity. J. Biogeogr. 12, 1–20 (1985).
28. Böhning-Gaese, K. Determinants of avian species richness at different spatial scales. J. Biogeogr. 24, 49–60 (1997).
29. Belmaker, J. & Jetz, W. Cross-scale variation in species richness–environment associations. Glob. Ecol. Biogeogr. 20, 464–474 (2011).
30. Rahbek, C. & Graves, G. R. Multiscale assessment of patterns of avian species richness. Proc. Natl Acad. Sci. USA 98, 4534–4539 (2001).
31. Wang, Z., Brown, J. H., Tang, Z. & Fang, J. Temperature dependence, spatial scale, and tree species diversity in eastern Asia and North America. Proc. Natl Acad. Sci. USA 106, 13388–13392 (2009).
32. Whittaker, R. J., Willis, K. J. & Field, R. Scale and species richness: towards a general, hierarchical theory of species diversity. J. Biogeogr. 28, 453–470 (2001).
33. Ricklefs, R. E. Intrinsic dynamics of the regional community. Ecol. Lett. 18, 497–503 (2015).
34. Vázquez-Rivera, H. & Currie, D. J. Contemporaneous climate directly controls broad-scale patterns of woody plant diversity: a test by a natural experiment over 14,000 years. Glob. Ecol. Biogeogr. 24, 97–106 (2015).
35. Fine, P. V. A. Ecological and evolutionary drivers of geographic variation in species diversity. Annu. Rev. Ecol. Evol. Syst. 46, 369–392 (2015).
36. Harmon, L. J. & Harrison, S. Species diversity is dynamic and unbounded at local and continental scales. Am. Nat. 185, 584–593 (2015).
37. Wiens, J. J., Pyron, R. A. & Moen, D. S. Phylogenetic origins of local-scale diversity patterns and the causes of Amazonian megadiversity. Ecol. Lett. 14, 643–652 (2011).
38. Hawkins, B. A., Porter, E. E. & Diniz-Filho, J. A. F. Productivity and history as predictors of the latitudinal diversity gradient of terrestrial birds. Ecology 84, 1608–1623 (2003).
39. Algar, A. C., Kerr, J. T. & Currie, D. J. Evolutionary constraints on regional faunas: whom, but not how many. Ecol. Lett. 12, 57–65 (2009).
40. Dunn, R. R. et al. Climatic drivers of hemispheric asymmetry in global patterns of ant species richness. Ecol. Lett. 12, 324–333 (2009).
41. Araújo, M. B. et al. Quaternary climate changes explain diversity among reptiles and amphibians. Ecography 31, 8–15 (2008).
42. Belmaker, J. & Jetz, W. Relative roles of ecological and energetic constraints, diversification rates and region history on global species richness gradients. Ecol. Lett. 18, 563–571 (2015).
43. Kreft, H. & Jetz, W. Global patterns and determinants of vascular plant diversity. Proc. Natl Acad. Sci. USA 104, 5925–5930 (2007).
44. Currie, D. J. & Paquin, V. Large-scale biogeographical patterns of species richness of trees. Nature 329, 326 (1987).
45. Francis, A. P. & Currie, D. J. Global patterns of tree species richness in moist forests: another look. Oikos 81, 598–602 (1998).
46. Šímová, I. et al. Global species–energy relationship in forest plots: role of abundance, temperature and species climatic tolerances. Glob. Ecol. Biogeogr. 20, 842–856 (2011).
47. Latham, R. & Ricklefs, R. E. Global patterns of tree species richness in moist forests: energy-diversity theory does not account for variation in species richness. Oikos 67, 325–333 (1993).
48. Ricklefs, R. E., Latham, R. E. & Qian, H. Global patterns of tree species richness in moist forests: distinguishing ecological influences and historical contingency. Oikos 86, 369–373 (1999).
49. Qian, H., Wiens, J. J., Zhang, J. & Zhang, Y. Evolutionary and ecological causes of species richness patterns in North American angiosperm trees. Ecography 38, 241–250 (2015).
50. 50.
Qian, H. & Ricklefs, R. E. Large-scale processes and the Asian bias in species diversity of temperate plants. Nature 407, 180–182 (2000).
51. 51.
Ricklefs, R. E., Qian, H. & White, P. S. The region effect on mesoscale plant species richness between eastern Asia and eastern North America. Ecography 27, 129–136 (2004).
52. 52.
Lyons, S. K. & Willig, M. R. A hemispheric assessment of scale dependence in latitudinal gradients of species richness. Ecology 80, 2483–2491 (1999).
53. 53.
O’Brien, E. M., Field, R. & Whittaker, R. J. Climatic gradients in woody plant (tree and shrub) diversity: water-energy dynamics, residual variation, and topography. Oikos 89, 588–600 (2000).
54. 54.
Field, R., O’Brien, E. M. & Whittaker, R. J. Global models for predicting woody plant richness from climate: development and evaluation. Ecology 86, 2263–2277 (2005).
55. 55.
Brown, J. H. Macroecology (Univ. of Chicago Press, Chicago, 1995).
56. 56.
Mutke, J. & Barthlott, W. Patterns of vascular plant diversity at continental to global scale. Biol. Skrift. 55, 521–538 (2005).
57. 57.
Lomolino, M. V., Riddle, B. R., Whittaker, R. J. & Brown, J. H. Biogeography (Sinauer Associates, Sunderland, 2010).
58. 58.
Qian, H. A comparison of the taxonomic richness of temperate plants in East Asia and North America. Am. J. Bot. 89, 1818–1825 (2002).
59. 59.
Crist, T. O. & Veech, J. A. Additive partitioning of rarefaction curves and species-area relationships: unifying alpha-, beta- and gamma-diversity with sample size and habitat area. Ecol. Lett. 9, 923–932 (2006).
60. 60.
Svenning, J.-C. & Skov, F. Limited filling of the potential range in European tree species: limited range filling in European trees. Ecol. Lett. 7, 565–573 (2004).
61. 61.
Jansson, R. & Davies, T. J. Global variation in diversification rates of flowering plants: energy vs. climate change. Ecol. Lett. 11, 173–183 (2007).
62. 62.
Legendre, P. Spatial autocorrelation: trouble or new paradigm? Ecology 74, 1659–1673 (1993).
63. 63.
Dormann, C. F. et al. Methods to account for spatial autocorrelation in the analysis of species distributional data: a review. Ecography 30, 609–628 (2007).
64. 64.
Dormann, C. F. Effects of incorporating spatial autocorrelation into the analysis of species distribution data. Glob. Ecol. Biogeogr. 16, 129–138 (2007).
65. 65.
Quintero, I. & Jetz, W. Global elevational diversity and diversification of birds. Nature 555, 246–250 (2018).
66. 66.
Stein, A., Gerstner, K. & Kreft, H. Environmental heterogeneity as a universal driver of species richness across taxa, biomes and spatial scales. Ecol. Lett. 17, 866–880 (2014).
67. 67.
MacArthur, R. H. & Wilson, E. O. The Theory of Island Biogeography (Princeton Univ. Press, Princeton, 1967).
68. 68.
Carrete, M. et al. Habitat, human pressure, and social behavior: Partialling out factors affecting large-scale territory extinction in an endangered vulture. Biol. Conserv. 136, 143–154 (2007).
69. 69.
Randin, C. F. et al. Climate change and plant distribution: local models predict high‐elevation persistence. Glob. Change Biol. 15, 1557–1569 (2009).
70. 70.
White, E. P. & Hurlbert, A. H. The combined influence of the local environment and regional enrichment on bird species richness. Am. Nat. 175, E35–E43 (2010).
71. 71.
Dormann, C. F. et al. co-linearity: a review of methods to deal with it and a simulation study evaluating their performance. Ecography 36, 27–46 (2013).
72. 72.
Rominger, A. J. et al. Community assembly on isolated islands: macroecology meets evolution. Glob. Ecol. Biogeogr. 25, 769–780 (2016).
73. 73.
Swenson, N. G. et al. Constancy in functional space across a species richness anomaly. Am. Nat. 187, E83–E92 (2016).
74. 74.
Šizling, A. L. et al. Can people change the ecological rules that appear general across space? Glob. Ecol. Biogeogr. 25, 1072–1084 (2016).
75. 75.
Quintero, I., Keil, P., Jetz, W. & Crawford, F. W. Historical biogeography using species geographical ranges. Syst. Biol. 64, 1059–1073 (2015).
76. 76.
Arias, J. S. An event model for phylogenetic biogeography using explicitly geographical ranges. J. Biogeogr. 44, 2225–2235 (2017).
77. 77.
Hawkins, B. A. & Porter, E. E. Relative influences of current and historical factors on mammal and bird diversity patterns in deglaciated North America: climate, ice and diversity. Glob. Ecol. Biogeogr. 12, 475–481 (2003).
78. 78.
Sandel, B. et al. The influence of late quaternary climate-change velocity on species endemism. Science 334, 660–664 (2011).
79. 79.
Jetz, W. & Fine, P. V. A. Global gradients in vertebrate diversity predicted by historical area-productivity dynamics and contemporary environment. PLoS Biol. 10, e1001292 (2012).
80. 80.
Cabral, J. S., Valente, L. & Hartig, F. Mechanistic simulation models in macroecology and biogeography: state-of-art and prospects. Ecography 40, 267–280 (2017).
81. 81.
Connolly, S. R., Keith, S. A., Colwell, R. K. & Rahbek, C. Process, mechanism, and modeling in macroecology. Trends Ecol. Evol. 32, 835–844 (2017).
82. 82.
McGill, B. Strong and weak tests of macroecological theory. Oikos 102, 679–685 (2003).
83. 83.
Coelho de Souza, F. et al. Evolutionary heritage influences Amazon tree ecology. Proc. R. Soc. B 283, 20161587 (2016).
84. 84.
Phillips, O. L. et al. Efficient plot-based floristic assessment of tropical forests. J. Trop. Ecol. 19, 629–645 (2003).
85. 85.
Ramesh, B. R. et al. Forest stand structure and composition in 96 sites along environmental gradients in the central Western Ghats of India. Ecology 91, 3118–3118 (2010).
86. 86.
Myers, J. A., Chase, J. M., Crandall, R. M. & Jiménez, I. Disturbance alters beta-diversity but not the relative importance of community assembly mechanisms. J. Ecol. 103, 1291–1299 (2015).
87. 87.
US Department of Agriculture. Forest Inventory and Analysis – Fiscal Year 2016 Business Report (US Department of Agriculture, Washington, D.C., 2016).
88. 88.
De Natale, F. et al. Inventario Nazionale delle Foreste e dei Serbatoi Forestali di Carbonio (Ispettorato Generale del Corpo Forestale dello Stato, CRA-ISAFA, Trento, 2005).
89. 89.
Institut national de l’information géographique et forestière. French National Forest Inventory (FNFI) (IGN, Saint-Mandé, 2017); http://inventaire-forestier.ign.fr/
90. 90.
Abbott, I. Comparisons of spatial pattern, structure, and tree composition between virgin and cut-over jarrah forest in Western Australia. For. Ecol. Manag. 9, 101–126 (1984).
91. 91.
Adam, J. H. Changes in forest community structures of tropical montane rain forest on the slope of Mt. Trus Madi in Sabah, Malaysia. J. Trop. For. Sci. 13, 76–92 (2001).
92. 92.
Addo-Fordjour, P., Obeng, S., Anning, A. & Addo, M. Floristic composition, structure and natural regeneration in a moist semi-deciduous forest following anthropogenic disturbances and plant invasion. Int. J. Biodiv. Conserv. 1, 21–37 (2009).
93. 93.
Adekunle, V. A. J. Conservation of tree species diversity in tropical rainforest ecosystem of South-West Nigeria. J. Trop. For. Sci. 18, 91–101 (2006).
94. 94.
Ansley, S. J.-A. & Battles, J. J. Forest composition, structure, and change in an old-growth mixed conifer forest in the northern Sierra Nevada. J. Torrey Bot. Soc. 125, 297–308 (1998).
95. 95.
Beals, E. W. The remnant cedar forests of Lebanon. J. Ecol. 53, 679–694 (1965).
96. 96.
Bonino, E. E. & Araujo, P. Structural differences between a primary and a secondary forest in the Argentine Dry Chaco and management implications. For. Ecol. Manag. 206, 407–412 (2005).
97. 97.
Cairns, M. A., Olmsted, I., Granados, J. & Argaez, J. Composition and aboveground tree biomass of a dry semi-evergreen forest on Mexico’s Yucatan Peninsula. For. Ecol. Manag. 186, 125–132 (2003).
98. 98.
Cao, M. & Zhang, J. Tree species diversity of tropical forest vegetation in Xishuangbanna, SW China. Biodivers. Conserv. 6, 995–1006 (1997).
99. 99.
Cheng-Yang, Z., Zeng-Li, L. I. U. & Jing-Yun, F. Tree species diversity along latitudinal gradient on southeastern and northwestern slopes of Mt. Huanggang, Wuyi Mountains, Fujian, China. Biodivers. Sci. 12, 63–74 (2004).
100. 100.
Davis, M. A., Curran, C., Tietmeyer, A. & Miller, A. Dynamic tree aggregation patterns in a species-poor temperate woodland disturbed by fire. J. Veg. Sci. 16, 167–174 (2005).
101. 101.
Do, T. V. et al. Effects of micro-topographies on stand structure and tree species diversity in an old-growth evergreen broad-leaved forest, southwestern Japan. Glob. Ecol. Conserv. 4, 185–196 (2015).
102. 102.
Eichhorn, M. Boreal forests of Kamchatka: structure and composition. Forests 1, 154–176 (2010).
103. 103.
Enoki, T. Microtopography and distribution of canopy trees in a subtropical evergreen broad-leaved forest in the northern part of Okinawa Island, Japan. Ecol. Res. 18, 103–113 (2003).
104. 104.
Eshete, A., Sterck, F. & Bongers, F. Diversity and production of Ethiopian dry woodlands explained by climate- and soil-stress gradients. For. Ecol. Manag. 261, 1499–1509 (2011).
105. 105.
Fashing, P. J., Forrestel, A., Scully, C. & Cords, M. Long-term tree population dynamics and their implications for the conservation of the Kakamega Forest, Kenya. Biodivers. Conserv. 13, 753–771 (2004).
106. 106.
Graham, A. W. The CSIRO Rainforest Permanent Plots of North Queensland - Site, Structural, Floristic and Edaphic Descriptions (CSIRO and the Cooperative Research Centre for Tropical Rainforest Ecology and Management, Rainforest CRC, Cairns, 2006).
107. 107.
Hirayama, K. & Sakimoto, M. Spatial distribution of canopy and subcanopy species along a sloping topography in a cool‐temperate conifer‐hardwood forest in the snowy region of Japan. Ecol. Res. 18, 443–454 (2003).
108. 108.
Jing-Yun, F., Yi-De, L. I., Biao, Z. H. U., Guo-Hua, L. I. U. & Guang-Yi, Z. Community structures and species richness in the montane rain forest of Jianfengling, Hainan Island, China. Biodivers. Sci. 12, 29–43 (2004).
109. 109.
Kohira, M., Ninomiya, I., Ibrahim, A. Z. & Latiff, A. Diversity, diameter structure and spatial pattern of trees in semi-evergreen rain forest of Langkawi island, Malaysia. J. Trop. For. Sci. 13, 460–476 (2001).
110. 110.
Kohyama, T. Tree size structure of stands and each species in primary warm-temperate rain forests of Southern Japan. Bot. Mag. Tokyo 99, 267–279 (1986).
111. 111.
Krishnamurthy, Y. L. et al. Vegetation structure and floristic composition of a tropical dry deciduous forest in Bhadra Wildlife Sanctuary, Karnataka, India. Trop. Ecol. 51, 235–246 (2010).
112. 112.
Lalfakawma, Sahoo, U., Roy, S., Vanlalhriatpuia, K. & Vanalalhluna, P. C. Community composition and tree population structure in undisturbed and disturbed tropical semi-evergreen forest stands of North-East India. Appl. Ecol. Env. Res. 7, 303–318 (2010).
113. 113.
Linder, P., Elfving, B. & Zackrisson, O. Stand structure and successional trends in virgin boreal forest reserves in Sweden. For. Ecol. Manag. 98, 17–33 (1997).
114. 114.
Lopes, C. G. R., Ferraz, E. M. N. & Araújo, EdeL. Physiognomic-structural characterization of dry- and humid-forest fragments (Atlantic Coastal Forest) in Pernambuco State, NE Brazil. Plant Ecol. 198, 1–18 (2008).
115. 115.
Lü, X.-T., Yin, J. & Tang, J.-W. Structure, tree species diversity and composition of tropical seasonal rainforests in Xishuangbanna, South-West China. J. Trop. For. Sci. 22, 260–270 (2010).
116. 116.
Malizia, A. & Grau, R. Liana–host tree associations in a subtropical montane forest of north-western Argentina. J. Trop. Ecol. 22, 331–339 (2006).
117. 117.
Maycock, F. P., Guzik, J., Jankovic, J., Shevera, M. & Carleton, J. T. Composition, structure and ecological aspects of mesic old growth Carpathian deciduous forests of Slovakia, southern Poland and the western Ukraine. Fragm. Flor. Geobot. 45, 281–321 (2000).
118. 118.
Nagel, A. T., Svoboda, M., Rugani, T. & Diaci, J. Gap regeneration and replacement patterns in an old-growth Fagus–Abies forest of Bosnia–Herzegovina. Plant. Ecol. 208, 307–318 (2010).
119. 119.
Namikawa, K., Matsui, T., Kobayashi, M., Goto, R. & Kuramoto, S. Initial establishment and regeneration processes of an outlying isolated Fagus crenata Blume forest stand in the northernmost boundary of its range in Hokkaido, northern Japan. Plant. Ecol. 207, 161–174 (2010).
120. 120.
Narayanan, A. & Parthasarathy, N. Biodiversity inventory of trees in a large scale permanent plot of tropical evergreen forest at Varagaliar. Anamalais, Western Ghats, India. Biodivers. Conserv. 8, 1533–1554 (1999).
121. 121.
Popradit, A. et al. Anthropogenic effects on a tropical forest according to the distance from human settlements. Sci. Rep. 5, 14689 (2015).
122. 122.
Round, P., Pierce, A., Sankamethawee, W. & Gale, G. The Mo Singto forest dynamics plot, Khao Yai National Park, Thailand. Nat. Hist. Bull. Siam Soc. 57, 57–80 (2011).
123. 123.
Sanchez, M., Pedroni, F., Eisenlohr, P. V. & Oliveira-Filho, A. T. Changes in tree community composition and structure of Atlantic rain forest on a slope of the Serra do Mar range, southeastern Brazil, from near sea level to 1000m of altitude. Flora 208, 184–196 (2013).
124. 124.
Sawada, H., Ohkubo, T., Kaji, M. & Oomura, K. Spatial distribution and topographic dependence of vegetation types and tree populations of natural forests in the Chichibu Mountains, central Japan. J. Japan. Forest Soc. 87, 293–303 (2005).
125. 125.
Sheil, D. & Salim, A. Forest tree persistence, elephants, and stem scars. Biotropica 36, 505–521 (2004).
126. 126.
Shu-Qing, Z. et al. Structure and species diversity of boreal forests in Mt. baikalu, huzhong area, daxing’an mountains, northeast china. Biodivers. Sci. 12, 182–189 (2004).
127. 127.
Splechtna, B. E., Gratzer, G. & Black, B. A. Disturbance history of a European old-growth mixed-species forest—A spatial dendro-ecological analysis. J. Veg. Sci. 16, 511–522 (2005).
128. 128.
Szwagrzyk, J. & Gazda, A. Above-ground standing biomass and tree species diversity in natural stands of Central Europe. J. Veg. Sci. 18, 555–562 (2007).
129. 129.
Wu, X.-P., Zhu, B. & Zhao, S.-Q. Comparison of community structure and species diversity of mixed forests of deciduous broad-leaved tree and Korean pine in Northeast China. Biodivers. Sci. 12, 174–181 (2004).
130. 130.
Wusheng, X., Tao, D., Shihong, L. & Li, X. A comparison of tree species diversity in two subtropical forests, Guangxi, Southwest China. J. Res. Ecol. 6, 208–216 (2015).
131. 131.
Yamada, I. Forest ecological studies of the montane forest of Mt. Pangrango, West Java: II. Stratification and floristic composition of the forest vegetation of the higher part of Mt. Pangrango. South East Asian Studies 13, 513–534 (1976).
132. 132.
Yasuoka, H. The variety of forest vegetations in south-eastern Cameroon, with special reference to the availability of wild yams for the forest hunter-gatherers. Afr. Study Monogr. 30, 89–119 (2008).
133. 133.
Kartesz, J. T. The Biota of North America Program (BONAP) (North American Plant Atlas, Chapel Hill, 2015).
134. 134.
Qian, H. Environmental determinants of woody plant diversity at a regional scale in China. PLoS. ONE 8, e75832 (2013).
135. 135.
Forzza, R. C. et al. Flora do Brazil 2020 (Jardim Botânico do Rio de Janeiro, Rio de Janeiro, 2017); http://floradobrasil.jbrj.gov.br/
136. 136.
Liang, J. et al. Positive biodiversity-productivity relationship predominant in global forests. Science 354, aaf8957 (2016).
137. 137.
Wood, S. N. Generalized Additive Models: an Introduction with R (CRC Press/Taylor & Francis Group, 2017).
138. 138.
Carpenter, B. et al. Stan: a probabilistic programming language. J. Stat. Softw. 76, https://doi.org/10.18637/jss.v076.i01 (2017).
139. 139.
Bürkner, P.-C. brms: an R package for Bayesian multilevel models using Stan. J. Stat. Softw. 80, https://doi.org/10.18637/jss.v080.i01 (2017).
140. 140.
Dauby, G. et al. RAINBIO: a mega-database of tropical African vascular plants distributions. PhytoKeys 74, 1–18 (2016).
141. 141.
Anderson-Teixeira, K. J. et al. CTFS-ForestGEO: a worldwide network monitoring forests in an era of global change. Glob. Change Biol. 21, 528–549 (2015).
142. 142.
DeWalt, S. J., Bourdy, G., Chavez de Michel, L. R. & Quenevo, C. Ethnobotany of the Tacana: Quantitative inventories of two permanent plots of Northwestern Bolivia. Econ. Bot. 53, 237–260 (1999).
143. 143.
Enquist, B. & Boyle, B. SALVIAS—the SALVIAS vegetation inventory database. Biodivers. Ecol. 4, 288–288 (2012).
144. 144.
Fegraus, E. Tropical ecology assessment and monitoring network (TEAMNetwork). Biodivers. Ecol. 4, 287–287 (2012).
145. 145.
Oliveira-filho, A. T. NeoTropTree, Flora Arbórea da Regiāo Neotropical: um Banco de Dados Envolvendo Biogeografia, Diversidade e Conservaçāo (Universidade Federal de Minas Gerais, Belo Horizonte, 2017).
146. 146.
Peet, R. K. et al. Vegetation-plot database of the Carolina Vegetation Survey. Biodivers. Ecol. 4, 243–253 (2012).
147. 147.
Peet, R. K., Lee, M. T., Jennings, M. D. & Faber-Langendoen, D. VegBank: a permanent, open-access archive for vegetation plot data. Biodivers. Ecol. 4, 233–241 (2012).
148. 148.
Sosef, M. S. M. et al. Exploring the floristic diversity of tropical Africa. BMC Biol. 15, 15 (2017).
149. 149.
Canhos, V. P. et al. Rede speciesLink: avaliação 2006 (Fapesp, São Paulo, 2006); http://splink.cria.org.br
150. 150.
Maitner, B. S. et al. The BIEN R package: A tool to access the Botanical Information and Ecology Network (BIEN)database. Meth. Eco. Evo. 9, 373–379 (2018).
151. 151.
Mauri, A., Strona, G. & San-Miguel-Ayanz, J. EU-Forest, a high-resolution tree occurrence dataset for Europe. Sci. Data 4, 160123 (2017).
152. 152.
Borcard, D., Legendre, P. & Drapeau, P. Partialling out the spatial component of ecological variation. Ecology 73, 1045–1055 (1992).
## Acknowledgements
We thank D. Craven and I. Šímová for valuable advice, and H. Kreft, J. Coyle, R. Ricklefs, and S. Blowes for critical comments that greatly improved the manuscript. We acknowledge the support of the German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig funded by the German Research Foundation (FZT 118).
## Author information
### Contributions
P.K. formalized the ideas, collated the data, performed the analyses, and led the writing. J.M.C. proposed the initial idea, contributed to its development, discussed the results, and contributed to the writing.
### Corresponding author
Correspondence to Petr Keil.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Supplementary information
### Supplementary Information
Supplementary Figures 1–13, Supplementary Table 1, Supplementary Methods, Supplementary Discussion and Supplementary References
Keil, P., Chase, J.M. Global patterns and drivers of tree diversity integrated across a continuum of spatial grains. Nat Ecol Evol 3, 390–399 (2019). https://doi.org/10.1038/s41559-019-0799-0
This article is cited by:
• Zhongguan Jiang, Bingguo Dai, Chao Wang & Wen Xiong. Multifaceted biodiversity measurements reveal incongruent conservation priorities for rivers in the upper reach and lakes in the middle-lower reach of the largest river-floodplain ecosystem in China. Science of The Total Environment (2020)
• Jesús N. Pinto-Ledezma, Fabricio Villalobos, Peter B. Reich, Jane A. Catford, Daniel J. Larkin & Jeannine Cavender-Bares. Testing Darwin’s naturalization conundrum based on taxonomic, phylogenetic, and functional dimensions of vascular plants. Ecological Monographs (2020)
• Kuo-Fang Chung. In memoriam Ching-I Peng (1950–2018)—an outstanding scientist and mentor with a remarkable legacy. Botanical Studies (2020)
• Wencai Wang, Siyun Chen, Wei Guo, Yongquan Li & Xianzhi Zhang. Tropical plants evolve faster than their temperate relatives: a case from the bamboos (Poaceae: Bambusoideae) based on chloroplast genome data. Biotechnology & Biotechnological Equipment (2020)
• Andrew J. Hoskins, Thomas D. Harwood, Chris Ware, Kristen J. Williams, Justin J. Perry, Noboru Ota, Jim R. Croft, David K. Yeates, Walter Jetz, Maciej Golebiewski, Andy Purvis, Tim Robertson & Simon Ferrier. BILBI: Supporting global biodiversity assessment through high-resolution macroecological modelling.
Environmental Modelling & Software (2020) | 2020-08-06 16:14:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5798125863075256, "perplexity": 11017.423010365155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439736972.79/warc/CC-MAIN-20200806151047-20200806181047-00213.warc.gz"} |
https://math.stackexchange.com/questions/2432079/determinant-of-a-matrix-that-is-its-own-inverse | # Determinant of a matrix that is its own inverse [duplicate]
I need help showing that for an $n \times n$ matrix with the property
$$M=M^{-1}$$
that is
$$M^2 = I$$
the determinant is either $1$ or $-1$.
I've tried showing it in a couple of ways, and the approach I'm currently attempting has me stuck:
$$K^2 = I$$
$$K^2 - I = 0$$
$$\det(K^2 - I) = 0$$
$$\det(I - I) = 0$$
I get here and I am hopelessly stuck. Could I go on to prove it this way? Is there any elementary way to prove this?
• $\det(A^{-1})=(\det A)^{-1}$ Sep 16 '17 at 18:24
• What do you get from $\det(M^2)=\det(I)$? Sep 16 '17 at 18:25
• if $\det M^2=\det I=1$ it follows that $(\det M)^2=1$ so $\det M = \pm 1$ Sep 16 '17 at 18:30
If
$M = M^{-1}, \tag 1$
then
$M^2 = I, \tag 2$
so by the multiplicative property of determinants,
$(\det M)^2 = \det (M^2) = \det I = 1, \tag 3$
which implies that
$\det M = \pm 1. \tag 4$
Now in fact, we can go a little further with only a little more work and show that every eigenvalue of $M$ is in the set $S = \{-1, 1\}$. For if
$Mv = \mu v \tag 5$
for some non-zero vector $v$, then
$\mu^2 v = \mu(\mu v) = \mu Mv = M(\mu v) = M(Mv)= M^2 v = Iv = v; \tag 6$
thus
$\mu^2 = 1, \tag 7$
or
$\mu = \pm 1. \tag 8$
Since the eigenvalues of $M$ lie in the set $S$, and $\det M$ is the product of its eigenvalues, we again see that we must have (4).
Finally, we can also write
$(M + I)(M - I) = M^2 - I = 0, \tag 9$
whence
$\det(M + I) \det(M - I) = 0; \tag {10}$
thus
$\det(M + I) = 0 \tag{11}$
or
$\det(M - I) = 0; \tag{12}$
in the former case, there exists a vector $v$ with
$Mv = -v; \tag{13}$
in the latter
$Mv = v, \tag{14}$
which gives a quick and easy proof of the existence of eigenvectors corresponding to the eigenvalues $\mu = \pm 1$.
We have that $$1 = \det(I) = \det(AA^{-1}) = \det(AA) = \det(A)\det(A) = \det(A)^2.$$ So, $\det(A)^2 - 1 = 0$. This is a polynomial in $\det(A)$, what are its solutions? | 2021-09-24 21:35:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.912382185459137, "perplexity": 142.64785404130453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057580.39/warc/CC-MAIN-20210924201616-20210924231616-00046.warc.gz"} |
https://math.meta.stackexchange.com/questions/9640/should-m-se-consider-a-question-wizard-guide | # Should M.SE consider a Question Wizard Guide?
After spending quite a bit of time doing thousands of reviews of first posts, suggested edits, editing questions and trying to understand the comments/concerns about the quality of the site, I am wondering if MSE should consider a Question Wizard Guide. This Wizard would 'unobtrusively' guide users when posting questions.
When someone posts a question, the wizard will display a set of translucent guides to help the OP. Note, these would not be binding, but are only for showing the basic level of quality desired (they can still make bad posts, but if those are closed, at least they have a better idea as to why).
For example, this could include:
• Clearly written and formatted question: <...>
• Your attempts and thoughts on the problem: <...>
• Source of the problem: <...>
• Note, failure to comply with basic minimal quality standards could result in closure of your question.
We can decide what this list would be, but I think this would go a long way in addressing the concerns of all parties (OPs and the MSE Community both).
This would have the added benefit of letting new posters see that posted questions follow a certain level of consistency/quality (because we know they don't read the FAQ) and, I believe, that this is one of our missions, to help the newer mathematicians learn the importance of properly communicating.
• @75064: I am not tied to any particular approach and perhaps after they build up enough reputation, it would revert back to the current method as hopefully, they've learned about the expectations. So, the actual implementation can be whatever the MSE Community deems best. This idea is being floated as a possible approach to help improve the quality of postings and that satisfies some of the concerns of MSE become an answer only mill. Regards May 19 '13 at 19:26
• Since this involves a fairly major change in functionality, I think it would have to happen at the network level, which I think is unlikely to happen. May 19 '13 at 23:37
• Ooh! Can we have those guides offered by a cheerful anthropomorphic paperclip? May 20 '13 at 12:17
• @NateEldredge: I would have preferred one that is more mathematically slanted with depression, a bad attitude and socially dysfunctional. This way, when it yells at the new recruits that are posting, they can feel right at home! May 20 '13 at 14:33
• I've suggested something similar before. May 22 '13 at 17:41
While the chances of adding translucent things to the question dialog appear to be slim, I noticed that ServerFault enabled something of the sort (following the lead of SO): Should folks have to click through an interstitial page to ask questions on Server Fault?. See also the follow-up analysis in an answer by Shog9.
The effect of this measure on SF has been underwhelming. But so is their interstitial page, which is basically copy-pasted from SE-wide boilerplate How to ask. The only site-specific sentence they put in was
• If it has any hope of eliminating even a fraction of the $x1$+2$x2$<sup>10</sup>-x3², I'm in favor. Nov 5 '13 at 2:07 | 2021-10-24 12:24:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41320115327835083, "perplexity": 1657.4179858148152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585997.77/warc/CC-MAIN-20211024111905-20211024141905-00306.warc.gz"} |
http://clay6.com/qa/34567/ecotype-is-a-type-of-species-in-which-environmentally-induced-variations-ar | # Ecotype is a type of species in which environmentally induced variations are :-
$\begin{array}{l}(a)\;\text{Temporary}\\(b)\;\text{Genetically fixed}\\(c)\;\text{Genetically not related}\\(d)\;\text{None of the above}\end{array}$
Ecotype is a type of species in which environmentally induced variations are genetically fixed.
Hence (b) is the correct answer.
answered Mar 27, 2014 | 2016-12-03 13:37:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6489731669425964, "perplexity": 7208.911939988753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540932.10/warc/CC-MAIN-20161202170900-00451-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://control.com/textbook/continuous-fluid-flow-measurement/change-of-quantity-flow-measurement/ | Change-of-quantity Flow Measurement
Chapter 22 - Introduction to Continuous Fluid Flow Measurement
Flow, by definition, is the passage of material from one location to another over time. So far this chapter has explored technologies for measuring flow rate en route from source to destination. However, a completely different method exists for measuring flow rates: measuring how much material has either departed or arrived at the terminal locations over time.
Mathematically, we may express flow as a ratio of quantity to time. Whether it is volumetric flow or mass flow we are referring to, the concept is the same: quantity of material moved per quantity of time. We may express average flow rates as ratios of changes:
$\overline{W} = {\Delta m \over \Delta t} \hskip 50pt \overline{Q} = {\Delta V \over \Delta t}$
Where,
$$\overline{W}$$ = Average mass flow rate
$$\overline{Q}$$ = Average volumetric flow rate
$$\Delta m$$ = Change in mass
$$\Delta V$$ = Change in volume
$$\Delta t$$ = Change in time
Suppose a water storage vessel is equipped with load cells to precisely measure weight (which is directly proportional to mass with constant gravity). Assuming only one pipe entering or exiting the vessel, any flow of water through that pipe will result in the vessel’s total weight changing over time.
If the measured mass of this vessel decreased from 74688 kilograms to 70100 kilograms between 4:05 AM and 4:07 AM, we could say that the average mass flow rate of water leaving the vessel is 2294 kilograms per minute over that time span.
$\overline{W} = {\Delta m \over \Delta t} = {70100 \hbox{ kg} - 74688 \hbox{ kg} \over \hbox{4:07} - \hbox{4:05}} = {-4588 \hbox{ kg} \over 2 \hbox{ min}} = -2294 {\hbox{kg} \over \hbox{min}}$
Note that this average flow measurement may be determined without any flowmeter of any kind installed in the pipe to intercept the water flow. All the concerns of flowmeters studied thus far (turbulence, Reynolds number, fluid properties, etc.) are completely irrelevant. We may measure practically any flow rate we desire simply by measuring stored weight (or volume) over time. A computer may do this calculation automatically for us if we wish, on practically any time scale desired.
Now suppose the practice of determining average flow rates every two minutes was considered too infrequent. Imagine that operations personnel require flow data calculated and displayed more often than just 30 times an hour. All we must do to achieve better time resolution is take weight (mass) measurements more often. Of course, each mass-change interval will be expected to be less with more frequent measurements, but the amount of time we divide by in each calculation will be proportionally smaller as well. If the flow rate happens to be absolutely steady, we may sample mass as frequently as we might like and we will still arrive at the same flow rate value as before (sampling mass just once every two minutes). If, however, the flow rate is not steady, sampling more often will allow us to better see the immediate “ups” and “downs” of flow behavior.
Imagine now that we had our hypothetical “flow computer” take weight (mass) measurements at an infinitely fast pace: an infinite number of samples per second. Now, we are no longer averaging flow rates over finite periods of time; instead we would be calculating instantaneous flow rate at any given point in time.
Calculus has a special form of symbology to represent such hypothetical scenarios: we replace the Greek letter “delta” ($$\Delta$$, meaning “change”) with the roman letter “d” (meaning differential). A simple way of picturing the meaning of “d” is to think of it as meaning an infinitesimal change in whatever variable follows the “d” in the equation. When we set up two differentials in a quotient, we call the $$d \over d$$ fraction a derivative. Re-writing our average flow rate equations in derivative (calculus) form:
$W = {dm \over dt} \hskip 50pt Q = {dV \over dt}$
Where,
$$W$$ = Instantaneous mass flow rate
$$Q$$ = Instantaneous volumetric flow rate
$$dm$$ = Infinitesimal (infinitely small) change in mass
$$dV$$ = Infinitesimal (infinitely small) change in volume
$$dt$$ = Infinitesimal (infinitely small) change in time
We need not dream of hypothetical computers capable of infinite calculations per second in order to derive a flow measurement from a mass (or volume) measurement. Analog electronic circuitry exploits the natural properties of resistors and capacitors to essentially do this very thing in real time.
In the vast majority of applications you will see digital computers used to calculate average flow rates rather than analog electronic circuits calculating instantaneous flow rates. The broad capabilities of digital computers virtually ensures they will be used somewhere in the measurement/control system, so the rationale is to use the existing digital computer to calculate flow rates (albeit imperfectly) rather than complicate the system design with additional (analog) circuitry. As fast as modern digital computers are able to process simple calculations such as these anyway, there is little practical reason to prefer analog signal differentiation except in specialized applications where high speed performance is paramount.
Perhaps the single greatest disadvantage to inferring flow rate by differentiating mass or volume measurements over time is the requirement that the storage vessel have only one flow path in and out. If the vessel has multiple paths for liquid to move in and out (simultaneously), any flow rate calculated on change-in-quantity will be a net flow rate only. It is impossible to use this flow measurement technique to measure one flow out of multiple flows common to one liquid storage vessel.
A simple “thought experiment” confirms this fact. Imagine a water storage vessel receiving a flow rate in at 200 gallons per minute. Next, imagine that same vessel emptying water out of a second pipe at the exact same flow rate: 200 gallons per minute. With the exact same flow rate both entering and exiting the vessel, the water level in the vessel will remain constant. Any change-of-quantity flow measurement system would register zero change in mass or volume over time, consequently calculating a flow rate of absolutely zero. Truly, the net flow rate for this vessel is zero, but this tells us nothing about the flow in each pipe, except that those flow rates are equal in magnitude and opposite in direction.
Published under the terms and conditions of the Creative Commons Attribution 4.0 International Public License | 2020-04-08 09:05:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.687574028968811, "perplexity": 927.3728067363489}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371810807.81/warc/CC-MAIN-20200408072713-20200408103213-00244.warc.gz"} |
http://math.stackexchange.com/questions/115889/on-integer-partition | # On Integer Partition
Theorem:
For any $n, k \in \mathbb N$, the number of partitions of $n$ into parts, each of which appears at most $k$ times, is equal to the number of partitions of $n$ into parts the sizes of which are not divisible by $k+1$.
I know of a proof for the case $k = 2$; it uses generating functions (see Chen Chuan-Chong's book "Principles and Techniques in Combinatorics"). I tried to adapt that proof, but failed. There may well be a better way of attacking this problem; I just don't know it. I need some hints. Thanks.
hold on. let me re-phrase – a little don Mar 3 '12 at 5:35
$$\prod(1+x^j+x^{2j}+\cdots+x^{kj})=\prod{1-x^{(k+1)j}\over1-x^j}=\prod_{k+1\nmid j}{1\over1-x^j}$$
I don't get the solution. Can you please further explain how come? thanks.. or is there anyone who can quite expound why is the initial stuff is like that..:) – Keneth Adrian Mar 3 '12 at 11:09
The 1st product is the generating function for partitions, each part appearing at most $k$ times. Geometric series gets you to the 2nd product. Cancellation gets you to the 3rd product, which is the generating function for partitions into parts not divisible by $k+1$. Let me know if you need more - but, if you do, please let me know exactly which part you need more about. – Gerry Myerson Mar 3 '12 at 11:57
Thanks a lot for the explanation..How could the second product turn to the blast product. Is it algebraic? because the numerator suddenly turns to 1 – Keneth Adrian Mar 4 '12 at 10:19
I don't know what a blast product is. Each factor in the numerator of the second product also appears, later on, in the denominator. Cancel, and nothing is left in the numerator, while all the terms of the form $1-x^{(k+1)r}$, $r=1,2,\dots$, are gone from the denominator. Write out the first few terms for, say, $k=2$, and you'll see the cancellation. – Gerry Myerson Mar 4 '12 at 11:38
Sorry, It should have been "last".. Oh, I see it.. thanks. :) – Keneth Adrian Mar 5 '12 at 2:09
There is also a bijective proof.
Given a partition of $n$ in which no part is divisible by $k+1$, suppose you have some part $a$ that is used more than $k$ times. Then merge $k+1$ occurrences of $a$ into a single occurrence of $(k+1)a$. Repeat until no part is used more than $k$ times.
Given a partition of $n$ in which no part is used more than $k$ times, if you have a part $a=(k+1)r$ that is a multiple of $k+1$, split it into $k+1$ occurrences of the part $r$. Repeat until no part is a multiple of $k+1$.
This gives you a bijection between partitions with no part divisible by $k+1$, and partitions with no part used more than $k$ times.
To illustrate, with $k=2$. Consider the partition $$39=1+1+1+1+1+1+1+2+2+2+2+2+2+4+4+4+4+4$$ This partition has no part divisible by $3$. Let's abbreviate it to $1^72^64^5$. Then we go $$1^72^64^5\to1^31^42^64^5\to1^42^634^5\to1^312^634^5\to12^63^24^5\to12^32^33^24^5\to12^33^24^56\to$$
$$\to13^24^56^2\to13^24^34^26^2\to13^24^26^2(12)$$ and we have a partition with no part used more than twice.
In the other direction, suppose we start with $51=123^24^2567^29$, no part used more than twice. We split the $9$ into three $3$s, then the $6$ into three $2$s, then each $3$ into three $1$s: $$123^24^2567^29\to123^54^2567^2\to12^43^54^257^2\to1^{16}2^44^257^2$$ a partition with no part divisible by $3$.
- | 2014-03-11 12:54:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8696728944778442, "perplexity": 288.014666646914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011198370/warc/CC-MAIN-20140305091958-00016-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://en.wikipedia.org/wiki/Linear_element | Electrical element
Electrical elements are conceptual abstractions representing idealized electrical components, such as resistors, capacitors, and inductors, used in the analysis of electrical networks. All electrical networks can be analyzed as multiple electrical elements interconnected by wires. Where the elements roughly correspond to real components the representation can be in the form of a schematic diagram or circuit diagram. This is called a lumped element circuit model. In other cases infinitesimal elements are used to model the network in a distributed element model.
These ideal electrical elements represent real, physical electrical or electronic components, but they do not exist physically and they are assumed to have ideal properties. Actual electrical components have less-than-ideal properties, a degree of uncertainty in their values, and some degree of nonlinearity. Modeling the nonideal behavior of a real circuit component may require a combination of multiple ideal electrical elements in order to approximate its function. For example, an inductor circuit element is assumed to have inductance but no resistance or capacitance, while a real inductor, a coil of wire, has some resistance in addition to its inductance. This may be modeled by an ideal inductance element in series with a resistance.
Circuit analysis using electric elements is useful for understanding many practical electrical networks using components. By analyzing the way a network is affected by its individual elements it is possible to estimate how a real network will behave.
Types
Circuit elements can be classified into different categories. One is how many terminals they have to connect them to other components:
• One-port elements – these represent the simplest components, which have only two terminals to connect to. Examples are resistances, capacitances, inductances, and diodes.
• Multiport elements – these have more than two terminals. They connect to the external circuit through multiple pairs of terminals called ports. For example, a transformer with three separate windings has six terminals and could be idealized as a three-port element; the ends of each winding are connected to a pair of terminals which represent a port.
• Two-port elements – these are the most common multiport elements, which have four terminals consisting of two ports.
Elements can also be divided into active and passive:
• Active elements or sources – these are elements which can source electrical power; examples are voltage sources and current sources. They can be used to represent ideal batteries and power supplies.
• Dependent sources – These are two-port elements with a voltage or current source which is proportional to the voltage or current at a second pair of terminals. These are used in the modelling of amplifying components such as transistors, vacuum tubes, and op-amps.
• Passive elements – These are elements which do not have a source of energy, examples are diodes, resistances, capacitances, and inductances.
Another distinction is between linear and nonlinear:
• Linear elements – these are elements in which the constituent relation, the relation between voltage and current, is a linear function. They obey the superposition principle. Examples of linear elements are resistances, capacitances, inductances, and linear dependent sources. Circuits with only linear elements, linear circuits, do not cause intermodulation distortion, and can be easily analysed with powerful mathematical techniques such as the Laplace transform.
• Nonlinear elements – these are elements in which the relation between voltage and current is a nonlinear function. An example is a diode, in which the current is an exponential function of the voltage. Circuits with nonlinear elements are harder to analyse and design, often requiring circuit simulation computer programs such as SPICE.
Standard elements
Most electrical components and circuits can be modeled using nine standard elements: two sources, three passive elements, and four dependent sources.[citation needed] Each element is defined by a relation between the state variables of the network: current, ${\displaystyle I}$; voltage, ${\displaystyle V}$; charge, ${\displaystyle Q}$; and magnetic flux, ${\displaystyle \Phi }$.
• Two sources:
• Current sources, measured in amperes – produces a current in a conductor. Affects charge according to the relation ${\displaystyle dQ=-I\,dt}$.
• Voltage sources, measured in volts – produces a potential difference between two points. Affects magnetic flux according to the relation ${\displaystyle d\Phi =V\,dt}$.
${\displaystyle \Phi }$ in this relationship does not necessarily represent anything physically meaningful. In the case of the current generator, ${\displaystyle Q}$, the time integral of current, represents the quantity of electric charge physically delivered by the generator. Here ${\displaystyle \Phi }$ is the time integral of voltage but whether or not that represents a physical quantity depends on the nature of the voltage source. For a voltage generated by magnetic induction it is meaningful, but for an electrochemical source, or a voltage that is the output of another circuit, no physical meaning is attached to it.
Both these elements are necessarily non-linear elements. See #Non-linear elements below.
• Three passive elements: (In 2008 a fourth passive element, the memristor, was created in a lab, but it is not yet found in most circuits.)
• Resistors, with resistance ${\displaystyle R}$ measured in ohms – produces a voltage proportional to the current flowing through the element. Relates voltage and current according to the relation ${\displaystyle dV=R\,dI}$.
• Capacitors, with capacitance ${\displaystyle C}$ measured in farads – produces a current proportional to the rate of change of voltage across the element. Relates charge and voltage according to the relation ${\displaystyle dQ=C\,dV}$.
• Inductors, with inductance ${\displaystyle L}$ measured in henries – produces the magnetic flux proportional to the rate of change of current through the element. Relates flux and current according to the relation ${\displaystyle d\Phi =L\,dI}$.
• Four two-port active elements:
• Voltage-controlled voltage sources (VCVS) – Generates a voltage based on another voltage with respect to a specified gain. (has infinite input impedance and zero output impedance).
• Voltage-controlled current sources (VCCS) – Generates a current based on a voltage elsewhere in the circuit, with respect to a specified gain, used to model field-effect transistors and vacuum tubes (has infinite input impedance and infinite output impedance). The gain is characterised by a transfer conductance which will have units of siemens.
• Current-controlled voltage sources (CCVS) – Generates a voltage based on an input current elsewhere in the circuit with respect to a specified gain. (has zero input impedance and zero output impedance). The gain is characterised by a transfer impedance which will have units of ohms.
• Current-controlled current sources (CCCS) – Generates a current based on an input current and a specified gain. Used to model bipolar junction transistors. (Has zero input impedance and infinite output impedance).
Linear approximations and non-linear elements
Conceptual symmetries of resistor, capacitor, inductor, and memristor.
A nonlinear element in a circuit does not have a linear relationship between its circuit variables. Examples include diodes, transistors and other semiconductor devices, vacuum tubes, and iron-core inductors and transformers when operated above their saturation current. Independent voltage and independent current sources can be considered non-linear resistors.[1]
While linear circuits are easy to model and analyze, their linearity is an approximation that only holds over a certain range of input. Elements that operate linearly at low signal levels often show nonlinearity at higher levels. For instance in many audio systems, turning the volume up can make amplifying elements operate nonlinearly, distorting the sound.
In the more general case: resistance is some function of voltage and current; capacitance some function of voltage and charge; and inductance some function of current and flux.[1] Following this pattern, memristance was hypothesized to be a physical property that was some function of flux and charge.
Property General relation Linear approximation
Resistance ${\displaystyle f(V,I)=0}$ ${\displaystyle dV=RdI}$
Capacitance ${\displaystyle f(V,Q)=0}$ ${\displaystyle dQ=CdV}$
Inductance ${\displaystyle f(\Phi ,I)=0}$ ${\displaystyle d\Phi =LdI}$
Memristance ${\displaystyle f(\Phi ,Q)=0}$ ${\displaystyle d\Phi =MdQ}$
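To make the linear approximations concrete, here is a minimal CoffeeScript sketch (illustrative values and names, not from the article) that steps the resistor relation dV = R dI and the capacitor relation dQ = C dV through time for a series RC circuit charging from a constant source:

```
# Illustrative sketch: Euler-stepping a series RC circuit.
# Vs, R, C and dt are assumed example values.
Vs = 5.0; R = 1000.0; C = 1e-6; dt = 1e-5
Vc = 0.0                          # voltage across the capacitor
for step in [1..100]
  I  = (Vs - Vc) / R              # resistor: voltage drop = R * I
  dQ = I * dt                     # charge delivered in one time step
  Vc += dQ / C                    # capacitor: dQ = C dV, so dV = dQ / C
console.log "Vc after #{100 * dt} s is about #{Vc.toFixed(2)} V"
```

After 100 steps (one time constant, since RC = 1 ms) the capacitor voltage comes out near 5(1 − 1/e) ≈ 3.16 V, as the linear relations predict.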
The memristor was proposed by Leon Chua in a 1971 paper, and its definition remains somewhat controversial. A physical component demonstrating memristance was first created in 2008, by a team at HP Labs led by scientist R. Stanley Williams.[2][3][4][5]
There are also two special non-linear elements, the nullator and norator, which are sometimes used in analysis but are not the ideal counterpart of any real component. These are sometimes used in models of components with more than two terminals, such as transistors.[1]
• Nullator: defined as ${\displaystyle V=I=0}$
• Norator: defined as an element which places no restrictions on voltage and current whatsoever.
Linearizing a nonlinear element
Nonlinear elements can be made to operate linearly if the signal in them is limited to a low level. If the input of a non-linear device such as a transistor only varies in a small range around a fixed value, then the input/output relation is linearized around this fixed value (usually called the quiescent point, Q-point, or bias point). This is called a small signal model.
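A minimal sketch of such a linearization (all names and constants here are assumed example values, not from the article): take a diode's exponential law, compute its slope at the quiescent point, and compare the true curve with the small-signal approximation nearby.

```
# Illustrative sketch: small-signal (Q-point) linearization of a diode.
Is = 1e-12                            # assumed saturation current
Vt = 0.025                            # assumed thermal voltage
i  = (v) -> Is * (Math.exp(v / Vt) - 1)
Vq = 0.6                              # quiescent (bias) point
gd = (Is / Vt) * Math.exp(Vq / Vt)    # slope dI/dV at the Q-point
dv = 0.001                            # small excursion around Vq
console.log "exact:      #{i(Vq + dv)}"
console.log "linearized: #{i(Vq) + gd * dv}"
```

For small dv the two numbers agree closely; for larger excursions the exponential pulls away from the straight line, which is exactly the onset of nonlinear behavior.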
Other two-port elements
All the above are two-terminal, or one-port, elements, with the exception of the dependent sources. There are two lossless, passive, linear two-port elements that are normally introduced into network analysis. Their constitutive relations in matrix notation are:
Transformer
${\displaystyle {\begin{bmatrix}V_{1}\\I_{2}\end{bmatrix}}={\begin{bmatrix}0&n\\-n&0\end{bmatrix}}{\begin{bmatrix}I_{1}\\V_{2}\end{bmatrix}}}$
Gyrator
${\displaystyle {\begin{bmatrix}V_{1}\\V_{2}\end{bmatrix}}={\begin{bmatrix}0&-r\\r&0\end{bmatrix}}{\begin{bmatrix}I_{1}\\I_{2}\end{bmatrix}}}$
The transformer maps a voltage at one port to a voltage at the other in a ratio of n. The current between the same two ports is mapped in the ratio 1/n. The gyrator, on the other hand, maps a voltage at one port to a current at the other. Likewise, currents are mapped to voltages. The quantity r in the matrix is in units of resistance. The gyrator is a necessary element in analysis because it is not reciprocal. Networks built from the basic linear elements only are obliged to be reciprocal and so cannot be used by themselves to represent a non-reciprocal system. It is not essential, however, to have both the transformer and gyrator. Two gyrators in cascade are equivalent to a transformer, but the transformer is usually retained for convenience. Introduction of the gyrator also makes either capacitance or inductance non-essential, since a gyrator terminated with one of these at port 2 will be equivalent to the other at port 1. However, transformer, capacitance and inductance are normally retained in analysis because they are the ideal properties of the basic physical components transformer, inductor and capacitor, whereas a practical gyrator must be constructed as an active circuit.[6][7][8]
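As a worked check of the last claim, using the sign conventions of the matrices above (currents flowing into the ports): terminate a gyrator with a capacitance C at port 2, so that $-I_2 = C\,dV_2/dt$. Since $V_2 = rI_1$,

$$V_1 = -rI_2 = rC\,\frac{dV_2}{dt} = r^2C\,\frac{dI_1}{dt},$$

which is the constitutive relation of an inductance $L = r^2C$ seen at port 1.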
Examples
The following are examples of representation of components by way of electrical elements.
• On a first degree of approximation, a battery is represented by a voltage source. A more refined model also includes a resistance in series with the voltage source, to represent the battery's internal resistance (which results in the battery heating and the voltage dropping when in use). A current source in parallel may be added to represent its leakage (which discharges the battery over a long period of time).
• On a first degree of approximation, a resistor is represented by a resistance. A more refined model also includes a series inductance, to represent the effects of its lead inductance (resistors constructed as a spiral have more significant inductance). A capacitance in parallel may be added to represent the capacitive effect of the proximity of the resistor leads to each other. A wire can be represented as a low-value resistor.
• Current sources are more often used when representing semiconductors. For example, on a first degree of approximation, a bipolar transistor may be represented by a variable current source that is controlled by the input current. | 2018-07-17 10:47:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 26, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.697542667388916, "perplexity": 876.9867433857704}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589634.23/warc/CC-MAIN-20180717090227-20180717110227-00426.warc.gz"} |
https://cstheory.stackexchange.com/tags/online-algorithms/hot | # Tag Info
## Hot answers tagged online-algorithms
19
I think the difficulty is that this wording slightly misleading; as they state more clearly in the Introduction (1.2), "the expected values of the dual variables constitute a feasible dual solution." For every fixed setting of the dual variables $X$, we obtain some primal solution of value $f(X)$ and a dual solution of value $\frac{e}{e-1}f(X)$. (The dual ...
10
The most common use of 'fractional' is when you're solving an integer program. You relax it by dropping the constraints that the solution be integers, thus yielding a linear program in which the variables can take fractional values (fractional comes from fraction). Later, you can convert the fractional values into integers via a process called 'rounding'.
7
In addition to the Heavy Hitters problem you've mentioned (which has quite a few algorithms: batch-decrement, space-saving, etc.), I'd consider presenting the following: Reservoir sampling - maintain a sample of $k$ elements, uniformly sampled from the set of items which appeared in the stream so far, in $O(k)$ space. Approximate bit counting on a sliding ...
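For reference, a minimal sketch of the $k=1$ case of reservoir sampling (illustrative, not a quoted implementation):

```
# Keep one item; the n-th arrival replaces it with probability 1/n,
# which leaves every item kept with probability exactly 1/n overall.
sample = null
n = 0
observe = (item) ->
  n += 1
  sample = item if Math.random() < 1 / n
observe x for x in [1..1000]
console.log "uniformly sampled item: #{sample}"
```

The general $k$-element version keeps an array of $k$ items and, for the $n$-th arrival, replaces a uniformly random slot with probability $k/n$.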
7
A queue can be represented as two stacks and be maintained in amortized constant time. It's then easy to maintain the product of all elements of a stack. See Purely Functional Data Structures by Chris Okasaki. (More specifically, figure 3.2 on pp. 18.) As for how to maintain the product on a stack: suppose the stack is $s_1, s_2,\ldots, s_n$ from bottom to top. For one ...
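A short sketch of that construction (illustrative; the helper names are made up here). Each stack entry carries the running product of everything at or below it, so the queue's product is always the product of the two stack tops:

```
prodOf  = (stack) -> if stack.length then stack[stack.length - 1][1] else 1
push    = (stack, x) -> stack.push [x, x * prodOf(stack)]
inbox   = []
outbox  = []
enqueue = (x) -> push inbox, x
dequeue = ->
  if outbox.length is 0                         # amortized O(1) transfer
    push outbox, inbox.pop()[0] while inbox.length
  outbox.pop()[0]
product = -> prodOf(inbox) * prodOf(outbox)

enqueue x for x in [2, 3, 4]
console.log product()   # 24
dequeue()               # removes 2
console.log product()   # 12
```

The same trick maintains other associative aggregates (min, max, sum) over a queue in amortized constant time per operation.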
6
Brute-force case analysis reveals that the optimal competitive ratio for the special case $n=3$, with no other restrictions on the cost matrix, is the golden ratio $\phi = (\sqrt{5}+1)/2$. Thus, no online algorithm can achieve a competitive ratio better than $\phi$. Suppose $C_{1,2}=0$, $C_{1,3}=1$, and $C_{2,3}=\phi$. Without loss of generality, the ...
4
There are several algorithms for estimating cardinality. This problem seems to be important enough in practice. For example, Redis, which describes itself as a ‘data structure server’, supports it. I suspect students would find this a good motivation. The algorithm that Redis uses, HyperLogLog, may be too difficult to analyze in an undergrad course. But, ...
3
The question does not seem to have been studied much (one possibility is attempting to find a relationship with a "nearby" complexity class, say P/poly, etc.), although here is at least one reference that touches on it: "Language operations with regular expressions of polynomial size" by Gruber/Holzer. This work deals with questions regarding to what extent regularity-...
2
It seems that there are no recent books or survey papers on online algorithms.
2
Per @usul's suggestion, consider the following online greedy algorithm $A$. When bin $i$ is revealed, $A$ considers all subsets $S$ of not-yet assigned balls such that all balls in $S$ can fit together in bin $i$. It chooses any largest such set, say $A_i$, then assigns all balls in $A_i$ to bin $i$. The post asks for an algorithm that each ball can use ...
2
@orlp's intuition is correct. Lemma. No online algorithm solves the problem in the worst case. Proof. Consider the following instance: $$f(p) = (1+p_1)(1+p_2)(1+p_3) \ge 8 \text{ with } B=3.$$ This instance has just one solution (one $p$ that satisfies it), namely $p=(1,1,1)$. So, given $B=3$ and $L=8$ and just the first factor $1+p_1$, the algorithm ...
2
Thanks. With Garmow's comment I was able to find the first two references which also called on-line algorithms as on-line: Johnson, David S. "Fast algorithms for bin packing." Journal of Computer and System Sciences 8.3 (1974): 272-314. and Johnson, David S. Near-optimal bin packing algorithms. Diss. Massachusetts Institute of Technology, 1973.
2
As Ricky Demer said in his comment, many search problems can be sped up with sorting or building some other index structure:
• Lowest common ancestor queries can be answered in constant time with linear preprocessing.
• Lots of text problems can be sped up with some preprocessing, e.g. building a suffix array
2
I think there is no paper solving that exact problem, but "Online Vertex-Weighted Matching" by Aggarwal, Goel, Karande, and Mehta (2011) is very close. If I understood correctly, they solve your problem only with all capacities equal to one. My best guess is that you will have to do some work to extend their guarantees and algorithm to your setting. On the ...
2
If you allow randomization, the CountMin (CM) sketch can be used with weights without modification, and can also handle negative weights. When all weights are positive, the standard analysis of CM shows that with a sketch of size $O(\varepsilon^{-1}\log 1/\delta)$ you can compute a $\tilde{w}_i$ so that $\tilde{w}_i \geq w_i$ always, and $\tilde{w}_i \leq$ ...
2
Here's a generic randomized solution. (Do we even have deterministic solutions in the unweighted case? Don't Space Saving and Batch Decrement both need hash maps?) This is probably not the ideal solution, but it's a start. Weighted Heavy Hitters Algorithm. Input: $S=\{(\text{id}_i,\text{weight}_i)\}_{i=1}^N$ a weighted stream. 1. Create an unweighted ...
2
Let $p$ be the query point, and assume the interval tree is sorted by lower endpoint and each node stores the maximum endpoint in its subtree. Perform a tree-walk and stop the recursion whenever the lower endpoint of the current node is greater than $p$, or the maximum is smaller than $p$. Now at most one downward path (of length $O(\log n)$) reports no ...
1
If I understand your question correctly and the elements arrive in sorted order, I believe the usual bottom-up AVL tree insertion algorithm meets your criteria. In particular, insert-only AVL trees have $O(1)$ amortized (and $O(\lg n)$ worst-case) update time. Simply maintain a pointer to the last element of the tree and perform each insertion at that ...
1
Here is what you are looking for. It is quite new: https://arxiv.org/pdf/1810.07362.pdf
1
Another reason for not using cryptographic algorithms in practice is speed. In the streaming setting, we typically do not want to spend too long processing each item in the stream. Computing k cryptographic hash functions will be much more expensive than computing k fast non-cryptographic hash functions, e.g. MurmurHash. In practice, I think most people use ...
1
The obvious problem is that if you use a cryptographic pseudorandom number generator (PRNG), the correctness of your algorithm is conditional on a complexity conjecture. However, usually this can be avoided, because the full strength of cryptographic pseudorandomness is usually a huge overkill for streaming. If your streaming algorithm uses a small amount of ...
1
You can solve this in $O(n \lg n)$ time through an appropriate use of interval trees. I'm going to explain how to process the $x$'s in an incremental, streaming fashion. So, suppose we've already received $x_1,\dots,x_n$. Define the sequence $m_1,\dots,m_n$ by $m_i=\max(x_i,x_{i+1},\dots,x_n)$. In previous processing, we'll have accumulated a data ...
1
You can solve this in $O(n \lg^2 n)$ time. Build a (balanced) binary tree with $n$ nodes, where the leaves are annotated with the values $x_1,x_2,\dots,x_n$, and $x_i$ is placed on the $i$th leaf from the left. Annotate each internal node $v$ in the tree with the maximum value over all the leaves in the subtree rooted at $v$. For instance, the root is ...
1
I am still unclear about the precise objective you want to optimise over, but you could look at Peter Brucker, Andreas Drexl, Rolf Möhring, Klaus Neumann, and Erwin Pesch, Resource-constrained project scheduling: Notation, classification, models, and methods, European Journal of Operational Research 112 3–41, 1999. doi:10.1016/S0377-2217(98)00204-5 for a ...
1
This question is given as an exercise in the textbook by Borodin and El-Yaniv. Unfortunately, it is impossible to solve. Indeed, it was later discovered that MTF-every-other-access is not 2-competitive. See arxiv.org/abs/1311.7357 for a proof that the competitive ratio of the algorithm is in fact 2.5, and for a history of the false belief that the algorithm ...
1
T. Wagners dissertation is about "Incremental Software Development Environments". You can find a bunch of resources at the Harmonia and Ensemble Homepage
Only top voted, non community-wiki answers of a minimum length are eligible | 2020-08-15 03:22:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7777586579322815, "perplexity": 652.7595289020512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740423.36/warc/CC-MAIN-20200815005453-20200815035453-00424.warc.gz"} |
http://en.wikipedia.org/wiki/Epsilon_conjecture | # Ribet's theorem
(Redirected from Epsilon conjecture)
In mathematics, Ribet's theorem (earlier called the epsilon conjecture or ε-conjecture) is a statement in number theory concerning properties of Galois representations associated with modular forms. It was proposed by Jean-Pierre Serre and proved by Ken Ribet. The proof of the epsilon conjecture was a significant step towards the proof of Fermat's Last Theorem. As shown by Serre and Ribet, the Taniyama–Shimura conjecture (whose status was unresolved at the time) and the epsilon conjecture together imply that Fermat's Last Theorem is true.
## Statement
Let E be an elliptic curve with integer coefficients in a Néron minimal model. Suppose that the discriminant Δ of E is written as a product $\prod_p p^{\delta_p}$ of prime powers $p^{\delta_p}$, and similarly that the conductor N of E is a product $\prod_p p^{n_p}$ of prime powers. Suppose further that E is a modular elliptic curve. Then we can perform a level descent modulo primes ℓ dividing one of the exponents $\delta_p$ of a prime dividing the discriminant. If $p^{\delta_p}$ is an odd prime power factor of Δ and if p divides N only once (i.e. $n_p = 1$), then there exists another elliptic curve E', with conductor N' = N/p, such that the coefficients of the L-series of E are congruent modulo ℓ to the coefficients of the L-series of E'.
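In symbols, writing $a_q$ for the coefficient of the L-series at a prime $q$, the conclusion reads

$$a_q(E) \equiv a_q(E') \pmod{\ell}$$

for all but finitely many primes $q$ (one standard way of phrasing it; the precise set of excluded primes depends on the formulation).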
The epsilon conjecture is a relative statement: assuming that a given elliptic curve E over Q is modular, it predicts the precise level of E.
## History
In his thesis, Yves Hellegouarch[1] came up with the idea of associating solutions (a,b,c) of Fermat's equation with a completely different mathematical object: an elliptic curve. If ℓ is an odd prime and a, b, and c are positive integers such that
$a^\ell + b^\ell = c^\ell,$
then a corresponding Frey curve is an algebraic curve given by the equation
$y^2 = x(x - a^\ell)(x + b^\ell).$
This is a nonsingular algebraic curve of genus one defined over Q, and its projective completion is an elliptic curve over Q.
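The value of the discriminant used in the argument below can be checked directly: the cubic on the right-hand side has roots $0$, $a^\ell$ and $-b^\ell$, and the discriminant of such a curve is $16$ times the squared product of the differences of the roots, so

$$\Delta = 16\,(a^\ell)^2 (b^\ell)^2 (a^\ell + b^\ell)^2 = 16\,(abc)^{2\ell},$$

using $a^\ell + b^\ell = c^\ell$.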
In 1982[2] Gerhard Frey called attention to the unusual properties of the same curve as Hellegouarch, now called a Frey curve. This provided a bridge between Fermat and Taniyama by showing that a counterexample to Fermat's Last Theorem would create such a curve that would not be modular. The conjecture attracted considerable interest when Frey (1986)[3] suggested that the Taniyama–Shimura–Weil conjecture implies Fermat's Last Theorem. However, his argument was not complete. In 1985[4][5] Jean-Pierre Serre proposed that a Frey curve could not be modular and provided a partial proof of this. This showed that a proof of the semistable case of the Taniyama-Shimura conjecture would imply Fermat's Last Theorem. Serre did not provide a complete proof and what was missing became known as the epsilon conjecture or ε-conjecture. In the summer of 1986, Ribet (1990)[6] proved the epsilon conjecture, thereby proving that the Taniyama–Shimura–Weil conjecture implied Fermat's Last Theorem.
## Implication of Fermat's Last Theorem
Suppose that the Fermat equation with exponent ℓ ≥ 3 had a solution in non-zero integers a, b, c. Let us form the corresponding Frey curve E. It is an elliptic curve and one can show that its discriminant Δ is equal to $16(abc)^{2\ell}$ and its conductor N is the radical of abc, i.e. the product of all distinct primes dividing abc. By the Taniyama–Shimura conjecture, E is a modular elliptic curve. Since N is square-free, by the epsilon conjecture one can perform level descent modulo ℓ. Repeating this procedure, we will eliminate all odd primes from the conductor and reach the modular curve $X_0(2)$ of level 2. However, this curve is not an elliptic curve since it has genus zero, resulting in a contradiction. | 2013-12-13 19:48:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9602054357528687, "perplexity": 537.0400271483356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164987957/warc/CC-MAIN-20131204134947-00020-ip-10-33-133-15.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-10th-edition/chapter-r-section-r-2-algebra-essentials-r-2-assess-your-understanding-page-27/25 | ## College Algebra (10th Edition)
$x \gt 0$
$x$ being positive means that it is greater than 0. Thus, in inequality form, the given statement is $x \gt 0$. | 2018-07-18 20:53:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7645825147628784, "perplexity": 600.7766715557267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590329.25/warc/CC-MAIN-20180718193656-20180718213656-00286.warc.gz"} |
http://autotelicum.github.io/Smooth-CoffeeScript/interactive/interactive-coffeescript.html | No JavaScript → no output and no interactivity.
Scratch card disclaimer: Using a coin may cause damage to your screen.
# Smooth CoffeeScript — Interactive Edition
An introduction to programming in CoffeeScript with an emphasis on clarity and abstraction.
This edition requires an HTML5 web browser. It is meant as a gift to welcome you into the wondrous world of programming. You can provide feedback or have a look at the ghost in the machine: Grimoire. It is itself an interactive literate document.
The source of this book is a literate markdown program/document which produces this code: CoffeeScript, that translates into JavaScript and produces this output.
The book is freely (CCBYSA) available, and may be used (as a whole or in parts) in any way you see fit, as long as credit is given to the original author. Interactive edition & illustrations by E. Hoigaard1 © 2555 BE / 2012 CE. Source code, static HTML and PDF editions at: http://autotelicum.github.com/Smooth-CoffeeScript.
##### ○•○
Based on Eloquent JavaScript by Marijn Haverbeke
## Part I. Preface
### Foreword
CoffeeScript is a lucid evolution of JavaScript created by Jeremy Ashkenas. This book attempts to be an evolution of “Eloquent JavaScript” by Marijn Haverbeke. Apart from the major change in explaining CoffeeScript instead of JavaScript, numerous other changes have been made and sections have been added, edited or removed.
Everything that is expressed in this book is therefore solely the responsibility of the editor. In the sense of open source software, this book is a fork. To read the excellent original JavaScript work as it was intended by its author refer to Eloquent JavaScript by Marijn Haverbeke.
You do not need to know JavaScript but after reading Smooth CoffeeScript you can in JavaScript Basics by Rebecca Murphey find an overview that can be helpful when debugging or using JavaScript libraries.
##### ○•○
The program examples in this book live in an interactive environment where you can change the examples and create new solutions while you learn CoffeeScript. The environment also includes the Underscore functional library, the Coffeekup HTML markup, and `qc`, a QuickCheck based testing library. These libraries extend CoffeeScript with useful abstractions and testing tools to keep focus on the task at hand instead of distracting boilerplate code.
While it is possible to express programs from a very small set of language primitives, it quickly becomes tedious and error prone to do so. The approach taken here is to include a broader set of functional building blocks as if they were a native part of the programming language. By thinking in terms of these higher level constructs more complex problems can be handled with less effort.
To ensure correctness, testing is required. This is especially true when developing reusable algorithms in a dynamic and untyped language. By integrating QuickCheck style test cases as soon as functions are introduced, it is intended that writing tests and declaring assumptions become a seamless part of writing software.
##### ○•○
CoffeeScript is available in browsers and environments where JavaScript is available. The screenshots below show CoffeeScript running the same web server and client application on Mac OS X, Windows and iOS.
The CoffeeScript below shows the brevity of the self-contained application from the screenshots. It contains an HTML 5 web page in Coffeekup markup and its own HTTP web server. The page has a Canvas element with a drawing of the ‘Seed of Life’. Coffeekup is a few hundred lines of CoffeeScript. No frameworks. No JavaScript. Pure and smooth.
```
webdesign = ->
  doctype 5
  html ->
    head ->
      meta charset: 'utf-8'
      title 'My drawing | My awesome website'
      style '''
        body {font-family: sans-serif}
        header, nav, section, footer {display: block}
        '''
      coffeescript ->
        draw = (ctx, x, y) ->
          circle = (ctx, x, y) ->
            ctx.beginPath()
            ctx.arc x, y, 100, 0, 2*Math.PI, false
            ctx.stroke()
          ctx.strokeStyle = 'rgba(255,40,20,0.7)'
          circle ctx, x, y
          for angle in [0...2*Math.PI] by 1/3*Math.PI
            circle ctx, x+100*Math.cos(angle), y+100*Math.sin(angle)
        window.onload = ->
          canvas = document.getElementById 'drawCanvas'
          context = canvas.getContext '2d'
          draw context, 300, 200
    body ->
      header ->
        h1 'Seed of Life'
      canvas id: 'drawCanvas', width: 550, height: 400

kup = if exports? then require 'coffeekup' else CoffeeKup
webpage = kup.render webdesign, format: on
showDocument webpage, 565, 500
```
An accompanying web server can be written as easily as:
```
http = require 'http'
server = http.createServer (req, res) ->
  show "#{req.client.remoteAddress} #{req.method} #{req.url}"
  res.writeHead 200, 'Content-Type': 'text/html'
  res.write webpage
  res.end()
server.listen 3389
show 'Server running at'
show server.address()
```
### Getting Started
You do not need to install anything to learn CoffeeScript and complete this book … as long as you have an up-to-date standard compliant HTML5 web browser.2
If you are new to programming then I recommend that you read through the book sequentially and solve the exercises along the way. If you already know JavaScript, then you can choose to read through the language overview at coffeescript.org and skip forward to solve some of the exercises in the Paradigm chapters.
In this Interactive Edition almost all examples3 can be edited and the changes you make take effect instantly. To open an editor, touch or click on the program text in the example.
You can choose to disable automatic evaluation in the menu at the top of this page; then you will have to use a keystroke combination instead.4 There you can also choose between editing in a plain text area or with the embedded CodeMirror editor. If you have a compatible browser then try the CodeMirror option — it has syntax highlighting and semi-automatic indentation.
Tip: Some browsers may become unresponsive when running for example a never ending loop — so save work you may have in other tabs before proceeding. Should it happen then refresh this web page or restart your browser.
Some of the examples in the last chapter of the book show WebSockets — it is optional but to run them you need a compatible browser and CoffeeScript on your system. Installing CoffeeScript on Windows and Mac OS X is easy, see the couple of steps involved in Quick CoffeeScript Install. You get a Read-Eval-Print-Loop (REPL), a commandline interpreter and compiler that can watch for file changes and compile your CoffeeScript modules whenever you save them.
##### ○•○
You will encounter some functions and symbols that are not part of CoffeeScript when you read through the rest of the book. They are needed to run the interactive environment and are explained as they are introduced. The following tables summarize them:
| Function | Action |
| --- | --- |
| `show`/`view` | serialized object without or with a return value |
| `showDocument` | embeds an HTML web page inside this web page |
| `runOnDemand` | creates a ‘Run’ button; click it to try the code |
| `confirm` | displays a question and ‘Yes’ and ‘No’ buttons |
| `prompt` | displays a message and an input field |

| Symbol | Explanation |
| --- | --- |
| `→` | normal output from `show`/`view` functions |
| `⇒` | asynchronous output from display functions |
| `↵` | return value from an executed code block |
| `☕` | timing information if enabled |
## Part II. Language
### Introduction
When personal computers were first introduced, most of them came equipped with a simple programming language, usually a variant of BASIC. Interacting with the computer was closely integrated with this language, and thus every computer-user, whether he wanted to or not, would get a taste of it. Now that computers have become plentiful and cheap, typical users do not get much further than clicking things with a mouse. For most people, this works very well. But for those of us with a natural inclination towards technological tinkering, the removal of programming from every-day computer use presents something of a barrier.
Fortunately, as an effect of developments in the World Wide Web, it so happens that every computer equipped with a modern web-browser also has an environment for programming JavaScript which can easily be adapted to an environment for CoffeeScript. In today’s spirit of not bothering the user with technical details, it is kept well hidden, but a web-page can make it accessible, and use it as a platform for learning to program. This is such a page.
##### ○•○
I do not enlighten those who are not eager to learn, nor arouse those who are not anxious to give an explanation themselves. If I have presented one corner of the square and they cannot come back to me with the other three, I should not go over the points again.
Confucius
Besides explaining CoffeeScript, this book tries to be an introduction to the basic principles of programming. Programming, it turns out, is hard. The fundamental rules are, most of the time, simple and clear. But programs, while built on top of these basic rules, tend to become complex enough to introduce their own rules, their own complexity. Because of this, programming is rarely simple or predictable. As Donald Knuth, who is something of a founding father of the field, says, it is an art.
To get something out of this book, more than just passive reading is required. Try to stay sharp, make an effort to solve the exercises, and only continue on when you are reasonably sure you understand the material that came before.
##### ○•○
The computer programmer is a creator of universes for which he alone is responsible. Universes of virtually unlimited complexity can be created in the form of computer programs.
Joseph Weizenbaum, Computer Power and Human Reason
A program is many things. It is a piece of text typed by a programmer, it is the directing force that makes the computer do what it does, it is data in the computer’s memory, yet it controls the actions performed on this same memory. Analogies that try to compare programs to objects we are familiar with tend to fall short, but a superficially fitting one is that of a machine. The gears of a mechanical watch fit together ingeniously, and if the watchmaker was any good, it will accurately show the time for many years. The elements of a program fit together in a similar way, and if the programmer knows what he is doing, the program will run without crashing.
A computer is a machine built to act as a host for these immaterial machines. Computers themselves can only do stupidly straightforward things. The reason they are so useful is that they do these things at an incredibly high speed. A program can, by ingeniously combining many of these simple actions, do very complicated things.
To some of us, writing computer programs is a fascinating game. A program is a building of thought. It is costless to build, weightless, growing easily under our typing hands. If we get carried away, its size and complexity will grow out of control, confusing even the one who created it. This is the main problem of programming. It is why so much of today’s software tends to crash, fail, screw up.
When a program works, it is beautiful. The art of programming is the skill of controlling complexity. The great program is subdued, made simple in its complexity.
##### ○•○
Today, many programmers believe that this complexity is best managed by using only a small set of well-understood techniques in their programs. They have composed strict rules about the form programs should have, and the more zealous among them will denounce those who break these rules as bad programmers.
What hostility to the richness of programming! To try to reduce it to something straightforward and predictable, to place a taboo on all the weird and beautiful programs. The landscape of programming techniques is enormous, fascinating in its diversity, still largely unexplored.
It is certainly littered with traps and snares, luring the inexperienced programmer into all kinds of horrible mistakes, but that only means you should proceed with caution, keep your wits about you. As you learn, there will always be new challenges, new territory to explore. The programmer who refuses to keep exploring will surely stagnate, forget his joy, lose the will to program (and become a manager).
As far as I am concerned, the definite criterion for a program is whether it is correct. Efficiency, clarity, and size are also important, but how to balance these against each other is always a matter of judgement, a judgement that each programmer must make for himself. Rules of thumb are useful, but one should never be afraid to break them.
##### ○•○
In the beginning, at the birth of computing, there were no programming languages. Programs looked something like this:
``````00110001 00000000 00000000 00110001 00000001 00000001
00110011 00000001 00000010 01010001 00001011 00000010
00100010 00000010 00001000 01000011 00000001 00000000
01000001 00000001 00000001 00010000 00000010 00000000
01100010 00000000 00000000
``````
That is a program to add the numbers from one to ten together, and print out the result (1 + 2 + … + 10 = 55). It could run on a very simple kind of computer. To program early computers, it was necessary to set large arrays of switches in the right position, or punch holes in strips of cardboard and feed them to the computer. You can imagine how this was a tedious, error-prone procedure. Even the writing of simple programs required much cleverness and discipline, complex ones were nearly inconceivable.
Of course, manually entering these arcane patterns of bits (which is what the 1s and 0s above are generally called) did give the programmer a profound sense of being a mighty wizard. And that has to be worth something, in terms of job satisfaction.
Each line of the program contains a single instruction. It could be written in English like this:
``````1 Store the number 0 in memory location 0
2 Store the number 1 in memory location 1
3 Store the value of memory location 1 in location 2
4 Subtract the number 11 from the value in location 2
5 If the value in memory location 2 is the number 0,
continue with instruction 9
6 Add the value of memory location 1 to location 0
7 Add the number 1 to the value of memory location 1
8 Continue with instruction 3
9 Output the value of memory location 0
``````
While that is more readable than the binary soup, it is still rather unpleasant. It might help to use names instead of numbers for the instructions and memory locations:
``````Set 'total' to 0
Set 'count' to 1
[loop]
Set 'compare' to 'count'
Subtract 11 from 'compare'
If 'compare' is zero, continue at [end]
Add 'count' to 'total'
Add 1 to 'count'
Continue at [loop]
[end]
Output 'total'
``````
At this point it is not too hard to see how the program works. Can you? The first two lines give two memory locations their starting values: `total` will be used to build up the result of the program, and `count` keeps track of the number that we are currently looking at. The lines using `compare` are probably the weirdest ones. What the program wants to do is see if `count` is equal to 11, in order to decide whether it can stop yet. Because the machine is so primitive, it can only test whether a number is zero, and make a decision (jump) based on that. So it uses the memory location labelled `compare` to compute the value of `count - 11`, and makes a decision based on that value. The next two lines add the value of `count` to the result, and increment `count` by one every time the program has decided that it is not 11 yet. Here is the same program in CoffeeScript:
```
total = 0
count = 1
while count <= 10
  total += count
  count += 1
total
```
This gives us a few more improvements. Most importantly, there is no need to specify the way we want the program to jump back and forth anymore. The magic word `while` takes care of that. It continues executing the lines indented below it as long as the condition it was given holds: `count <= 10`, which means ‘`count` is less than or equal to `10`’. Apparently, there is no need anymore to create a temporary value and compare that to zero. This was a stupid little detail, and the power of programming languages is that they take care of stupid little details for us.
This can also be expressed in a shorter form in CoffeeScript:
```
total = 0
total += count for count in [1..10]
total
```
The `for` and `in` words goes through the range of numbers from 1 to 10 `[1..10]`, assigning each number in turn to `count`. Each value in `count` is then added to `total`.
Finally, here is what the program could look like if we happened to have the convenient operation `sum` available, which computes the sum of a collection of numbers similar to the mathematical notation $\sum_{n=1}^{10} n$:
``sum [1..10]``
Another possibility is to have functions attached to datatypes. Here a sum function is attached to an array, giving the sum of the elements in the array.
``[1..10].sum()``
The moral of this story, then, is that the same program can be expressed in long and short, unreadable and readable ways. The first version of the program was extremely obscure, while the last ones are almost English: `show` the `sum` of the numbers from `1` to `10`. (We will see in later chapters how to build things like `sum`.)
A good programming language helps the programmer by providing a more abstract way to express himself. It hides uninteresting details, provides convenient building blocks (such as the `while` construct), and, most of the time, allows the programmer to add building blocks himself (such as the `sum` operation).
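As a taste of what adding such a building block might look like, here is one possible sketch of `sum` (the book develops functions like this properly in later chapters):

```
# One way the `sum` operation used above could be defined:
sum = (numbers) ->
  total = 0
  total += n for n in numbers
  total
show sum [1..10]   # → 55
```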
##### ○•○
JavaScript is the language that is, at the moment, mostly being used to do all kinds of clever and horrible things with pages on the World Wide Web. JavaScript is also used for scripting in a variety of applications and operating systems. Of special note is server-side JavaScript (SSJS), where the server portion of a web application is written in JavaScript, so a full application can be expressed in one programming language. CoffeeScript generates standard JavaScript code and can therefore be used in environments where standard JavaScript is accepted. It means that both browser portions and server portions can be written in CoffeeScript.
CoffeeScript is a new language so it remains to be seen how popular it becomes for general application development, but if you are interested in programming, CoffeeScript is definitely a useful language to learn. Even if you do not end up doing much web programming, the mind-bending programs in this book will always stay with you, haunt you, and influence the programs you write in other languages.
There are those who will say terrible things about JavaScript. Many of these things are true. When I was for the first time required to write something in JavaScript, I quickly came to despise the language. It would accept almost anything I typed, but interpret it in a way that was completely different from what I meant. This had a lot to do with the fact that I did not have a clue what I was doing, but there is also a real issue here: JavaScript is ridiculously liberal in what it allows. The idea behind this design was that it would make programming in JavaScript easier for beginners. In actuality, it mostly makes finding problems in your programs harder, because the system will not point them out to you.
However, the flexibility of the language is also an advantage. It leaves space for a lot of techniques that are impossible in more rigid languages, and it can be used to overcome some of JavaScript’s shortcomings. After learning it properly, and working with it for a while, I have really learned to like this language. CoffeeScript repairs many of the confusing and cumbersome aspects of JavaScript, while keeping its underlying flexibility and beauty. It is doubleplusgood.
##### ○•○
Most chapters in this book contain quite a lot of code5. In my experience, reading and writing code is an important part of learning to program. Try to not just glance over these examples, but read them attentively and understand them. This can be slow and confusing at first, but you will quickly get the hang of it. The same goes for the exercises. Do not assume you understand them until you have actually written a working solution.
Because of the way the web works, it is always possible to look at the JavaScript programs that people put in their web-pages. This can be a good way to learn how some things are done. Because most web programmers are not ‘professional’ programmers, or consider JavaScript programming so uninteresting that they never properly learned it, a lot of the code you can find like this is of a very bad quality. When learning from ugly or incorrect code, the ugliness and confusion will propagate into your own code, so be careful who you learn from. Another source of programs are CoffeeScript projects hosted on open source services such as github.
### Basic CoffeeScript: values, variables, and control flow
Inside the computer’s world, there is only data. That which is not data, does not exist. Although all data is in essence just a sequence of bits6, and is thus fundamentally alike, every piece of data plays its own role. In CoffeeScript’s system, most of this data is neatly separated into things called values. Every value has a type, which determines the kind of role it can play. There are six basic types of values: Numbers, strings, booleans, objects, functions, and undefined values.
To create a value, one must merely invoke its name. This is very convenient. You do not have to gather building material for your values, or pay for them, you just call for one and woosh, you have it. They are not created from thin air, of course. Every value has to be stored somewhere, and if you want to use a gigantic amount of them at the same time you might run out of computer memory. Fortunately, this is only a problem if you need them all simultaneously. As soon as you no longer use a value, it will dissipate, leaving behind only a few bits. These bits are recycled to make the next generation of values.
##### ○•○
Values of the type number are, as you might have deduced, numeric values. They are written the way numbers are usually written:
``144``
Enter that in the console, and the same thing is printed in the output window. The text you typed in gave rise to a number value, and the console took this number and wrote it out to the screen again. In a case like this, that was a rather pointless exercise, but soon we will be producing values in less straightforward ways, and it can be useful to ‘try them out’ on the console to see what they produce.
This is what `144` looks like in bits7:
``````01000000 01100010 00000000 00000000 00000000 00000000 00000000 00000000
``````
The number above has 64 bits. Numbers in CoffeeScript always do. This has one important repercussion: There is a limited amount of different numbers that can be expressed. With three decimal digits, only the numbers 0 to 999 can be written, which is 10³ = 1000 different numbers. With 64 binary digits, 2⁶⁴ different numbers can be written. This is a lot, more than 10¹⁹ (a one with nineteen zeros).
Not all whole numbers below 10¹⁹ fit in a CoffeeScript number though. For one, there are also negative numbers, so one of the bits has to be used to store the sign of the number. A bigger issue is that non-whole numbers must also be represented. To do this, 11 bits are used to store the position of the decimal dot within the number.
That leaves 52 bits8. Any whole number less than 2⁵², which is over 10¹⁵, will safely fit in a CoffeeScript number. In most cases, the numbers we are using stay well below that, so we do not have to concern ourselves with bits at all. Which is good. I have nothing in particular against bits, but you do need a terrible lot of them to get anything done. When at all possible, it is more pleasant to deal with bigger things.
Fractional numbers are written by using a dot.
``9.81``
For very big or very small numbers, one can also use ‘scientific’ notation by adding an `e`, followed by the exponent of the number:
``2.998e8``
Which is 2.998 ⋅ 10⁸ = 299 800 000.
Calculations with whole numbers (also called integers) that fit in 52 bits are guaranteed to always be precise. Unfortunately, calculations with fractional numbers are generally not. Just as π (pi) cannot be precisely expressed by a finite number of decimal digits, many numbers lose some precision when only 64 bits are available to store them. This is a shame, but it only causes practical problems in very specific situations9. The important thing is to be aware of it, and treat fractional digital numbers as approximations, not as precise values.
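A quick way to see this imprecision for yourself:

```
show 0.1 + 0.2          # → 0.30000000000000004
show 0.1 + 0.2 is 0.3   # → false
```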
##### ○•○
The main thing to do with numbers is arithmetic. Arithmetic operations such as addition or multiplication take two number values and produce a new number from them. Here is what they look like in CoffeeScript:
``100 + 4 * 11``
The `+` and `*` symbols are called operators. The first stands for addition, and the second for multiplication. Putting an operator between two values will apply it to those values, and produce a new value.
Does the example mean ‘add 4 and 100, and multiply the result by 11’, or is the multiplication done before the adding? As you might have guessed, the multiplication happens first. But, as in mathematics, this can be changed by wrapping the addition in parentheses:
``(100 + 4) * 11``
For subtraction, there is the `-` operator, and division can be done with `/`. When operators appear together without parentheses, the order in which they are applied is determined by the precedence of the operators. The first example shows that multiplication has a higher precedence than addition. Division and multiplication always come before subtraction and addition. When multiple operators with the same precedence appear next to each other (`1 - 1 + 1`) they are applied left-to-right.
Try to figure out what value this produces, and then copy it to the next field to see if you were correct…
``115 * 4 - 4 + 88 / 2``
Touch or click in the following field to write in it. To evaluate some CoffeeScript, either just type it in Auto Evaluation mode or press ↑ / ↩ (Shift/Enter) while in the field in manual mode.
``# Compose a solution here``
These rules of precedence are not something you should worry about. When in doubt, just add parentheses.
There is one more arithmetic operator which is probably less familiar to you. The `%` symbol is used to represent the modulo operation. `X` modulo `Y` is the remainder of dividing `X` by `Y`. For example `314 % 100` is `14`, `10 % 3` is `1`, and `144 % 12` is `0`. Modulo has the same precedence as multiplication and division.
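Written out, the modulo examples from this paragraph look like this:

```
show 314 % 100   # → 14
show 10 % 3      # → 1
show 144 % 12    # → 0
```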
##### ○•○
The next data type is the string. Its use is not as evident from its name as with numbers, but it also fulfills a very basic role. Strings are used to represent text, the name supposedly derives from the fact that it strings together a bunch of characters. Strings are written by enclosing their content in quotes:
``'Patch my boat with chewing gum.'``
Almost anything can be put between quotes, and CoffeeScript will make a string value out of it. But a few characters are tricky. You can imagine how putting quotes between quotes might be hard.
``'The programmer pondered: "0x2b or not 0x2b"'``
CoffeeScript implements both single-quoted and double-quoted strings, which is handy when the text itself contains one kind of quote: you can enclose it in the other kind.
``"Aha! It's 43 if I'm not a bit off"``
Double quoted strings can contain interpolated values, small snippets of CoffeeScript code between `#{` and `}`. The code is evaluated and inserted into the string.
``"2 + 2 gives #{2 + 2}"``
Newlines, the things you get when you press Enter, can not be put between quotes in the normal form of strings. A string may still span multiple lines, to help avoid overly long lines in the program, but the line breaks are not part of the resulting string.
```
'Imagine if this was a
 very long line of text'
```
##### ○•○
CoffeeScript has triple-quoted strings, also known as heredocs, to make it easy to write strings that span multiple lines with the line breaks preserved in the output. Indentation before the quotes is ignored, so the following lines can be aligned nicely.
```
'''First comes A
   then comes B'''
```
The triple double quoted variant allows for interpolated values.
``"""Math 101: 1 + 1 --- #{1 + 1}"""``
Still, to be able to have special characters in a string, the following trick is used: whenever a backslash (‘`\`’) is found inside quoted text, it indicates that the character after it has a special meaning. A quote that is preceded by a backslash will not end the string, but be part of it. When an ‘`n`’ character occurs after a backslash, it is interpreted as a newline. Similarly, a ‘`t`’ after a backslash means a tab character.
``'This is the first line\nAnd this is the second'``
There are of course situations where you want a backslash in a string to be just a backslash, not a special code. If two backslashes follow each other, they will collapse right into each other, and only one will be left in the resulting string value:
``'A newline character is written like "\\n".'``
##### ○•○
Strings can not be divided, multiplied, or subtracted. The `+` operator can be used on them, though. It does not add, but it concatenates: it glues two strings together.
``'con' + 'cat' + 'e' + 'nate'``
There are more ways of manipulating strings, but these are discussed later.
##### ○•○
Not all operators are symbols; some are written as words. One example is the `typeof` operator, which produces a string value naming the type of the value you give it.
``typeof 4.5``
The other operators we saw all operated on two values; `typeof` takes only one. Operators that use two values are called binary operators, while those that take one are called unary operators. The minus operator can be used both as a binary and as a unary operator¹⁰:
``-(10 - 2)``
##### ○•○
Then there are values of the boolean type. There are two of these: `true` and `false`. CoffeeScript has some aliases for them: `true` can be written as `yes` or `on`. `false` as `no` or `off`. These alternatives can in some cases make a program easier to read. Here is one way to produce a `true` value:
``3 > 2``
And `false` can be produced like this:
``3 < 2``
I hope you have seen the `>` and `<` signs before. They mean, respectively, ‘is greater than’ and ‘is less than’. They are binary operators, and the result of applying them is a boolean value that indicates whether they hold in this case. You can chain comparisons to test if something is within an interval. These comparisons give respectively `true` and `false`:
``100 < 115 < 200``
``100 < 315 < 200``
Strings can be compared in the same way:
``'Aardvark' < 'Zoroaster'``
The way strings are ordered is more or less alphabetic. More or less… Uppercase letters are always ‘less’ than lowercase ones, so `'Z' < 'a'` is `true`, and non-alphabetic characters (‘`!`’, ‘`@`’, etc.) are also included in the ordering. The actual comparison is based on the Unicode standard. This standard assigns a number to virtually every character one would ever need, including characters from Greek, Arabic, Japanese, Tamil, and so on. Having such numbers is practical for storing strings inside a computer — you can represent them as a list of numbers. When comparing strings, CoffeeScript just compares the numbers of the characters inside the string, from left to right.
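A few comparisons that follow from this ordering. Note the effect of the uppercase-before-lowercase rule in the second line:

```
show 'aardvark' < 'zoroaster' # true, plain alphabetic order
show 'Z' < 'a'                # true, uppercase letters come first
show '!' < 'A'                # true, '!' has a smaller Unicode number
```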
Other similar operators are `>=` (‘is greater than or equal to’), `<=` (‘is less than or equal to’), `==` (‘is equal to’), and `!=` (‘is not equal to’). Equal to can also be written in text as `is` and not equal to as `isnt`.
``'Itchy' isnt 'Scratchy'``
##### ○•○
There are also some useful operations that can be applied to boolean values themselves. CoffeeScript supports three logical operators: `and`, `or`, and `not`. These can be used to ‘reason’ about booleans.
The logical `and` operator can also be written as `&&`. It is a binary operator, and its result is only `true` if both of the values given to it are `true`.
``true and false``
Logical `or` with alias `||`, is `true` if either of the values given to it is `true`:
``true or false``
`not` can be written as an exclamation mark, `!`. It is a unary operator that flips the value given to it: `!true` is `false`, and `not false` is `true`.
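Two quick checks of that flipping behaviour:

```
show not false # true
show !true     # false
```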
#### Exercise 1
``((4 >= 6) || ('grass' != 'green')) and !(((12 * 2) == 144) and true)``
Is this true? For readability, there are a lot of unnecessary parentheses in there. This simpler version means the same thing:
``(4 >= 6 or 'grass' isnt 'green') and not(12 * 2 is 144 and true)``
``# Compose a solution here``
##### View Solution
It is not always obvious when parentheses are needed. In practice, one can usually get by with knowing that, of the operators we have seen so far, `or` has the lowest precedence, then comes `and`, then the comparison operators (`>`, `==`, etcetera), and then the rest. This order has been chosen such that, in simple cases, as few parentheses as possible are necessary.
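By my reading of these rules, both of the following lines evaluate to `true` without needing any parentheses:

```
show 3 > 2 and 2 > 1     # read as (3 > 2) and (2 > 1)
show false or 1 + 1 is 2 # read as false or ((1 + 1) is 2)
```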
##### ○•○
All the examples so far have used the language like you would use a pocket calculator. Make some values and apply operators to them to get new values. Creating values like this is an essential part of every CoffeeScript program, but it is only a part. A piece of code that produces a value is called an expression. Every value that is written directly (such as `22` or `'psychoanalysis'`) is an expression. An expression between parentheses is also an expression. And a binary operator applied to two expressions, or a unary operator applied to one, is also an expression.
There are a few more ways of building expressions, which will be revealed when the time is ripe.
There exists a unit that is bigger than an expression: the statement. A program is built as a list of statements. Most statements end with a newline, although a statement can stretch over many lines. Statements can also end with a semicolon (`;`); in CoffeeScript the semicolon is mostly used to place multiple statements on the same line. The simplest kind of statement is an expression standing on its own. This is a program:
``1; not false``
It is a useless program. An expression can be content to just produce a value, but a statement only amounts to something if it somehow changes the world. It could print something to the screen — that counts as changing the world — or it could change the internal state of the program in a way that will affect the statements that come after it. These changes are called ‘side effects’. The statements in the example above just produce the values `1` and `true`, and then immediately throw them into the bit bucket¹¹. This leaves no impression on the world at all, and is not a side effect.
##### ○•○
How does a program keep an internal state? How does it remember things? We have seen how to produce new values from old values, but this does not change the old values, and the new value has to be immediately used or it will dissipate again. To catch and hold values, CoffeeScript provides a thing called a variable.
``caught = 5 * 5``
A variable always has a name, and it can point at a value, holding on to it. The statement above creates a variable called `caught` and uses it to grab hold of the number that is produced by multiplying `5` by `5`.
After running the above program, you can type the word `caught` into the console, and it will retrieve the value `25` for you. The name of a variable is used to fetch its value. `caught + 1` also works. A variable name can be used as an expression, and thus can be part of bigger expressions.
Assigning a value to a new variable name with the `=` operator creates that variable. Variable names can be almost any word, but they may not include spaces. Digits can be part of a variable name — `catch22` is a valid name — but the name must not start with one. The characters ‘`$`’ and ‘`_`’ can be used in names as if they were letters, so `$_$` is a valid variable name.
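A few legal names in action; these particular names are chosen purely for illustration:

```
catch22 = 'digits are fine, just not at the start'
$_$ = '$ and _ count as letters'
show catch22
show $_$
```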
When a variable points at a value, that does not mean it is tied to that value forever. At any time, the `=` operator can be used on existing variables to yank them away from their current value and make them point to a new one.
``caught = 4 * 4``
##### ○•○
You should imagine variables as tentacles, rather than boxes. They do not contain values, they grasp them — two variables can refer to the same value. Only the values that the program still has a hold on can be accessed by it. When you need to remember something, you grow a tentacle to hold on to it, or re-attach one of your existing tentacles to a new value:
To remember the number of dollars that Luigi still owes you, you could use a variable… Then, every time Luigi pays something back, this amount can be decremented by giving the variable a new number:
```
luigiDebt = 140
luigiDebt = luigiDebt - 35
```
The collection of variables and their values that exist at a given time is called the environment. When a program starts up, this environment is not empty. It always contains a number of standard variables.
If you install CoffeeScript and use the `coffee` command line program to execute a CoffeeScript program, or run the interactive environment with `coffee -r ./prelude`, then the environment is called `global`. You can view it by pressing →∣ (‘Tab’). When your browser loads a page, it creates a new environment called `window` and attaches the standard values to it. The variables created and modified by programs on that page survive until the browser goes to a new page.¹²
##### ○•○
A lot of the values provided by the standard environment have the type ‘function’. A function is a piece of program wrapped in a value. Generally, this piece of program does something useful, which can be evoked using the function value that contains it. In the development environment, the variable `show` holds a function that shows a message in the terminal or command line window. You can use it like this:
``show 'Also, your hair is on fire.'``
Executing the code in a function is called invoking or applying it. The notation for doing this is the function name followed by a parenthesised, comma-separated list of values; in CoffeeScript the parentheses can be left out when arguments are passed. Every expression that produces a function value can be invoked by putting parentheses after it. Here, the string value is given to the function, which uses it as the text to show in the console window. Values given to functions are called parameters or arguments. `show` needs only one of them, but other functions might need a different number.
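Both call styles do the same thing:

```
show('Called with parentheses')
show 'Called without them'
```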
##### ○•○
Showing a message is a side effect. A lot of functions are useful because of the side effects they produce. It is also possible for a function to produce a value, in which case it does not need to have a side effect to be useful. For example, there is a function `Math.max`, which takes two arguments and gives back the biggest of the two:
``show Math.max 2, 4``
When a function produces a value, it is said to return it. Because things that produce values are always expressions in CoffeeScript, function calls can be used as a part of bigger expressions:
```
show 100 + Math.max 7, 4
show Math.max(7, 4) + 100
show Math.max(7, 4 + 100)
show Math.max 7, 4 + 100
```
When parentheses are left out of a function call, CoffeeScript implicitly inserts them, stretching to the end of the line. In the example above this means that the first two lines give the answer `107` and the last two `104`. So, depending on your intention, you may have to use parentheses to get the result you want. Functions↓ discusses writing your own functions.
##### ○•○
As the previous examples illustrated, `show` can be useful for showing the result of an expression. `show` is not a standard CoffeeScript function — browsers do not provide it for you — it is made available by the interactive environment. When you create your own programs in the web browser, you can instead use `alert` to pop up a dialog with a message, or `console.log` to direct output to your browser's built-in console.
We will continue in the CoffeeScript environment. `show` tries to display its argument the way it would look in a program, which can give more information about the type of a value. In an interactive console, started with `coffee -r ./prelude`, you can explore the environment or you can try it right here:
```
show Math.PI
show console if console?
show showDocument
```
What the output means is not so important for now. `show` is a tool that can give you details on the things in your programs, which can be handy later, if something does not behave the way you had expected. A variant of `show` is `view`; it returns the value it displays. That can be helpful when you want to see what is going on inside a long expression.
``show 1 + 2 + 3 + 4 + 5 + view 6 + 7``
##### ○•○
The environment provided by browsers contains a few more functions for popping up windows. You can ask the user an OK/Cancel question using `confirm`. This returns a boolean: `true` if the user presses ‘OK’, and `false` if he presses ‘Cancel’. The interactive environment has a similar `confirm` function where the user answers yes or no to a question. It displays the question and its buttons inline in the output.
Blocking a user interface with a question should only happen when getting an answer right now is really critical. None of the questions on this page are; you can continue reading without answering any of them. In a CoffeeScript server, a question should normally not stop the process to wait for a user to reply. Instead, the program continues running the code that follows the function call. Eventually, when the user has answered the question, a function given as an argument is called with the answer. This involves a bit of magic that will be explained in Functions↓. While it is more complicated for this use, we will see in later chapters that it makes perfect sense for web applications with many users and in responsive user interfaces.
``confirm 'Shall we, then?', (answer) -> show answer``
#### Prompt
`prompt` can be used to ask an ‘open’ question. The first argument is the question, the second one is the text that the user starts with. A line of text can be typed into the window, and the function will — in a browser's default implementation — return this as a string. As with `confirm`, the interactive environment offers a similar function, which takes a third argument that will receive the answer.
``prompt 'Tell us everything you know', '...', (answer) -> show 'So you know: ' + answer``
##### ○•○
It is possible to give almost every variable in the environment a new value. This can be useful, but also dangerous. If you give `show` the value `8`, you will not be able to show things anymore. You can refresh this web page to start over. If your browser supports offline web apps, you can even refresh this page when you are somewhere without internet; this works, for example, on iOS 5. Some functions like `confirm` and `prompt` also work when you run your program from a file, but they interact poorly with the server environment. Should you try these examples there, then fortunately you can stop a server program with `CTRL-C` and pick up where you left off.
##### ○•○
One-line programs are not very interesting. When you put more than one statement into a program, the statements are, predictably, executed one at a time, from top to bottom.
```
prompt 'Pick a number', '', (answer) ->
  theNumber = Number answer
  show 'Your number is the square root of ' + (theNumber * theNumber)
```
The function `Number` converts a value to a number, which is needed in this case because the answer from `prompt` is a string value. There are similar functions called `String` and `Boolean` which convert values to those types.
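A few conversions to try. The underlying rules are those of JavaScript, so, for example, the empty string converts to `false`:

```
show Number '42'  # the number 42
show String 42    # the string '42'
show Boolean ''   # false
show Boolean 'hi' # true
```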
##### ○•○
Consider a program that prints out all even numbers from 0 to 12. One way to write this is:
```
show 0
show 2
show 4
show 6
show 8
show 10
show 12
```
That works, but the idea of writing a program is to make something less work, not more. If we needed all even numbers below 1000, the above would be unworkable. What we need is a way to automatically repeat some code.
```
currentNumber = 0
while currentNumber <= 12
  show currentNumber
  currentNumber = currentNumber + 2
```
You may have seen `while` in the Introduction↑ chapter. A statement starting with the word `while` creates a loop. A loop is a disturbance in the sequence of statements; it may cause the program to repeat some statements multiple times. In this case, the word `while` is followed by an expression, which is used to determine whether the loop will loop or finish. As long as the boolean value produced by this expression is `true`, the code in the loop is repeated. As soon as it is `false`, the program drops to the bottom of the loop and continues as normal.
The variable `currentNumber` demonstrates the way a variable can track the progress of a program. Every time the loop repeats, it is incremented by `2`, and at the beginning of every repetition, it is compared with the number `12` to decide whether to keep on looping.
The third part of a `while` statement is another statement. This is the body of the loop, the action or actions that must take place multiple times. Indentation is used to group statements into blocks. To the world outside the block, a block counts as a single statement. In the example, this is used to include in the loop both the call to `show` and the statement that updates `currentNumber`.
If we did not have to print the numbers, the program could have been:
```
counter = 0
while counter <= 12 then counter = counter + 2
```
Here, `counter = counter + 2` is the statement that forms the body of the loop. The `then` keyword separates the boolean from the body, so both can be on the same line.
#### Exercise 2
Use the techniques shown so far to write a program that calculates and shows the value of 2¹⁰ (2 to the 10th power). You are, obviously, not allowed to use a cheap trick like just writing `2 * 2 * ...`
If you are having trouble with this, try to see it in terms of the even-numbers example. The program must perform an action a certain amount of times. A counter variable with a `while` loop can be used for that. Instead of printing the counter, the program must multiply something by 2. This something should be another variable, in which the result value is built up.
Do not worry if you do not quite see how this would work yet. Even if you perfectly understand all the techniques this chapter covers, it can be hard to apply them to a specific problem. Reading and writing code will help develop a feeling for this, so study the solution, and try the next exercise.
```
# Compose a solution here
runOnDemand ->
  # Start lines below with at least 2 spaces
```
##### View Solution
`NOTE` When solving the exercises, beware of never-ending loops (loops whose condition never changes). A problem loop can look like this: `while counter then counter`. Since `counter` is never changed, the loop will go on forever. If one is present, your browser will become unresponsive, and you will have to reload the page or restart the browser.
Such a loop can occur accidentally while entering a solution. To make the problem less likely, write your code after the line `runOnDemand ->`. Start each line with two spaces (or more). When your solution is ready, press the `Run` button.
#### Exercise 3
With some slight modifications, the solution to the previous exercise can be made to draw a triangle. And when I say ‘draw a triangle’ I mean ‘print out some text that almost looks like a triangle when you squint’.
Print out ten lines. On the first line there is one ‘#’ character. On the second there are two. And so on.
How does one get a string with X ‘#’ characters in it? One way is to build it every time it is needed with an ‘inner loop’ — a loop inside a loop. A simpler way is to reuse the string that the previous iteration of the loop used, and add one character to it.
```
# Compose a solution here
runOnDemand ->
  # Start lines below with at least 2 spaces
```
##### View Solution
You will have noticed the spaces I put in front of some statements. These are required: the level of indentation decides which block a line belongs to. The role of indentation inside blocks is to make the structure of the code clearer to a reader. Because new blocks can be opened inside other blocks, it can be hard to see where one block ends and another begins if they are not indented. When lines are indented, the visual shape of a program corresponds to the shape of the blocks inside it. I like to use two spaces for every open block, but tastes differ. If a line becomes too long, you can split it between two words, or place a `\` at the end of the line and continue on the next.
##### ○•○
The uses of `while` we have seen so far all show the same pattern. First, a ‘counter’ variable is created. This variable tracks the progress of the loop. The `while` itself contains a check, usually to see whether the counter has reached some boundary yet. Then, at the end of the loop body, the counter is updated.
A lot of loops fall into this pattern. For this reason, CoffeeScript, and similar languages, also provide a slightly shorter and more comprehensive form:
``for number in [0..12] by 2 then show number``
This program is exactly equivalent to the earlier even-number-printing example. The only change is that all the statements related to the ‘state’ of the loop are now on one line. The numbers in square brackets form a range: `[4..7]` is a list of numbers starting with the first number and going up one by one to the last. A range with two dots includes the last number, `(4,5,6,7)`; with three dots, `[4...7]`, the last number is excluded, `(4,5,6)`. The size of each step can be changed with the `by` keyword, so `[2..6] by 2` steps through `(2,4,6)`. Ranges can also count down, when the first number is the larger one, and they may involve negative or floating point numbers.
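A few range variants written out; the last line uses a comprehension, as in the example above, to show the stepping with `by`:

```
show [4..7]  # [ 4, 5, 6, 7 ] - two dots include the end
show [4...7] # [ 4, 5, 6 ]    - three dots exclude it
show [7..4]  # [ 7, 6, 5, 4 ] - counting down
show (n for n in [2..6] by 2) # [ 2, 4, 6 ]
```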
The `number` in the `for` comprehension takes on each successive value from the range, one per turn through the loop. The value is available in the loop body, where it can be used in computations or, as here, in `show number`. In most cases this is shorter and clearer than a `while` construction.
The `for` comprehension can take on other forms as well.
```
# For with indented body
for number in [0..12] by 2
  show number
```
Another is that the body of the loop can be given before the `for` statement.
```
# For with prepended body
show number for number in [0..12] by 2
```
##### ○•○
The lines that start with ‘#’ in the previous example might have looked a bit suspicious to you. It is often useful to include extra text in a program. The most common use for this is adding some explanations in human language to a program.
```
# The variable counter, which is about to be defined,
# is going to start with a value of 0, which is zero.
counter = 0
# Now, we are going to loop, hold on to your hat.
while counter < 100 # counter is less than one hundred
  ###
  Every time we loop, we INCREMENT the value of counter
  Seriously, we just add one to it.
  ###
  counter++
# And then, we are done.
```
This kind of text is called a comment. The rules are like this: ‘`#`’ starts a comment, which goes on until the end of the line. ‘`###`’ starts another kind of comment, which goes on until another ‘`###`’ is found, so it can stretch over multiple lines.
As you can see, even the simplest programs can be made to look big, ugly, and complicated by simply adding a lot of comments to them.
##### ○•○
I have been using some rather odd capitalisation in some variable names. Because you can not have spaces in these names — the computer would read them as two separate variables — your choices for a name that is made of several words are more or less limited to the following:
```
fuzzylittleturtle
FuzzyLittleTurtle
fuzzy_little_turtle
fuzzyLittleTurtle
```
The first one is hard to read. Personally, I like the one with the underscores, though it is a little painful to type. However, since CoffeeScript evolved from JavaScript, most CoffeeScript programmers follow the JavaScript convention, which is the last one. It's the one used by the standard JavaScript functions. It is not hard to get used to little things like that, so I will just follow the crowd and capitalise the first letter of every word after the first.
In a few cases, such as the `Number` function, the first letter of a variable is also capitalised. This was done to mark this function as a constructor. What a constructor is will become clear in Object Orientation↓. For now, the important thing is not to be bothered by this apparent lack of consistency.
##### ○•○
Note that names that have a special meaning, such as `while` and `for`, may not be used as variable names. These are called keywords. There are also a number of words that are ‘reserved for use’ in future versions of JavaScript and CoffeeScript. These are also officially not allowed to be used as variable names, though some environments do allow them. The full list in Reserved Words↓ is rather long.
Do not worry about memorising these for now, but remember that this might be the problem when something does not work as expected. In my experience, `char` (to store a one-character string) and `class` are the most common names to accidentally use.
#### Exercise 4
Rewrite the solutions of the previous two exercises to use `for` instead of `while`.
``# Compose a solution here``
##### View Solution
A program often needs to ‘update’ a variable with a value that is based on its previous value. For example `counter = counter + 1`. CoffeeScript provides a shortcut for this: `counter += 1`. This also works for many other operators, for example `result *= 2` to double the value of `result`, or `counter -= 1` to count downwards. `counter++` and `counter--` are shorter versions of `counter += 1` and `counter -= 1`.
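All of the shortcuts in sequence:

```
result = 3
result *= 2 # result is now 6
result -= 1 # 5
result++    # 6
result--    # 5
show result
```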
##### ○•○
Loops are said to affect the control flow of a program. They change the order in which statements are executed. In many cases, another kind of flow is useful: skipping statements.
We want to show all numbers between 0 and 20 which are divisible both by 3 and by 4.
```
for counter in [0..20]
  if counter % 3 is 0 and counter % 4 is 0
    show counter
```
The keyword `if` is not too different from the keyword `while`: It checks the condition it is given, and executes the statement after it based on this condition. But it does this only once, so that the statement is executed zero or one time.
The trick with the modulo (`%`) operator is an easy way to test whether a number is divisible by another number. If it is, the remainder of their division, which is what modulo gives you, is zero.
If we wanted to print all of the numbers between 0 and 20, but put parentheses around the ones that are not divisible by 4, we can do it like this:
```
for counter in [0..20]
  if counter % 4 is 0
    show counter
  if counter % 4 isnt 0
    show '(' + counter + ')'
```
But now the program has to determine whether `counter` is divisible by `4` twice. The same effect can be achieved by appending an `else` part to the `if` statement. The `else` branch is executed only when the `if`'s condition is false.
```
for counter in [0..20]
  if counter % 4 is 0
    show counter
  else
    show '(' + counter + ')'
```
To stretch this trivial example a bit further, we now want to print these same numbers, but add two stars after them when they are greater than 15, one star when they are greater than 10 (but not greater than 15), and no stars otherwise.
```
for counter in [0..20]
  if counter > 15
    show counter + '**'
  else if counter > 10
    show counter + '*'
  else
    show counter
```
This demonstrates that you can chain `if` statements together. In this case, the program first looks if `counter` is greater than `15`. If it is, the two stars are printed and the other tests are skipped. If it is not, we continue to check if `counter` is greater than `10`. Only if `counter` is also not greater than `10` does it arrive at the last `show` statement.
#### Exercise 5
Write a program to ask yourself, using `prompt`, what the value of 2 + 2 is. If the answer is ‘4’, use `show` to say something praising. If it is ‘3’ or ‘5’, say ‘Almost!’. In other cases, say something mean. Refer back↑ for the little bit of magic needed with `prompt`.
``# Compose a solution here``
##### View Solution
The logic tests in a program can become complicated. To help write conditions clearly, CoffeeScript provides a couple of variations on the `if` statement: the body of an `if` statement can be placed before the condition, and an `if not` can be written as `unless`.
```
fun = on
show 'The show is on!' unless fun is off
```
##### ○•○
When a loop does not always have to go all the way through to its end, the `break` keyword can be useful. It immediately jumps out of the current loop, continuing after it. This program finds the first number that is greater than 20 and divisible by 7:
```
current = 20
loop
  if current % 7 is 0
    break
  current++
show current
```
The `loop` construct does not have a part that checks for the end of the loop. It is the same as `while true`. This means that it is dependent on the `break` statement inside it to ever stop. The same program could also have been written as simply…
```
current = 20
current++ until current % 7 is 0
show current
```
In this case, the body of the loop comes before the loop test. The `until` keyword is similar to the `unless` keyword, but translates into `while not`. The only effect of the loop is to increment the variable `current` to its desired value. But I needed an example that uses `break`, so pay attention to the first version too.
#### Exercise 6
Pick a lucky number from 1 to 6 then keep rolling a simulated die, until your lucky number comes up. Count the number of rolls. Use a loop and optionally a `break`. Casting a die can be simulated with:
``roll = Math.floor Math.random() * 6 + 1``
Note that `loop` is the same as `while true`, and both can be used to create a loop that does not end of its own accord. Writing `while true` is a useful trick but a bit silly: you ask the program to loop as long as `true` is `true`. The preferred way is to write `loop`.
``# Compose a solution here``
##### View Solution
In the second solution to the previous exercise `roll` has not been set to a value the first time through the loop. It is only assigned a value in the next statement. What happens when you take the value of this variable?
```
show mysteryVariable
mysteryVariable = 'nothing'
```
In terms of tentacles, this variable ends in thin air; it has nothing to grasp. When you ask for the value of an empty place, you get a special value named `undefined`. Functions that do not return an interesting value, such as `show`, also return `undefined`. Most things in CoffeeScript do return a value, though, even most statements.
``show show 'I am a side effect.'``
There is a similar value, `null`, whose meaning is ‘this variable is defined, but it does not have a value’. The difference in meaning between `undefined` and `null` is mostly academic, and usually not very interesting. In practical programs, it is often necessary to check whether something ‘has a value’. In these cases, the expression `something?` may be used; the `?` is called the existential operator. It returns `true` unless something is `null` or `undefined`. It also comes in an existential assignment form, `?=`, which only assigns to a variable that is either `null` or `undefined`.
```
show iam ? undefined
iam ?= 'I want to be'
show iam
iam ?= 'I am already'
show iam if iam?
```
##### ○•○
Which brings us to another subject… If you have been exposed to JavaScript then you know that comparisons of different types can be tricky.
```
show false == 0
show '' == 0
show '5' == 5
```
In JavaScript all these give the value `true` — not so in CoffeeScript where they are all `false`. When comparing values that have different types, you have to convert them into compatible types first. We saw this earlier with `Number` so `Number('5') == 5` gives `true`. The behaviour of `is`/`==` in CoffeeScript is the same as `===` in JavaScript.
```
show `null === undefined`
show `false === 0`
show `'' === 0`
show `'5' === 5`
```
All these are `false`. You can embed JavaScript in CoffeeScript by surrounding the JavaScript code with backquotes. Using JavaScript when you have CoffeeScript is similar to embedding assembly language in a high-level language. It should be something you very rarely need to do.
##### ○•○
There are some other situations that cause automatic type conversions to happen. If you add a non-string value to a string, the value is automatically converted to a string before it is concatenated. If you multiply a number and a string, CoffeeScript tries to make a number out of the string.
```
show 'Apollo' + 5
show null + 'ify'
show '5' * 5
show 'strawberry' * 5
```
The last statement prints `NaN`, which is a special value. It stands for ‘not a number’, and is of type number (which might sound a little contradictory). In this case, it refers to the fact that a strawberry is not a number. All arithmetic operations on the value `NaN` result in `NaN`, which is why multiplying it by `5`, as in the example, still gives a `NaN` value. Also, and this can be disorienting at times, `NaN == NaN` equals `false`; checking whether a value is `NaN` can be done with the `isNaN` function.
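The odd behaviour in code:

```
show NaN == NaN             # false, NaN is not even equal to itself
show isNaN NaN              # true
show isNaN 'strawberry' * 5 # true
```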
These automatic conversions can be very convenient, but they are also rather weird and error-prone. Even though `+` and `*` are both arithmetic operators, they behave completely differently in the example. In my own code, I use `+` on non-strings a lot, but I make it a point not to use `*` and the other numeric operators on string values.
Converting a number to a string is always possible and straightforward, but converting a string to a number may not even work (as in the last line of the example). We can use `Number` to explicitly convert the string to a number, making it clear that we might run the risk of getting a `NaN` value.
``show Number('5') * 5``
##### ○•○
When we discussed the boolean operators `and` / `&&` and `or` / `||` earlier, I claimed they produced boolean values. This turns out to be a bit of an oversimplification. If you apply them to boolean values, they will indeed return booleans. But they can also be applied to other kinds of values, in which case they will return one of their arguments.
What `or` really does is this: It looks at the value to the left of it first. If converting this value to a boolean would produce `true`, it returns this left value, and otherwise it returns the one on its right. Check for yourself that this does the correct thing when the arguments are booleans. Why does it work like that? It turns out this is very practical. Consider this example:
```
prompt 'What is your name?', '', (input) ->
  show 'Well hello ' + (input or 'dear')
```
If the user confirms without giving a name, the variable `input` will hold the value `''`. This would give `false` when converted to a boolean. The expression `input or 'dear'` can in this case be read as ‘the value of the variable `input`, or else the string `'dear'`’. It is an easy way to provide a ‘fallback’ value.
The `and` operator works similarly, but the other way around. When the value to its left is something that would give `false` when converted to a boolean, it returns that value, and otherwise it returns the value on its right.
Another property of these two operators is that the expression to their right is only evaluated when necessary. In the case of `true or X`, no matter what `X` is, the result will be `true`, so `X` is never evaluated, and if it has side effects they never happen. The same goes for `false and X`.
```
false or show 'I am happening!'
true or show 'Not me.'
```
### Functions
A program often needs to do the same thing in different places. Repeating all the necessary statements every time is tedious and error-prone. It would be better to put them in one place, and have the program take a detour through there whenever necessary. This is what functions were invented for: They are canned code that a program can go through whenever it wants. Putting a string on the screen requires quite a few statements, but when we have a `show` function we can just write `show 'Aleph'` and be done with it.
To view functions merely as canned chunks of code does not do them justice, though. When needed, they can play the role of pure functions, algorithms, indirections, abstractions, decisions, modules, continuations, data structures, and more. Being able to use functions effectively is a necessary skill for any kind of serious programming. This chapter provides an introduction to the subject; Functional Programming↓ discusses the subtleties of functions in more depth.
##### ○•○
Pure functions, for a start, are the things that were called functions in the mathematics classes that I hope you have been subjected to at some point in your life. Taking the cosine or the absolute value of a number is a pure function of one argument. Addition is a pure function of two arguments.
The defining properties of pure functions are that they always return the same value when given the same arguments, and never have side effects. They take some arguments, return a value based on these arguments, and do not monkey around with anything else.
In CoffeeScript, addition is an operator, but it could be wrapped in a function like this (and as pointless as this looks, we will come across situations where it is actually useful):
```
add = (a, b) -> a + b
add 2, 2
```
`add` is the name of the function. `a` and `b` are the names of the two arguments. `a + b` is the body of the function.
The construct `->` is used when creating a new function. When it is assigned to a variable name, the resulting function will be stored under this name. Before the `->` comes a list of argument names in parentheses. If a function does not take any arguments, then the parentheses are not needed. After the `->` follows the body of the function. The body can follow the `->` on the same line or indented on the following line.
The last statement in a function determines its value. The keyword `return`, followed by an expression, can also be used to determine the value the function returns. When control comes across a `return` statement, it immediately jumps out of the current function and gives the returned value to the code that called the function. A `return` statement without an expression after it will cause the function to return `undefined`.
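Here is a minimal sketch of an early `return`; the function and its name are made up for this example:

```
describe = (number) ->
  if number < 0
    return 'negative' # jumps out of the function immediately
  'zero or more'      # last statement, used when no early return happened

show describe -4 # negative
show describe 4  # zero or more
```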
A body can, of course, have more than one statement in it. Here is a function for computing powers (with positive, integer exponents):
```
power = (base, exponent) ->
  result = 1
  for count in [0...exponent]
    result *= base
  result

power 2, 10
```
If you solved Exercise 2↑, this technique for computing a power should look familiar. Creating a variable (`result`) and updating it are side effects. Did I not just say pure functions had no side effects? A variable created inside a function exists only inside the function. This is fortunate, or a programmer would have to come up with a different name for every variable needed throughout a program. Because `result` only exists inside `power`, the changes to it last only until the function returns, and from the perspective of the code that calls it there are no side effects.
#### Exercise 7
Write a function called `absolute`, which returns the absolute value of the number it is given as its argument. The absolute value of a negative number is the positive version of that same number, and the absolute value of a positive number (or zero) is that number itself.
``# Compose a solution here``
##### View Solution
Pure functions have two very nice properties. They are easy to think about, and they are easy to re-use.
If a function is pure, a call to it can be seen as a thing in itself. When you are not sure that it is working correctly, you can test it by calling it directly from the console, which is simple because it does not depend on any context¹³. It is easy to make these tests automatic — to write a program that tests a specific function. Non-pure functions might return different values based on all kinds of factors, and have side effects that might be hard to test and think about.
Because pure functions are self-sufficient, they are likely to be useful and relevant in a wider range of situations than non-pure ones. Take `show`, for example. This function’s usefulness depends on the presence of a special place on the screen for printing output. If that place is not there, the function is useless. We can imagine a related function, let’s call it `format`, that takes a value as an argument and returns a string that represents this value. This function is useful in more situations than `show`.
Of course, `format` does not solve the same problem as `show`, and no pure function is going to be able to solve that problem, because it requires a side effect. In many cases, non-pure functions are precisely what you need. In other cases, a problem can be solved with a pure function but the non-pure variant is much more convenient or efficient.
Thus, when something can easily be expressed as a pure function, write it that way. But never feel dirty for writing non-pure functions.
##### ○•○
How do we make sure that a function gives the result that we expect? In the last exercise we tried with `absolute -144` and got the answer we wanted. For a simple function that is likely enough, but functions quickly become much more complicated and it becomes difficult to predict the output just from reading the program text. To reassure ourselves that `absolute` actually works many more test cases are needed. But typing test case after test case very quickly becomes very boring — so there must be a better way…
The exercise described the properties that the function should have as: ‘The absolute value of a negative number is the positive version of that same number, and the absolute value of a positive number (or zero) is that number itself.’ This description can be turned into properties that the computer can test for us.
The `testAbsolute` function calls on `testPure` in `qc` — `qc` stands for quick check¹⁴ — and tells it in the first argument to test `absolute`. The next argument, `arbInt`, declares that `absolute` takes an arbitrary integer as its only argument. Don't worry about the brackets and dots; they will be explained in the next chapters. Calling `testAbsolute` with a descriptive name and a property is all that is needed to say what to expect of `absolute`.
```
runOnDemand -> # Create the Run button further down
  testAbsolute = (name, property) ->
    testPure absolute, [arbInt], name, property
  testAbsolute 'returns positive integers', (c, arg, result) ->
    result >= 0
  testAbsolute 'positive returns positive', (c, arg, result) ->
    c.guard arg >= 0
    result is arg
  testAbsolute 'negative returns positive', (c, arg, result) ->
    c.guard arg < 0
    result is -arg
  test()
```
First, from the description of the function, `absolute` should clearly return a value larger than or equal to zero. That is what `result >= 0` in the property says. A property here is a function that is given three arguments: a test case (called `c`, since `case` is a reserved word), the argument that `absolute` was called with, and the `result` it gave back. Based on these values, the property returns true or false, depending on whether the function conformed to it.
The description says: ‘the absolute value of a positive number (or zero) is that number itself.’ So this property only needs positive numbers; a call to `guard` tells `qc` to disregard values that are not positive. The property then checks that the `result` is the same as the argument. Almost the same goes for negative arguments, except we use unary minus in the property.
So far, only the desired properties of the function have been declared; no tests have been performed. Calling `test()` at the end starts the testing process when you press the ‘Run’ button, and `qc` then generates test data and checks the properties.
```
My results:
Pass: returns positive integers (pass=100, invalid=0)
Pass: positive returns positive (pass=100, invalid=103)
Pass: negative returns positive (pass=100, invalid=90)
```
That was nice. `absolute` has passed 300 test cases in the blink of an eye. The invalid counts come from the `guard` calls that throw away test cases. If you want to see the test values, you can insert a `show c.args` in the property.
So what does it look like if a test fails? The `power` function from earlier in this chapter is a good candidate for a function that will fail to live up to expectations. We could reasonably expect that `power` will behave the same as the standard `Math.pow` function — but only for integers of course.
```
runOnDemand ->
  testPure power, [arbInt, arbInt],
    'power equals Math.pow for integers',
    (c, base, exponent, result) ->
      result is c.note Math.pow base, exponent
  test()
```
Calling `testPure` and describing `power` as a function with two integer arguments does that. The property then declares that the `result` from `power` is the same as that from `Math.pow`. To see the value that `Math.pow` returns, a call is made to `c.note`, which registers the value it is given.
```
My results:
fail: power equals Math.pow for integers
pass=9, invalid=0
shrinkedArgs=3,-2,9,0.1111111111111111
Failed case:
[ -9,
  -9,
  -387420489,
  -2.581174791713197e-9 ]
```
That failed, and `qc` shows why. The `-9` and `-9` in the last lines are the arguments that `qc` generated for the test case. The `-387420489` is the result from `power`. The last number is the value noted from `Math.pow`; it is a close approximation of the correct answer (-9)⁻⁹ = -1/387420489.
```
runOnDemand ->
  testPure power, [arbWholeNum, arbWholeNum],
    'power equals Math.pow for positive integers',
    (c, base, exponent, result) ->
      result is c.note Math.pow base, exponent
  test()
```
The expectation that `power` works for all integers was too broad, but surely the functions will work the same for positive integers, or what do you think? Instead of using `guard` to throw away test cases as before, the description of the arguments can be changed. Many different argument types are included in `qc` (ranges, strings, dates, lists, …), and there is also one for positive integers, `arbWholeNum`.
```
My results:
fail: power equals Math.pow for positive integers
pass=28, invalid=0
shrinkedArgs=9,18,150094635296999100,150094635296999140
Failed case:
[ 27,
  27,
  4.434264882430377e+38,
  4.434264882430378e+38 ]
```
Well, it passed 28 tests before a test case for 27²⁷ gave a difference in the last digit¹⁵. Notice the line with `shrinkedArgs`. When a test fails, `qc` tries to find a simpler test that reproduces the problem. Simpler can mean shorter strings or lists, or, as in this case, smaller numbers. So `qc` found that there was already a difference for 9¹⁸. The result from `power` ends in `100`, and the one from `Math.pow` in `140`. So which is correct? Neither¹⁶ of them: 9¹⁸ = 150094635296999121.
#### Exercise 8
Modify the function `intensify` in the following program until it passes the test properties. You can find descriptions of many `qc` definitions such as `arbConst` in the `qc` reference↓. A predefined function `c.noteVerbose` can help by both recording a result if the test fails and displaying values as they are tested.
```
intensify = (n) -> 2

runOnDemand ->
  testPure intensify, [arbInt],
    'intensify grows by 2 when positive',
    (c, arg, result) ->
      c.guard arg > 0
      arg + 2 is result
  testPure intensify, [arbInt],
    'intensify grows by 2 when negative',
    (c, arg, result) ->
      c.guard arg < 0
      arg - 2 is result
  testPure intensify, [arbConst(0)],
    'only non-zero intensify grows',
    (c, arg, result) ->
      result is 0
  test()
```
##### View Solution
Writing test declarations before writing a function can be a good way of specifying it. The test declarations in these examples are much larger than the functions they are testing. Binary Heaps↓ contains a more realistic example of a class and tests for it. Declarative testing is well suited to testing algorithms and reusable libraries. There are many other test tools that you can choose depending on your preference and task¹⁷. The main point here is that a reasonable level of testing is part of writing code.
##### ○•○
Back to functions: they do not have to contain a `return` statement. If no `return` statement is encountered, the function returns the value of its last statement. The predefined function `view` returns its argument so that it can be used inside expressions. If you want a function to return `undefined`, its last statement can be a bare `return`.¹⁸
```
yell = (message) ->
  view message + '!!'
  return

yell 'Yow'
```
##### ○•○
The names of the arguments of a function are available as variables inside it. They refer to the values of the arguments the function is called with, and, like normal variables created inside a function, they do not exist outside it. Aside from the top-level environment, there are smaller, local environments created by functions. When assigning to a variable inside a function, the enclosing environments are checked first, and only if the variable does not exist in any of them is it created in the local environment.
```
dino = 'I am alive'
reptile = 'I am A-OK'
meteor = (reptile) ->
  show reptile # Argument
  dino = 'I am extinct'
  reptile = 'I survived'
  possum = 'I am new'

show dino # Outer
meteor 'What happened?'
show dino # Outer changed
show reptile # Outer unchanged
try
  show possum
catch e
  show e.message # Error undefined
```
This makes it possible for arguments to a function to ‘shadow’ outer level variables that have the same name. The easiest way to deal with variables is to use unique names for variables throughout a file. In Modularity↓ you will see that the top-level is not shared between files unless variables are specifically exported.
So if the name of a variable in an inner function exists in an outer environment, the variable refers to the outer one; it is not a new definition. That is, an expression like `variable = 'something'` may be the definition of a new variable, or it may be an assignment to a previously defined one. As a matter of style, when you are using top-level variables, it makes sense to introduce them at the top of a file with a default value.
```
variable = 'first' # Definition

showVariable = ->
  show 'In showVariable, the variable holds: ' + variable # second

testIt = ->
  variable = 'second' # Assignment
  show 'In test, the variable holds ' + variable + '.' # second
  showVariable()

show 'The variable is: ' + variable # first
testIt()
show 'The variable is: ' + variable # second
```
The variables defined in the local environment are only visible to the code inside the function. If a function calls another function, the newly called function does not see the variables inside the first function.
```
andHere = ->
  try
    show aLocal # Not defined
  catch e then show e.message

isHere = ->
  aLocal = 'aLocal is defined'
  andHere()

isHere()
```
However, and this is a subtle but extremely useful phenomenon, when a function is defined inside another function, its local environment will be based on the local environment that surrounds it.
```
isHere = ->
  andHere = ->
    try
      show aLocal # Is defined
    catch e then show e.message
  aLocal = 'aLocal is defined'
  andHere()

isHere()
```
##### ○•○
Here is a special case that might surprise you:
```
varWhich = 'top-level'
parentFunction = ->
  varWhich = 'local'
  childFunction = ->
    show varWhich
  childFunction

child = parentFunction()
child()
```
`parentFunction` returns its internal function, and the code at the bottom calls this function. Even though `parentFunction` has finished executing at this point, the local environment where `varWhich` has the value `'local'` still exists, and `childFunction` still uses it. This phenomenon is called closure.
##### ○•○
With scoping we can ‘synthesise’ functions. By using some of the variables from an enclosing function, an inner function can be made to do different things. Imagine we need a few different but similar functions, one that adds 2 to its argument, one that adds 5, and so on.
```
makeAddFunction = (amount) ->
  add = (number) -> number + amount

addTwo = makeAddFunction 2
addFive = makeAddFunction 5
show addTwo(1) + addFive(1)
```
##### ○•○
On top of the fact that different functions can contain variables of the same name without getting tangled up, these scoping rules also allow functions to call themselves without running into problems. A function that calls itself is called recursive. Recursion allows for some interesting definitions. When you define a recursive function, the first thing you need is a stop condition; otherwise your recursive function becomes an elaborate but never-ending loop. Look at this implementation of `power`:
```
powerRec = (base, exponent) ->
  if exponent is 0
    1
  else
    base * powerRec base, exponent - 1

show 'power 3, 3 = ' + powerRec 3, 3
```
This is rather close to the way mathematicians define exponentiation, and to me it looks a lot nicer than the earlier version. It sort of loops, but there is no `while`, `for`, or even a local side effect to be seen. By calling itself, the function produces the same effect. The stop condition is when `exponent` becomes `0`, and the `exponent - 1` ensures that `exponent` gets closer to `0` with each call. Note the assumption that the exponent is a non-negative integer; that should be clearly documented if `powerRec` were part of a reusable library.
##### ○•○
Does this elegance affect performance? There is only one way to know for sure: measure it. The timings below are from my machine; you should not rely much on them. CPUs, operating systems, and the compilers and interpreters in browsers all have an effect on performance, so measure in something that is as close to your target environment as possible.
```
timeIt = (func) ->
  start = new Date()
  for i in [0...1000000] then func()
  show "Timing: #{(new Date() - start)*0.001}s"

add = (n, m) -> n + m # Baseline comparison

runOnDemand ->
  timeIt -> p = add 9,18      # 0.042s
  timeIt -> p = Math.pow 9,18 # 0.049s
  timeIt -> p = power 9,18    # 0.464s
  timeIt -> p = powerRec 9,18 # 0.544s
```
The dilemma of speed versus elegance is an interesting one. It not only occurs when deciding for or against recursion. In many situations, an elegant, intuitive, and often short solution can be replaced by a more convoluted but faster solution.
In the case of the `power` function above, the un-elegant version is still sufficiently simple and easy to read. It does not make very much sense to replace it with the recursive version. Often, though, the concepts a program is dealing with get so complex that giving up some efficiency in order to make the program more straightforward becomes an attractive choice.
The basic rule, which has been repeated by many programmers and with which I wholeheartedly agree, is not to worry about efficiency until your program is provably too slow. When it is, find out which parts are too slow, and start exchanging elegance for efficiency in those parts.
Of course, the above rule does not mean one should start ignoring performance altogether. In many cases, like the `power` function, not much simplicity is gained by the ‘elegant’ approach. In other cases, an experienced programmer can see right away that a simple approach is never going to be fast enough.
The reason I am making a big deal out of this is that surprisingly many programmers focus fanatically on efficiency, even in the smallest details. The result is bigger, more complicated, and often less correct programs, which take longer to write than their more straightforward equivalents and often run only marginally faster.
When you have a simple, correct implementation that is too slow, you can use it as a reference implementation to test your improved version. The one thing you do need to consider up front is what kind of data structures and algorithms can handle the task at hand. There is a huge difference between searching a list with ten items and searching a list with millions of items. These kinds of considerations are covered in more detail in Searching↓.
##### ○•○
But I was talking about recursion. A concept closely related to recursion is a thing called the stack. When a function is called, control is given to the body of that function. When that body returns, the code that called the function is resumed. While the body is running, the computer must remember the context from which the function was called, so that it knows where to continue afterwards. The place where this context is stored is called the stack.
The fact that it is called ‘stack’ has to do with the fact that, as we saw, a function body can again call a function. Every time a function is called, another context has to be stored. One can visualise this as a stack of contexts. Every time a function is called, the current context is thrown on top of the stack. When a function returns, the context on top is taken off the stack and resumed.
This stack requires space in the computer’s memory to be stored. When the stack grows too big, the computer will give up with a message like “out of stack space” or “too much recursion”. This is something that has to be kept in mind when writing recursive functions.
```coffeescript
chicken = ->
  show 'Lay an egg'
  egg()

egg = ->
  show 'Chick hatched'
  chicken()

# WARNING! Clicking `Run` may cause your
# browser to enter a catatonic state
runOnDemand ->
  try
    show chicken() + ' came first.'
  catch error then show error.message
```
In addition to demonstrating a very interesting way of writing a broken program, this example shows that a function does not have to call itself directly to be recursive. If it calls another function which (directly or indirectly) calls the first function again, it is still recursive. The `try` and `catch` parts are covered in Error Handling↓.
##### ○•○
Recursion is not always just a less-efficient alternative to looping. Some problems are much easier to solve with recursion than with loops. Most often these are problems that require exploring or processing several ‘branches’, each of which might branch out again into more branches.
Consider this puzzle: By starting from the number 1 and repeatedly either adding 5 or multiplying by 3, an infinite amount of new numbers can be produced. How would you write a function that, given a number, tries to find a sequence of additions and multiplications that produce that number?
For example, the number 13 could be reached by first multiplying 1 by 3, and then adding 5 twice. The number 15 can not be reached at all.
```coffeescript
findSequence = (goal) ->
  find = (start, history) ->
    if start is goal
      history
    else if start > goal
      null
    else
      find(start + 5, '(' + history + ' + 5)') ? \
      find(start * 3, '(' + history + ' * 3)')
  find 1, '1'

show findSequence 24
```
Note that the solution does not necessarily find the shortest sequence of operations, it is satisfied when it finds any sequence at all.
The inner `find` function, by calling itself in two different ways, explores both the possibility of adding 5 to the current number and of multiplying it by 3. When it finds the number, it returns the `history` string, which records all the operations that were performed to get to this number. It also checks whether the current number is bigger than `goal`, because if it is, we should stop exploring this branch; it is not going to give us our number.
The use of the existential `?` operator in the example can be read as ‘return the solution found by adding 5 to `start`, and if that fails, return the solution found by multiplying `start` by 3’.
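If you have not met `?` outside this example, here is a minimal sketch of how the operator behaves on its own (the variable name `maybe` is just for illustration):

```coffeescript
# `a ? b` yields a unless a is null or undefined
maybe = null
show (maybe ? 'fallback')   # 'fallback', maybe was null
maybe = 'value'
show (maybe ? 'fallback')   # 'value', maybe already existed
```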
##### ○•○
Usually when you define a function you assign it to a name that you can use to refer to it later. That is not required, and sometimes it is not worthwhile to give a function a name; in those cases you can use an anonymous function instead.
Like in the `makeAddFunction` example we saw earlier:
```coffeescript
makeAddFunction = (amount) ->
  (number) -> number + amount

show makeAddFunction(11) 3
```
Since the function named `add` in the first version of `makeAddFunction` was referred to only once, the name does not serve any purpose and we might as well directly return the function value.
#### Exercise 9
Write a function `greaterThan`, which takes one argument, a number, and returns a function that represents a test. When this returned function is called with a single number as argument, it returns a boolean: `true` if the given number is greater than the number that was used to create the test function, and `false` otherwise.
``# Compose a solution here``
##### View Solution
Try the following:
``yell 'Hello', 'Good Evening', 'How do you do?'``
The function `yell` defined earlier in this chapter only accepts one argument. Yet when you call it like this, the computer does not complain at all, but just ignores the other arguments.
``yell()``
You can, apparently, even get away with passing too few arguments. When an argument is not passed, its value inside the function is `undefined`.
In the next chapter, we will see a way in which a function body can get at the exact list of arguments that were passed to it. This can be useful, as it makes it possible to have a function accept any number of arguments. A debugging function `console.log` makes use of this:
``console?.log 'R', 2, 'D', 2``
To see the output from `console.log` you have to enable and open your browser’s built-in developer tools. The question mark is to prevent Internet Explorer 9 from throwing an error if Developer Tools (F12) has not been activated.
Of course, the downside of a variable number of arguments is that it is also possible to accidentally pass the wrong number of arguments to functions that expect a fixed amount of them and never be told about it.
### Data Structures: Objects and Arrays
This chapter will be devoted to solving a few simple problems. In the process, we will discuss two new types of values, arrays and objects, and look at some techniques related to them.
Consider the following situation: Your crazy aunt Emily, who is rumoured to have over fifty cats living with her (you never managed to count them), regularly sends you e-mails to keep you up to date on her exploits. They usually look like this:
Dear nephew,
Your mother told me you have taken up skydiving. Is this true? You watch yourself, young man! Remember what happened to my husband? And that was only from the second floor!
Anyway, things are very exciting here. I have spent all week trying to get the attention of Mr. Drake, the nice gentleman who moved in next door, but I think he is afraid of cats. Or allergic to them? I am going to try putting Fat Igor on his shoulder next time I see him, very curious what will happen.
Also, the scam I told you about is going better than expected. I have already gotten back five ‘payments’, and only one complaint. It is starting to make me feel a bit bad though. And you are right that it is probably illegal in some way.
(etc)
Much love,
Aunt Emily
died 27/04/2006: Black Leclère
born 05/04/2006 (mother Lady Penelope): Red Lion, Doctor Hobbles the 3rd, Little Iroquois
To humour the old dear, you would like to keep track of the genealogy of her cats, so you can add things like “P.S. I hope Doctor Hobbles the 2nd enjoyed his birthday this Saturday!”, or “How is old Lady Penelope doing? She’s five years old now, isn’t she?”, preferably without accidentally asking about dead cats. You are in the possession of a large quantity of old e-mails from your aunt, and fortunately she is very consistent in always putting information about the cats’ births and deaths at the end of her mails in precisely the same format.
You are hardly inclined to go through all those mails by hand. Fortunately, we were just in need of an example problem, so we will try to work out a program that does the work for us. For a start, we write a program that gives us a list of cats that are still alive after the last e-mail.
Before you ask, at the start of the correspondence, aunt Emily had only a single cat: Spot. (She was still rather conventional in those days.)
It usually pays to have some kind of clue what one’s program is going to do before starting to type. Here’s a plan:
1. Start with a set of cat names that has only “Spot” in it.
2. Go over every e-mail in our archive, in chronological order.
3. Look for paragraphs that start with “born” or “died”.
4. Add the names from paragraphs that start with “born” to our set.
5. Remove the names from paragraphs that start with “died” from our set.
Where taking the names from a paragraph goes like this:
1. Find the colon in the paragraph.
2. Take the part after this colon.
3. Split this part into separate names by looking for commas.
It may require some suspension of disbelief to accept that aunt Emily always used this exact format, and that she never forgot or misspelled a name, but that is just how your aunt is.
##### ○•○
First, let me tell you about properties. A lot of CoffeeScript values have other values associated with them. These associations are called properties. Every string has a property called `length`, which refers to a number, the amount of characters in that string.
Properties can be accessed in two ways:
```coffeescript
text = 'purple haze'
show text['length']
show text.length
```
The second way is a shorthand for the first, and it only works when the name of the property would be a valid variable name — when it does not have any spaces or symbols in it and does not start with a digit character.
Numbers, booleans, the value `null`, and the value `undefined` do not have any properties. Trying to read properties from such a value produces an error. Try the following code, if only to get an idea about the kind of error-message you get in such a case (which, for some browsers, can be rather cryptic).
```coffeescript
nothing = null
try
  show nothing.length
catch error then show error.message
```
The properties of a string value can not be changed. There are quite a few more than just `length`, as we will see, but you are not allowed to add or remove any.
This is different with values of the type object. Their main role is to hold other values. They have, you could say, their own set of tentacles in the form of properties. You are free to modify these, remove them, or add new ones.
An object can be written like this:
```coffeescript
cat =
  colour: 'grey'
  name: 'Spot'
  size: 46
# Or: cat = {colour: 'grey', name: 'Spot', size: 46}

cat.size = 47
show cat.size
delete cat.size
show cat.size
show cat
```
Like variables, each property attached to an object is labelled by a string. The first statement creates an object in which the property `'colour'` holds the string `'grey'`, the property `'name'` is attached to the string `'Spot'`, and the property `'size'` refers to the number `46`. The second statement gives the property named `size` a new value, which is done in the same way as modifying a variable.
The keyword `delete` cuts off properties. Trying to read a non-existent property gives the value `undefined`.
If a property that does not yet exist is set with the `=` operator, it is added to the object.
```coffeescript
empty = {}
empty.notReally = 1000
show empty.notReally
```
Properties whose names are not valid variable names have to be quoted when creating the object, and approached using brackets:
```coffeescript
thing = {'gabba gabba': 'hey', '5': 10}
show thing['5']
thing['5'] = 20
show thing[2 + 3]
delete thing['gabba gabba']
show thing
```
As you can see, the part between the brackets can be any expression. It is converted to a string to determine the property name it refers to. One can even use variables to name properties:
```coffeescript
propertyName = 'length'
text = 'mainline'
show text[propertyName]
```
The operator `of` can be used to test whether an object has a certain property. It produces a boolean.
```coffeescript
chineseBox = {}
chineseBox.content = chineseBox
show 'content' of chineseBox
show 'content' of chineseBox.content
show chineseBox
```
##### ○•○
When object values are shown in the interactive environment, all layers of properties are shown. You can give an extra boolean argument, `shallow`, to `show` to display only an object's own top-most properties. This is also used to limit the display of objects with circular references, such as the `chineseBox` above.
```coffeescript
abyss = {lets:1, go:deep:down:into:the:abyss:7}
show abyss
show abyss, on
```
#### Exercise 10
The solution for the cat problem talks about a ‘set’ of names. A set is a collection of values in which no value may occur more than once. If names are strings, can you think of a way to use an object to represent a set of names?
Show how a name can be added to this set, how one can be removed, and how you can check whether a name occurs in it.
``# Compose a solution here``
##### View Solution
Object values, apparently, can change. The types of values discussed in Basic CoffeeScript↑ are all immutable, it is impossible to change an existing value of those types. You can combine them and derive new values from them, but when you take a specific string value, the text inside it can not change. With objects, on the other hand, the content of a value can be modified by changing its properties.
When we have two numbers, `120` and `120`, they can for all practical purposes be considered the precise same number. With objects, there is a difference between having two references to the same object and having two different objects that contain the same properties. Consider the following code:
```coffeescript
object1 = {value: 10}
object2 = object1
object3 = {value: 10}

show object1 is object2
show object1 is object3

object1.value = 15
show object2.value
show object3.value
```
`object1` and `object2` are two variables grasping the same value. There is only one actual object, which is why changing `object1` also changes the value of `object2`. The variable `object3` points to another object, which initially contains the same properties as `object1`, but lives a separate life.
CoffeeScript’s `is`/`==` operator, when comparing objects, will only return `true` if both values given to it are the precise same value. Comparing different objects with identical contents will give `false`. This is useful in some situations, but impractical in others.
##### ○•○
Object values can play a lot of different roles. Behaving like a set is only one of those. We will see a few other roles in this chapter, and Object Orientation↓ shows another important way of using objects.
In the plan for the cat problem — in fact, call it an algorithm, not a plan, that makes it sound like we know what we are talking about — in the algorithm, it talks about going over all the e-mails in an archive. What does this archive look like? And where does it come from?
Do not worry about the second question for now. Modularity↓ talks about some ways to import data into your programs, but for now you will find that the e-mails are just magically there. Some magic is really easy, inside computers.
##### ○•○
The way in which the archive is stored is still an interesting question. It contains a number of e-mails. An e-mail can be a string, that should be obvious. The whole archive could be put into one huge string, but that is hardly practical. What we want is a collection of separate strings.
Collections of things are what objects are used for. One could make an object like this:
```coffeescript
mailArchive = {
  'the first e-mail': 'Dear nephew, ...'
  'the second e-mail': '...'
  # and so on ...
}
```
But that makes it hard to go over the e-mails from start to end — how does the program guess the name of these properties? This can be solved by more predictable property names:
```coffeescript
mailArchive = {
  0: 'Dear nephew, ... (mail number 1)'
  1: '(mail number 2)'
  2: '(mail number 3)'
}

for current of mailArchive
  show 'Processing e-mail #' + current + ': ' + mailArchive[current]
```
Luck has it that there is a special kind of object made specifically for this kind of use. These objects are called arrays, and they provide some conveniences, such as a `length` property that contains the number of values in the array, and a number of operations useful for collections like this.
New arrays can be created using brackets (`[` and `]`). As with properties, the commas between elements are optional when they are placed on separate lines. Ranges and for comprehensions also create arrays.
```coffeescript
mailArchive = ['mail one', 'mail two', 'mail three']

for current in [0...mailArchive.length]
  show 'Processing e-mail #' + current + ': ' + mailArchive[current]
```
In this example, the numbers of the elements are not specified explicitly anymore. The first one automatically gets the number 0, the second the number 1, and so on.
Why start at 0? People tend to start counting from 1. As unintuitive as it seems, numbering the elements in a collection from 0 is often more practical. Just go with it for now, it will grow on you.
Starting at element 0 also means that in a collection with `X` elements, the last element can be found at position `X - 1`. This is why the `for` loop in the example uses an exclusive range `0...mailArchive.length`. There is no element at position `mailArchive.length`, so as soon as `current` has that value, we stop looping.
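A small sketch of that arithmetic, using an assumed three-element archive:

```coffeescript
mailArchive = ['mail one', 'mail two', 'mail three']
show mailArchive[0]                       # first element
show mailArchive[mailArchive.length - 1]  # last element
show mailArchive[mailArchive.length]      # past the end: undefined
```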
#### Exercise 11
Write a function `range` that takes one argument, a positive number, and returns an array containing all numbers from 0 up to and including the given number.
An empty array can be created by simply typing `[]`. Also remember that adding properties to an object, and thus also to an array, can be done by assigning them a value with the `=` operator. The `length` property is automatically updated when elements are added.
``# Compose a solution here``
##### View Solution
In CoffeeScript most statements can also be used as expressions. That means for example that the values from a for comprehension can be collected in a variable and used later.
```coffeescript
numbers = (number for number in [0..12] by 2)
show numbers
```
##### ○•○
Both string and array objects contain, in addition to the `length` property, a number of properties that refer to function values.
```coffeescript
doh = 'Doh'
show typeof doh.toUpperCase
show doh.toUpperCase()
```
Every string has a `toUpperCase` property. When called, it will return a copy of the string, in which all letters have been converted to uppercase. There is also `toLowerCase`. Guess what that does.
Notice that, even though the call to `toUpperCase` does not pass any arguments, the function does somehow have access to the string `'Doh'`, the value of which it is a property. How this works precisely is described in Object Orientation↓.
Properties that contain functions are generally called methods, for example ‘`toUpperCase` is a method of a string object’.
```coffeescript
mack = []
mack.push 'Mack'
mack.push 'the'
mack.push 'Knife'
show mack.join ' '
show mack.pop()
show mack
```
The method `push`, which is associated with arrays, can be used to add values to an array. It could have been used in the last exercise, as an alternative to `result[i] = i`. Then there is `pop`, the opposite of `push`: it takes off and returns the last value in the array. `join` builds a single big string from an array of strings. The parameter it is given is pasted between the values in the array.
##### ○•○
Coming back to those cats, we now know that an array would be a good way to store the archive of e-mails. In this book, the `retrieveMails` function, made available through a `require`, can be used to (magically) get hold of this array. The magic will be dispelled in Modularity↓.
Mail Archive
##### View Solution
Going over them to process them one after another is no rocket science anymore either:
```coffeescript
mailArchive = retrieveMails()
for email, i in mailArchive
  show "Processing e-mail ##{i} #{email[0..15]}..."
  # Do more things...
```
In a `for ... in` statement we can get both the value and its index in the array. The `email[0..15]` gets the first snippet of each email. We have also decided on a way to represent the set of cats that are alive. The next problem, then, is to find the paragraphs in an e-mail that start with `'born'` or `'died'`.
##### ○•○
The first question that comes up is what exactly a paragraph is. In this case, the string value itself can not help us much: CoffeeScript’s concept of text does not go any deeper than the ‘sequence of characters’ idea, so we must define paragraphs in those terms.
Earlier, we saw that there is such a thing as a newline character. These are what most people use to split paragraphs. We consider a paragraph, then, to be a part of an e-mail that starts at a newline character or at the start of the content, and ends at the next newline character or at the end of the content.
And we do not even have to write the algorithm for splitting a string into paragraphs ourselves. Strings already have a method named `split`, which is (almost) the opposite of the `join` method of arrays. It splits a string into an array, using the string given as its argument to determine in which places to cut.
```coffeescript
words = 'Cities of the Interior'
show words.split ' '
```
Thus, cutting on newlines (`'\n'`), can be used to split an e-mail into paragraphs.
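As a quick sketch, with a made-up two-paragraph e-mail:

```coffeescript
# Hypothetical miniature e-mail, just to show the split
email = 'Dear nephew, ...\nborn 15/11/2003 (mother Spot): White Fang'
show email.split '\n'   # an array with two paragraphs
```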
#### Exercise 12
`split` and `join` are not precisely each other’s inverse. `string.split(x).join(x)` always produces the original value, but `array.join(x).split(x)` does not. Can you give an example of an array where `.join(' ').split(' ')` produces a different value?
``# Compose a solution here``
##### View Solution
Paragraphs that do not start with either “born” or “died” can be ignored by the program. How do we test whether a string starts with a certain word? The method `charAt` can be used to get a specific character from a string. `x.charAt(0)` gives the first character, `1` is the second one, and so on. One way to check whether a string starts with “born” is:
```coffeescript
paragraph = 'born 15-11-2003 (mother Spot): White Fang'
show paragraph.charAt(0) is 'b' and
     paragraph.charAt(1) is 'o' and
     paragraph.charAt(2) is 'r' and
     paragraph.charAt(3) is 'n'
```
But that gets a bit clumsy — imagine checking for a word of ten characters. There is something to be learned here though: when a line gets ridiculously long, it can be spread over multiple lines. The result can be made easier to read by lining up the start of the new line with the first element on the original line that plays a similar role. You can also end a line with `\` to indicate that it continues on the next line.
Strings also have a method called `slice`. It copies out a piece of the string, starting from the character at the position given by the first argument, and ending before (not including) the character at the position given by the second one. It is the same as using a range as an index. This allows the check to be written in a shorter way.
```coffeescript
show paragraph.slice(0, 4) is 'born'
show paragraph[0...4] is 'born'
```
#### Exercise 13
Write a function called `startsWith` that takes two arguments, both strings. It returns `true` when the first argument starts with the characters in the second argument, and `false` otherwise.
``# Compose a solution here``
##### View Solution
What happens when `charAt`, `slice` or a range are used to take a piece of a string that does not exist? Will the `startsWith` still work when the pattern is longer than the string it is matched against?
```coffeescript
show 'Pip'.charAt 250
show 'Nop'.slice 1, 10
show 'Pin'[1...10]
```
`charAt` will return `''` when there is no character at the given position, and `slice` or the range will simply leave out the part of the new string that does not exist.
So yes, `startsWith` should work. When `startsWith('Idiots', 'Most honoured colleagues')` is called, the call to `slice` will, because `string` does not have enough characters, always return a string that is shorter than `pattern`. Because of that, the comparison with `is` will return `false`, which is correct.
It helps to always take a moment to consider abnormal (but valid) inputs for a program. These are usually called corner cases, and it is very common for programs that work perfectly on all the ‘normal’ inputs to screw up on corner cases.
##### ○•○
The only part of the cat-problem that is still unsolved is the extraction of names from a paragraph. The algorithm was this:
1. Find the colon in the paragraph.
2. Take the part after this colon.
3. Split this part into separate names by looking for commas.
This has to happen both for paragraphs that start with `'died'`, and paragraphs that start with `'born'`. It would be a good idea to put it into a function, so that the two pieces of code that handle these different kinds of paragraphs can both use it.
#### Exercise 14
Can you write a function `catNames` that takes a paragraph as an argument and returns an array of names?
Strings have an `indexOf` method that can be used to find the (first) position of a character or sub-string within that string. Also, when `slice` is given only one argument, it will return the part of the string from the given position all the way to the end. With a range either the start or end can be left out: `'shorthand'[...5]` ⇒ `'short'` and `'shorthand'[5...]` ⇒ `'hand'`.
It can be helpful to use CoffeeScript interactively to ‘explore’ functions. Try `'foo: bar'.indexOf(':')` and see what you get.
``# Compose a solution here``
##### View Solution
All that remains now is putting the pieces together. One way to do that looks like this:
```coffeescript
mailArchive = retrieveMails()
livingCats = 'Spot': true

for email, i in mailArchive
  paragraphs = email.split '\n'
  for paragraph in paragraphs
    if startsWith paragraph, 'born'
      names = catNames paragraph
      for name in names
        livingCats[name] = true
    else if startsWith paragraph, 'died'
      names = catNames paragraph
      for name in names
        delete livingCats[name]

show livingCats, on
```
That is quite a big dense chunk of code. We will look into making it a bit lighter in a moment. But first let us look at our results. We know how to check whether a specific cat survives:
```coffeescript
if 'Spot' of livingCats
  show 'Spot lives!'
else
  show 'Good old Spot, may she rest in peace.'
```
But how do we list all the cats that are alive? The `of` keyword is somewhat similar to the `in` keyword when it is used together with `for`:
```coffeescript
for cat of livingCats
  show cat
```
A loop like that will go over the names of the properties in an object, which allows us to enumerate all the names in our set.
##### ○•○
Some pieces of code look like an impenetrable jungle. The example solution to the cat problem suffers from this. One way to make some light shine through it is to just add some strategic blank lines. This makes it look better, but does not really solve the problem.
What is needed here is to break the code up. We already wrote two helper functions, `startsWith` and `catNames`, which both take care of a small, understandable part of the problem. Let us continue doing this.
```coffeescript
addToSet = (set, values) ->
  for i in [0...values.length]
    set[values[i]] = true

removeFromSet = (set, values) ->
  for i in [0...values.length]
    delete set[values[i]]
```
These two functions take care of the adding and removing of names from the set. That already cuts out the two innermost loops from the solution:
```coffeescript
livingCats = 'Spot': true

for email in mailArchive
  paragraphs = email.split '\n'
  for paragraph in paragraphs
    if startsWith paragraph, 'born'
      addToSet livingCats, catNames paragraph
    else if startsWith paragraph, 'died'
      removeFromSet livingCats, catNames paragraph

show livingCats, on
```
Quite an improvement, if I may say so myself.
Why do `addToSet` and `removeFromSet` take the set as an argument? They could use the variable `livingCats` directly, if they wanted to. The reason is that this way they are not completely tied to our current problem. If `addToSet` directly changed `livingCats`, it would have to be called `addCatsToCatSet`, or something similar. The way it is now, it is a more generally useful tool.
Even if we are never going to use these functions for anything else, which is quite probable, it is useful to write them like this. Because they are ‘self sufficient’, they can be read and understood on their own, without needing to know about some external variable called `livingCats`.
The functions are not pure: They change the object passed as their `set` argument. This makes them slightly trickier than real pure functions, but still a lot less confusing than functions that run amok and change any value or variable they please.
##### ○•○
We continue breaking the algorithm into pieces:
```coffeescript
findLivingCats = ->
  mailArchive = retrieveMails()
  livingCats = 'Spot': true
  handleParagraph = (paragraph) ->
    if startsWith paragraph, 'born'
      addToSet livingCats, catNames paragraph
    else if startsWith paragraph, 'died'
      removeFromSet livingCats, catNames paragraph
  for email in mailArchive
    paragraphs = email.split '\n'
    for paragraph in paragraphs
      handleParagraph paragraph
  livingCats

howMany = 0
for cat of findLivingCats()
  howMany++
show 'There are ' + howMany + ' cats.'
```
The whole algorithm is now encapsulated by a function. This means that it does not leave a mess after it runs: `livingCats` is now a local variable in the function, instead of a top-level one, so it only exists while the function runs. The code that needs this set can call `findLivingCats` and use the value it returns.
It seemed to me that making `handleParagraph` a separate function also cleared things up. But this one is so closely tied to the cat-algorithm that it is meaningless in any other situation. On top of that, it needs access to the `livingCats` variable. Thus, it is a perfect candidate to be a function-inside-a-function. When it lives inside `findLivingCats`, it is clear that it is only relevant there, and it has access to the variables of its parent function.
This solution is actually bigger than the previous one. Still, it is tidier and I hope you will agree that it is easier to read.
##### ○•○
The program still ignores a lot of the information that is contained in the e-mails. There are birth-dates, dates of death, and the names of mothers in there.
To start with the dates: What would be a good way to store a date? We could make an object with three properties, `year`, `month`, and `day`, and store numbers in them.
``whenWasIt = year: 1980, month: 2, day: 1``
But CoffeeScript already provides a kind of object for this purpose. Such an object can be created by using the keyword `new`:
```coffeescript
whenWasIt = new Date 1980, 1, 1
show whenWasIt
```
Just like the notation with colons and optional braces we have already seen, `new` is a way to create object values. Instead of specifying all the property names and values, a function is used to build up the object. This makes it possible to define a kind of standard procedure for creating objects. Functions like this are called constructors, and in Object Orientation↓ we will see how to write them.
The `Date` constructor can be used in different ways.
```coffeescript
show new Date
show new Date 1980, 1, 1
show new Date 2007, 2, 30, 8, 20, 30
```
As you can see, these objects can store a time of day as well as a date. When not given any arguments, an object representing the current time and date is created. Arguments can be given to ask for a specific date and time. The order of the arguments is year, month, day, hour, minute, second, milliseconds. These last four are optional, they become 0 when not given.
The month numbers these objects use go from 0 to 11, which can be confusing. Especially since day numbers do start from 1.
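A short sketch to make the off-by-one visible:

```coffeescript
christmas = new Date 2012, 11, 25   # month 11 is December
show christmas
show christmas.getMonth()   # 11, even though it is the twelfth month
show christmas.getDate()    # 25, day numbers do start at 1
```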
##### ○•○
The content of a `Date` object can be inspected with a number of `get...` methods.
```coffeescript
today = new Date()
show "Year: #{today.getFullYear()} month: #{today.getMonth()} day: #{today.getDate()}"
show "Hour: #{today.getHours()} minutes: #{today.getMinutes()} seconds: #{today.getSeconds()}"
show "Day of week: #{today.getDay()}"
```
All of these, except for `getDay`, also have a `set...` variant that can be used to change the value of the date object.
Inside the object, a date is represented by the amount of milliseconds it is away from January 1st 1970. You can imagine this is quite a large number.
```coffeescript
today = new Date()
show today.getTime()
```
A very useful thing to do with dates is comparing them.
```coffeescript
wallFall = new Date 1989, 10, 9
gulfWarOne = new Date 1990, 6, 2
show wallFall < gulfWarOne
show wallFall is wallFall
# but
show wallFall is new Date 1989, 10, 9
```
Comparing dates with `<`, `>`, `<=`, and `>=` does exactly what you would expect. When a date object is compared to itself with `is` the result is `true`, which is also good. But when `is` is used to compare a date object to a different, equal date object, we get `false`. Huh?
As mentioned earlier, `is` will return `false` when comparing two different objects, even if they contain the same properties. This is a bit clumsy and error-prone here, since one would expect `>=` and `is` to behave in a more or less similar way. Testing whether two dates are equal can be done like this:
```coffeescript
wallFall1 = new Date 1989, 10, 9
wallFall2 = new Date 1989, 10, 9
show wallFall1.getTime() is wallFall2.getTime()
```
##### ○•○
In addition to a date and time, `Date` objects also contain information about a timezone. When it is one o’clock in Amsterdam, it can, depending on the time of year, be noon in London, and seven in the morning in New York. Such times can only be compared when you take their time zones into account. The `getTimezoneOffset` function of a `Date` can be used to find out how many minutes it differs from GMT (Greenwich Mean Time).
```coffeescript
now = new Date()
show now.getTimezoneOffset()
```
#### Exercise 15
`'died 27/04/2006: Black Leclère'`
The date part is always in the exact same place of a paragraph. How convenient. Write a function `extractDate` that takes such a paragraph as its argument, extracts the date, and returns it as a date object.
``# Compose a solution here``
##### View Solution
Storing cats will work differently from now on. Instead of just putting the value `true` into the set, we store an object with information about the cat. When a cat dies, we do not remove it from the set, we just add a property `death` to the object to store the date on which the creature died.
This means our `addToSet` and `removeFromSet` functions have become useless. Something similar is needed, but it must also store birth-dates and, later, the mother’s name.
```coffeescript
catRecord = (name, birthdate, mother) ->
  name: name
  birth: birthdate
  mother: mother

addCats = (set, names, birthdate, mother) ->
  for name in names
    set[name] = catRecord name, birthdate, mother

deadCats = (set, names, deathdate) ->
  for name in names
    set[name].death = deathdate
```
`catRecord` is a separate function for creating these storage objects. It might be useful in other situations, such as creating the object for Spot. ‘Record’ is a term often used for objects like this, which are used to group a limited number of values.
##### ○•○
So let us try to extract the names of the mother cats from the paragraphs.
``'born 15/11/2003 (mother Spot): White Fang'``
One way to do this would be…
```coffeescript
extractMother = (paragraph) ->
  start = paragraph.indexOf '(mother '
  start += '(mother '.length
  end = paragraph.indexOf ')'
  paragraph[start...end]

show extractMother \
  'born 15/11/2003 (mother Spot): White Fang'
```
Notice how the start position has to be adjusted for the length of the string `'(mother '`, because `indexOf` returns the position of the start of the pattern, not its end.
#### Exercise 16
The thing that `extractMother` does can be expressed in a more general way. Write a function `between` that takes three arguments, all of which are strings. It will return the part of the first argument that occurs between the patterns given by the second and the third arguments. That is:
```coffeescript
between 'born 15/11/2003 (mother Spot): White Fang', '(mother ', ')' ⇒ 'Spot'
between 'bu ] boo [ bah ] gzz', '[ ', ' ]' ⇒ 'bah'
```
To make that second test work, it can be useful to know that `indexOf` can be given a second, optional parameter that specifies at which point it should start searching.
``# Compose a solution here``
##### View Solution
Having `between` makes it possible to express `extractMother` in a simpler way:
```coffeescript
extractMother = (paragraph) ->
  between paragraph, '(mother ', ')'
```
##### ○•○
The new, improved cat-algorithm looks like this:
```coffeescript
findCats = ->
  mailArchive = retrieveMails()
  cats = {'Spot': catRecord 'Spot', new Date(1997, 2, 5), 'unknown'}
  handleParagraph = (paragraph) ->
    if startsWith paragraph, 'born'
      addCats cats, catNames(paragraph), extractDate(paragraph),
        extractMother(paragraph)
    else if startsWith paragraph, 'died'
      deadCats cats, catNames(paragraph), extractDate(paragraph)
  for email in mailArchive
    paragraphs = email.split '\n'
    for paragraph in paragraphs
      handleParagraph paragraph
  cats

catData = findCats()
show catData['Clementine'], on
show catData[catData['Clementine'].mother], on
```
Having that extra data allows us to finally have a clue about the cats aunt Emily talks about. A function like this could be useful:
```coffeescript
formatDate = (date) ->
  "#{date.getDate()}/" +
    "#{date.getMonth() + 1}/" +
    "#{date.getFullYear()}"

catInfo = (data, name) ->
  unless name of data
    return "No cat by the name of #{name} is known."
  cat = data[name]
  message = "#{name}," +
    " born #{formatDate cat.birth}" +
    " from mother #{cat.mother}"
  if "death" of cat
    message += ", died #{formatDate cat.death}"
  "#{message}."

show catInfo catData, "Fat Igor"
```
The `return` statement in `catInfo` is used as an escape hatch. If there is no data about the given cat, the rest of the function is meaningless, so we immediately return a value, which prevents the rest of the code from running.
In the past, certain groups of programmers considered functions that contain multiple `return` statements sinful. The idea was that this made it hard to see which code was executed and which code was not. Other techniques, which will be discussed in Error Handling↓, have made the reasons behind this idea more or less obsolete, but you might still occasionally come across someone who will criticise the use of ‘shortcut’ return statements.
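To make the escape-hatch style concrete, here is a minimal, hypothetical sketch; `firstLiving` is not part of the cat program, it just shows the same pattern of returning early instead of nesting conditions:

```coffeescript
# Hypothetical helper using early returns as escape hatches
firstLiving = (data) ->
  return 'No cats at all.' unless data?
  for name of data
    return name unless 'death' of data[name]
  'All cats have died.'

show firstLiving catData   # assuming catData from above
```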
#### Exercise 17
The `formatDate` function used by `catInfo` does not add a zero before the month and the day part when these are only one digit long. Write a new version that does this.
``# Compose a solution here``
##### View Solution
The property ‘dot’ accessor that we have been using comes in a handy form combined with the existential operator. Instead of `object.element` we can write `object?.element`. If `object` is defined then we get the value as before, but if it is `null` then we get `undefined` instead of an error.
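A tiny sketch of the difference:

```coffeescript
cat = {name: 'Spot'}
nothing = null
show cat?.name       # 'Spot'
show nothing?.name   # undefined instead of an error
```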
#### Exercise 18
Write a function `oldestCat` which, given an object containing cats as its argument, returns the name of the oldest living cat.
``# Compose a solution here``
##### View Solution
Now that we are familiar with arrays, I can show you something related. Whenever a function is called, a special variable named `arguments` is added to the environment in which the function body runs. This variable refers to an object that resembles an array. It has a property `0` for the first argument, `1` for the second, and so on for every argument the function was given. It also has a `length` property.
This object is not a real array though, it does not have methods like `push`, and it does not automatically update its `length` property when you add something to it. Why not, I never really found out, but this is something one needs to be aware of.
```coffeescript
argumentCounter = ->
  show "You gave me #{arguments.length} arguments."

argumentCounter "Death", "Famine", "Pestilence"
```
Some functions can take any number of arguments. These typically loop over the values in the `arguments` object to do something with them. We can create a `print` function that uses `show` to print each of its arguments.
```coffeescript
print = ->
  show arg for arg in arguments
  return

print 'From here to', 1/0
```
Others can take optional arguments which, when not given by the caller, get some sensible default value.
```coffeescript
add = (number, howmuch) ->
  if arguments.length < 2
    howmuch = 1
  number + howmuch

show add 6
show add 6, 4
```
#### Exercise 19
Extend the `range` function from Exercise 11↑ to take a second, optional argument. If only one argument is given, it behaves as earlier and produces a range from 0 to the given number. If two arguments are given, the first indicates the start of the range, the second the end.
``# Compose a solution here``
##### View Solution
CoffeeScript has a few features that can make it simpler to work with arguments. You can set default values directly in the argument list. The `show` function is defined like:
```coffeescript
show = (obj, shallow = off, symbol = '→') ->
  showHere currentTag, obj, shallow, symbol
  return # Suppress display of the return value
```
When the arguments starting from `shallow` are not present in a call, they get the value after the `=`. No checks are needed in the body where the arguments are used.
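For example, a minimal sketch with a made-up `greet` function:

```coffeescript
# Hypothetical example: `greeting` defaults when omitted
greet = (name, greeting = 'Hello') ->
  show "#{greeting}, #{name}!"

greet 'Emily'             # Hello, Emily!
greet 'Emily', 'Goodbye'  # Goodbye, Emily!
```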
For variable arguments you can use `...`, commonly called splats. The ellipsis can indicate extra arguments in a function definition or a call. Do you remember the `testPure` function? It was used with both `absolute`, which takes one argument, and `power`, which takes two. In its definition `(c, a...)` says that `a...` is a variable argument list. The variable arguments are used twice, in the call to `func` and in the call to `property`.
```coffeescript
testPure = (func, types, name, property) ->
  declare name, types, (c, a...) ->
    c.assert property c, a..., c.note func a...
```
Finally the `or=` operator can be used in `options or= defaults` as shorthand for `options or (options = defaults)`.
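A small sketch of `or=` in action:

```coffeescript
options = null
options or= {colour: 'grey'}   # fills in a default
show options.colour            # 'grey'
```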
#### Exercise 20
You may remember this line of code from the introduction:
``sum [1..10]``
All we need to make this line work is a `sum` function. This function takes an array of numbers, and returns their sum. Write it, it should be easy. Check that it also works with `range`.
``# Compose a solution here``
##### View Solution
The previous chapter showed the functions `Math.max` and `Math.min`. With what you know now, you will notice that these are really the properties `max` and `min` of the object stored under the name `Math`. This is another role that objects can play: A warehouse holding a number of related values.
There are quite a lot of values inside `Math`; if they had all been placed directly into the global environment they would, as it is called, pollute it. The more names that have been taken, the more likely one is to accidentally overwrite the value of some variable. For example, it is not a far shot to want to name something `max`.
Most languages will stop you, or at least warn you, when you are defining a variable with a name that is already taken. Not JavaScript.
In any case, one can find a whole outfit of mathematical functions and constants inside `Math`. All the trigonometric functions are there — `cos`, `sin`, `tan`, `acos`, `asin`, `atan`. π and e, which are written with all capital letters (`PI` and `E`), which was, at one time, a fashionable way to indicate something is a constant. `pow` is a good replacement for the `power` functions we have been writing, it also accepts negative and fractional exponents. `sqrt` takes square roots. `max` and `min` can give the maximum or minimum of two values. `round`, `floor`, and `ceil` will round numbers to the closest whole number, the whole number below it, and the whole number above it respectively.
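A few of these in action, as a quick sketch:

```coffeescript
show Math.pow 9, 0.5   # 3, fractional exponents work
show Math.sqrt 81      # 9
show Math.round 4.5    # 5
show Math.max 2, 7     # 7
```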
There are a number of other values in `Math`, but this text is an introduction, not a reference. References are what you look at when you suspect something exists in the language, but need to find out what it is called or how it worked exactly. A useful reference for predefined global objects like the `Math` object exist at the Mozilla Developer Network.
An interesting object is `Array`. When you look through its reference, notice that many definitions, for example `forEach`, are marked `Requires JavaScript 1.6` or another version number. JavaScript 1.5 corresponds to ECMA–262 3rd Edition from December 1999.
More than a decade later this is still the standard that most JavaScript engines implement — including V8, the engine that compiles the JavaScript produced from CoffeeScript to native machine code. It is fair to say that JavaScript is evolving at a glacial pace. Fortunately CoffeeScript and Underscore leapfrog the JavaScript process and give you language and library advances that you can use without waiting for existing browsers and engines to be upgraded.
##### ○•○
Maybe you already thought of a way to find out what is available in the `Math` object:
```coffeescript
for name of Math
  show name
```
But alas, nothing appears. Similarly, when you do this:
```coffeescript
for name of ['Huey', 'Dewey', 'Loui']
  show name
```
You only see `0`, `1`, and `2`, not `length`, or `push`, or `join`, which are definitely also in there. Apparently, some properties of objects are hidden. There is a good reason for this: All objects have a few methods, for example `toString`, which converts the object into some kind of relevant string, and you do not want to see those when you are, for example, looking for the cats that you stored in the object.
Why the properties of `Math` are hidden is unclear to me. Someone probably wanted it to be a mysterious kind of object. But you can peek under the hood in the standalone REPL; type `Math.` followed by →∣ / ‘Tab’.
All properties your programs add to objects are visible. There is no way to make them hidden, which is unfortunate because, as we will see in Object Orientation↓, it would be nice to be able to add methods to objects without having them show up in our `for` / `of` loops.
##### ○•○
Some properties are read-only, you can get their value but not change it. For example, the properties of a string value are all read-only.
Other properties can be ‘watched’. Changing them causes things to happen. For example, lowering the length of an array causes excess elements to be discarded:
```coffeescript
array = ['Heaven', 'Earth', 'Man']
array.length = 2
show array
```
### Error Handling
Writing programs that work when everything goes as expected is a good start. Making your programs behave properly when encountering unexpected conditions is where it really gets challenging.
The problematic situations that a program can encounter fall into two categories: Programmer mistakes and genuine problems. If someone forgets to pass a required argument to a function, that is an example of the first kind of problem. On the other hand, if a program asks the user to enter a name and it gets back an empty string, that is something the programmer can not prevent.
In general, one deals with programmer errors by finding and fixing them, and with genuine errors by having the code check for them and perform some suitable action to remedy them (for example, asking for the name again), or at least fail in a well-defined and clean way.
##### ○•○
It is important to decide into which of these categories a certain problem falls. For example, consider our old `power` function:
```coffeescript
power = (base, exponent) ->
  result = 1
  for count in [0...exponent]
    result *= base
  result
```
When some geek tries to call `power 'Rabbit', 4`, that is quite obviously a programmer error, but how about `power 9, 0.5`? The function can not handle fractional exponents, but, mathematically speaking, raising a number to the ½ power is perfectly reasonable ( `Math.pow` can handle it). In situations where it is not entirely clear what kind of input a function accepts, it is often a good idea to explicitly state the kind of arguments that are acceptable in a comment.
##### ○•○
If a function encounters a problem that it can not solve itself, what should it do? In Data Structures↑ we wrote the function `between`:
```coffeescript
between = (string, start, end) ->
  startAt = string.indexOf start
  startAt += start.length
  endAt = string.indexOf end, startAt
  string[startAt...endAt]
```
If the given `start` and `end` do not occur in the string, `indexOf` will return `-1` and this version of `between` will return a lot of nonsense:
``show between 'Your mother!', '{-', '-}'``
When the program is running, and the function is called like that, the code that called it will get a string value, as it expected, and happily continue doing something with it. But the value is wrong, so whatever it ends up doing with it will also be wrong. And if you are unlucky, this wrongness only causes a problem after having passed through twenty other functions. In cases like that, it is extremely hard to find out where the problem started.
In some cases, you will be so unconcerned about these problems that you do not mind the function misbehaving when given incorrect input. For example, if you know for sure the function will only be called from a few places, and you can prove that these places give it decent input, it is generally not worth the trouble to make the function bigger and uglier so that it can handle problematic cases.
But most of the time, functions that fail ‘silently’ are hard to use, and even dangerous. What if the code calling `between` wants to know whether everything went well? At the moment, it can not tell, except by re-doing all the work that `between` did and checking the result of `between` with its own result. That is bad. One solution is to make `between` return a special value, such as `false` or `undefined`, when it fails.
```coffeescript
between = (string, start, end) ->
  startAt = string.indexOf start
  if startAt is -1 then return
  startAt += start.length
  endAt = string.indexOf end, startAt
  if endAt is -1 then return
  string[startAt...endAt]

show between 'bu ] boo [ bah ] gzz', '[ ', ' ]'
show between 'bu [ boo bah gzz', '[ ', ' ]'
```
You can see that error checking does not generally make functions prettier. But now code that calls `between` can do something like:
```coffeescript
prompt "Tell me something", "", (answer) ->
  parenthesized = between answer, "(", ")"
  if parenthesized?
    show "You parenthesized '#{parenthesized}'."
```
##### ○•○
In many cases returning a special value is a perfectly fine way to indicate an error. It does, however, have its downsides. Firstly, what if the function can already return every possible kind of value? For example, consider this function that gets the last element from an array:
```coffeescript
lastElement = (array) ->
  if array.length > 0
    array[array.length - 1]
  else
    undefined

show lastElement [1, 2, undefined]
```
So did the array have a last element? Looking at the value `lastElement` returns, it is impossible to say.
The second issue with returning special values is that it can sometimes lead to a whole lot of clutter. If a piece of code calls `between` ten times, it has to check ten times whether `undefined` was returned. Also, if a function calls `between` but does not have a strategy to recover from a failure, it will have to check the return value of `between`, and if it is `undefined`, this function can then return `undefined` or some other special value to its caller, who in turn also checks for this value.
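A hypothetical sketch of that clutter, assuming the checking version of `between` above; every layer must inspect and forward the failure value:

```coffeescript
# Hypothetical caller with no recovery strategy of its own
parenthesized = (text) ->
  found = between text, '(', ')'
  return undefined unless found?
  found.toUpperCase()

show parenthesized 'a (b) c'     # 'B'
show parenthesized 'no parens'   # undefined
```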
Sometimes, when something strange occurs, it would be practical to just stop doing what we are doing and immediately jump back to a place that knows how to handle the problem.
Well, we are in luck, a lot of programming languages provide such a thing. Usually, it is called exception handling.
##### ○•○
The theory behind exception handling goes like this: It is possible for code to raise (or throw) an exception, which is a value. Raising an exception somewhat resembles a super-charged return from a function — it does not just jump out of the current function, but also out of its callers, all the way up to the top-level call that started the current execution. This is called unwinding the stack. You may remember the stack of function calls that was mentioned in Functions↑. An exception zooms down this stack, throwing away all the call contexts it encounters.
If they always zoomed right down to the base of the stack, exceptions would not be of much use, they would just provide a novel way to blow up your program. Fortunately, it is possible to set obstacles for exceptions along the stack. These ‘catch’ the exception as it is zooming down, and can do something with it, after which the program continues running at the point where the exception was caught. An example:
```coffeescript
lastElement = (array) ->
  if array.length > 0
    array[array.length - 1]
  else
    throw 'Can not take the last element' +
      ' of an empty array.'

lastElementPlusTen = (array) ->
  lastElement(array) + 10

try
  show lastElementPlusTen []
catch error
  show 'Something went wrong: ' + error
```
`throw` is the keyword that is used to raise an exception. The keyword `try` sets up an obstacle for exceptions: When the code in the block after it raises an exception, the `catch` block will be executed. The variable named after the word `catch` is the name given to the exception value inside this block.
Note that the function `lastElementPlusTen` completely ignores the possibility that `lastElement` might go wrong. This is the big advantage of exceptions — error-handling code is only necessary at the point where the error occurs, and the point where it is handled. The functions in between can forget all about it. Well, almost.
##### ○•○
Consider the following: A function `processThing` wants to set a top-level variable `currentThing` to point to a specific thing while its body executes, so that other functions can have access to that thing too. Normally you would of course just pass the thing as an argument, but assume for a moment that that is not practical. When the function finishes, `currentThing` should be set back to `null`.
```coffeescript
currentThing = null

processThing = (thing) ->
  if currentThing isnt null
    throw 'Oh no! We are already processing a thing!'
  currentThing = thing
  # do complicated processing...
  currentThing = null
```
But what if the complicated processing raises an exception? In that case the call to `processThing` will be thrown off the stack by the exception, and `currentThing` will never be reset to `null`. `try` statements can also be followed by a `finally` keyword, which means ‘no matter what happens, run this code after trying to run the code in the `try` block’. If a function has to clean something up, the cleanup code should usually be put into a `finally` block:
```coffeescript
processThing = (thing) ->
  if currentThing isnt null
    throw 'Oh no! We are already processing a thing!'
  currentThing = thing
  try
    # do complicated processing...
  finally
    currentThing = null
```
##### ○•○
A lot of errors in programs cause the CoffeeScript environment to raise an exception. For example:
```coffeescript
try
  show Sasquatch
catch error
  show 'Caught: ' + error.message
```
In cases like this, special error objects are raised. These always have a `message` property containing a description of the problem. You can raise similar objects using the `new` keyword and the `Error` constructor:
```coffeescript
try
  throw new Error 'Fire!'
catch error
  show error.toString()
```
##### ○•○
When an exception goes all the way to the bottom of the stack without being caught, it gets handled by the environment. What this means differs between different browsers and engines, sometimes a description of the error is written to some kind of log, sometimes a window pops up describing the error.
The errors produced by entering code in the CoffeeScript REPL are caught by the console, and displayed along with a stack trace.
##### ○•○
Most programmers consider exceptions purely an error-handling mechanism. In essence, though, they are just another way of influencing the control flow of a program. For example, they can be used as a kind of `break` statement in a recursive function. Here is a slightly strange function which determines whether an object, and the objects stored inside it, contain at least seven `true` values:
```coffeescript
FoundSeven = {}

hasSevenTruths = (object) ->
  counted = 0
  count = (object) ->
    for name of object
      if object[name] is true
        if (++counted) is 7
          throw FoundSeven
      if typeof object[name] is 'object'
        count object[name]
  try
    count object
    return false
  catch exception
    if exception isnt FoundSeven
      throw exception
    return true
```
The inner function `count` is recursively called for every object that is part of the argument. When the variable `counted` reaches seven, there is no point in continuing to count, but just returning from the current call to `count` will not necessarily stop the counting, since there might be more calls below it. So what we do is just throw a value, which will cause the control to jump right out of any calls to `count`, and land at the `catch` block.
But just returning `true` in case of an exception is not correct. Something else might be going wrong, so we first check whether the exception is the object `FoundSeven`, created specifically for this purpose. If it is not, this `catch` block does not know how to handle it, so it raises it again.
```coffeescript
testdata =
  a: true
  b: true
  c: false
  d:
    a: true
    b: false
    c: true
    d:
      a: true
      b:
        a: true
  e:
    a: false
    b: true
    c: true

show hasSevenTruths testdata
```
This is a pattern that is also common when dealing with error conditions — you have to make sure that your `catch` block only handles exceptions that it knows how to handle. Throwing string values, as some of the examples in this chapter do, is rarely a good idea, because it makes it hard to recognise the type of the exception. A better idea is to use unique values, such as the `FoundSeven` object, or to introduce a new type of objects, as described in Object Orientation↓.
### Functional Programming
As programs get bigger, they also become more complex and harder to understand. We all think ourselves pretty clever, of course, but we are mere human beings, and even a moderate amount of chaos tends to baffle us. And then it all goes downhill. Working on something you do not really understand is a bit like cutting random wires on those time-activated bombs they always have in movies. If you are lucky, you might get the right one — especially if you are the hero of the movie and strike a suitably dramatic pose — but there is always the possibility of blowing everything up.
Admittedly, in most cases, breaking a program does not cause any large explosions. But when a program, by someone’s ignorant tinkering, has degenerated into a ramshackle mass of errors, reshaping it into something sensible is a terrible labour — sometimes you might just as well start over.
Thus, the programmer is always looking for ways to keep the complexity of his programs as low as possible. An important way to do this is to try and make code more abstract. When writing a program, it is easy to get sidetracked into small details at every point. You come across some little issue, and you deal with it, and then proceed to the next little problem, and so on. This makes the code read like a grandmother’s tale.
Yes, dear, to make pea soup you will need split peas, the dry kind. And you have to soak them at least for a night, or you will have to cook them for hours and hours. I remember one time, when my dull son tried to make pea soup. Would you believe he hadn’t soaked the peas? We almost broke our teeth, all of us. Anyway, when you have soaked the peas, and you’ll want about a cup of them per person, and pay attention because they will expand a bit while they are soaking, so if you aren’t careful they will spill out of whatever you use to hold them, so also use plenty water to soak in, but as I said, about a cup of them, when they are dry, and after they are soaked you cook them in four cups of water per cup of dry peas. Let it simmer for two hours, which means you cover it and keep it barely cooking, and then add some diced onions, sliced celery stalk, and maybe a carrot or two and some ham. Let it all cook for a few minutes more, and it is ready to eat.
Another way to describe this recipe:
Per person: one cup dried split peas, half a chopped onion, half a carrot, a celery stalk, and optionally ham.
Soak peas overnight, simmer them for two hours in four cups of water (per person), add vegetables and ham, and cook for ten more minutes.
This is shorter, but if you do not know how to soak peas you will surely screw up and put them in too little water. But how to soak peas can be looked up, and that is the trick. If you assume a certain basic knowledge in the audience, you can talk in a language that deals with bigger concepts, and express things in a much shorter and clearer way.
This, more or less, is what abstraction is.
How is this far-fetched recipe story relevant to programming? Well, obviously, the recipe is the program. Furthermore, the basic knowledge that the cook is supposed to have corresponds to the functions and other constructs that are available to the programmer. If you remember the introduction of this book, things like `while` make it easier to build loops, and in Data Structures↑ we wrote some simple functions in order to make other functions shorter and more straightforward. Such tools, some of them made available by the language itself, others built by the programmer, are used to reduce the amount of uninteresting details in the rest of the program, and thus make that program easier to work with.
##### ○•○
Functional programming, which is the subject of this chapter, produces abstraction through clever ways of combining functions. A programmer armed with a repertoire of fundamental functions and, more importantly, the knowledge on how to use them, is much more effective than one who starts from scratch. Unfortunately, a standard CoffeeScript environment comes with deplorably few essential functions, so we have to write them ourselves or, which is often preferable, make use of somebody else’s code (more on that in Modularity↓).
In this chapter we write a set of useful functions in order to understand how they work, and solve problems with them in order to understand how to use them. In later chapters we will use the larger set of functions in the Underscore library that comes with CoffeeScript.
There are other popular approaches to abstraction, most notably object-oriented programming, the subject of Object Orientation↓.
##### ○•○
A fundamental operation in programming is performing an action on every element of an array. Many programming languages have borrowed their way of doing this from the C programming language:
```c
size_t i;
size_t N = sizeof(thing) / sizeof(thing[0]);
for (i = 0; i < N; ++i) {
  do_something(thing[i]);
}
```
That is very ugly, an eyesore if you have any good taste at all. It has been improved somewhat in derived languages like C++ and JavaScript. Other programming languages can have very different approaches21. In CoffeeScript it becomes the reasonably pleasant:
```coffeescript
# Say we have:
thing = [5..7]
doSomething = show
# then
for i in [0...thing.length] then doSomething thing[i]
# or better - you can see the generated code with: show ->
for element in thing then doSomething element
```
Still, do we have to write this out over and over, or can it be abstracted?
The problem is that, whereas most functions just take some values, combine them, and return something, such a loop contains a piece of code that it must execute. It is easy to write a function that goes over an array and prints out every element:
```coffeescript
printArray = (array) ->
  for element in array
    show element
  return

printArray [7..9]
```
But what if we want to do something other than print? Since ‘doing something’ can be represented as a function, and functions are also values, we can pass our action as a function value:
```coffeescript
forEach = (array, action) ->
  for element in array
    action element
  #return

forEach ['Wampeter', 'Foma', 'Granfalloon'], show
runOnDemand -> show forEach # View generated code
```
In CoffeeScript most statements are expressions that return a value; that is also the case with the `for` statement. A `return` statement at the end of `forEach` would make it return `undefined`. Here `forEach` is allowed to return its value; when we get to the `map` function you will see why.
By making use of an anonymous function, something just like a `for` loop can be written as:
```coffeescript
sum = (numbers) ->
  total = 0
  forEach numbers, (number) ->
    total += number
  total

show sum [1, 10, 100]
```
Note that the variable `total` is visible inside the anonymous function because of the scoping rules. Also note that this version is hardly shorter than the `for` loop.
You do get a variable bound to the current element in the array, `number`, so there is no need to use `numbers[i]` anymore, and when this array is created by evaluating some expression, there is no need to store it in a variable, because it can be passed to `forEach` directly.
The cat-code in Data Structures↑ contains a piece like this:
```coffeescript
paragraphs = email.split '\n'
for paragraph in paragraphs
  handleParagraph paragraph
```
This can now be written as…
``forEach email.split('\n'), handleParagraph``
On the whole, using more abstract (or ‘higher level’) constructs results in more information and less noise: The code in `sum` reads ‘for each number in numbers add that number to the total’, instead of… ‘there is this variable that starts at zero, and it counts upward to the length of the array called numbers, and for every value of this variable we look up the corresponding element in the array and add this to the total’.
##### ○•○
What `forEach` does is take an algorithm, in this case ‘going over an array’, and abstract it. The ‘gaps’ in the algorithm, in this case, what to do for each of these elements, are filled by functions which are passed to the algorithm function.
Functions that operate on other functions are called higher-order functions. By operating on functions, they can talk about actions on a whole new level. The `makeAddFunction` function from Functions↑ is also a higher-order function. Instead of taking a function value as an argument, it produces a new function.
Higher-order functions can be used to generalise many algorithms that regular functions can not easily describe. When you have a repertoire of these functions at your disposal, it can help you think about your code in a clearer way: Instead of a messy set of variables and loops, you can decompose algorithms into a combination of a few fundamental algorithms, which are invoked by name, and do not have to be typed out again and again.
Being able to write what we want to do instead of how we do it means we are working at a higher level of abstraction. In practice, this means shorter, clearer, and more pleasant code.
##### ○•○
Another useful type of higher-order function modifies the function value it is given:
```coffeescript
negate = (func) ->
  (x) -> not func x

isNotNaN = negate isNaN
show isNotNaN NaN
```
The function returned by `negate` feeds the argument it is given to the original function `func`, and then negates the result. But what if the function you want to negate takes more than one argument? You can get access to any arguments passed to a function with the `arguments` array, but how do you call a function when you do not know how many arguments you have?
Functions22 have a method called `apply`, which is used for situations like this. It takes two arguments. The role of the first argument will be discussed in Object Orientation↓, for now we just use `null` there. The second argument is an array containing the arguments that the function must be applied to.
```coffeescript
show Math.min.apply null, [5, 6]

negate = (func) ->
  -> not func.apply null, arguments

morethan = (x, y) -> x > y
lessthan = negate morethan
show lessthan 5, 7
```
Now that you know the underlying `arguments` mechanism, remember that you can also use splats…
```coffeescript
show Math.min [5, 6]...

negate = (func) ->
  (args...) -> not func args...

morethan = (x, y) -> x > y
lessthan = negate morethan
show lessthan 5, 7
```
##### ○•○
Let us look at a few more basic algorithms related to arrays. The `sum` function is really a variant of an algorithm which is usually called `reduce` or `foldl`:
```coffeescript
reduce = (array, combine, base) ->
  forEach array, (element) ->
    base = combine base, element
  base

add = (a, b) -> a + b

sum = (numbers) -> reduce numbers, add, 0

show sum [1, 10, 100]
```
`reduce` combines an array into a single value by repeatedly using a function that combines an element of the array with a base value. This is exactly what `sum` did, so it can be made shorter by using `reduce`… except that addition is an operator and not a function in CoffeeScript, so we first had to put it into a function.
#### Exercise 21
Write a function `countZeroes`, which takes an array of numbers as its argument and returns the amount of zeroes that occur in it. Use `reduce`.
Then, write the higher-order function `count`, which takes an array and a test function as arguments, and returns the amount of elements in the array for which the test function returned `true`. Re-implement `countZeroes` using this function.
``# Compose a solution here``
##### View Solution
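One possible solution, as a sketch; the book’s own solution may differ in its details:

```coffeescript
# countZeroes with reduce: add one to the running total
# for every zero encountered.
countZeroes = (array) ->
  counter = (total, element) ->
    if element is 0 then total + 1 else total
  reduce array, counter, 0

show countZeroes [1, 0, 2, 0, 0]

# The higher-order count, and countZeroes re-implemented with it.
count = (array, test) ->
  reduce array, ((total, element) -> total + (if test element then 1 else 0)), 0

countZeroes = (array) ->
  count array, (element) -> element is 0

show countZeroes [1, 0, 2, 0, 0]
```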
One other generally useful ‘fundamental algorithm’ related to arrays is called `map`. It goes over an array, applying a function to every element, just like `forEach`. But instead of discarding the values returned by the function, it builds up a new array from these values.
```coffeescript
map = (array, func) ->
  result = []
  forEach array, (element) ->
    result.push func element
  result

show map [0.01, 2, 9.89, Math.PI], Math.round
```
Note that the last argument is called `func`, not `function`; this is because `function` is a keyword and thus not a valid variable name. And then:
```coffeescript
# Leave out result since forEach already returns it
map = (array, func) ->
  forEach array, (element) -> func element

# Leave out func arguments
map = (array, func) -> forEach array, func

# Leave out forEach arguments
map = forEach

show map [0.01, 2, 9.89, Math.PI], Math.round
```
The reduction shows how nicely functions and expressions can be used to shorten a function. If we were concerned with the performance of `forEach` then inserting a `return` at the end of its definition would prevent it from collecting the results of the `for` comprehension and we would have to provide an implementation in `map`.
##### ○•○
There once was, living in the deep mountain forests of Transylvania, a recluse. Most of the time, he just wandered around his mountain, talking to trees and laughing with birds. But now and then, when the pouring rain trapped him in his little hut, and the howling wind made him feel unbearably small, the recluse felt an urge to write something, wanted to pour some thoughts out onto paper, where they could maybe grow bigger than he himself was.
After failing miserably at poetry, fiction, and philosophy, the recluse finally decided to write a technical book. In his youth, he had done some computer programming, and he figured that if he could just write a good book about that, fame and recognition would surely follow.
So he wrote. At first he used fragments of tree bark, but that turned out not to be very practical. He went down to the nearest village and bought himself a laptop computer. After a few chapters, he realised he wanted to put the book in HTML format, in order to put it on his web-page…
Are you familiar with HTML? It is the method used to add mark-up to pages on the web, and we will be using it a few times in this book, so it would be nice if you know how it works, at least generally. If you are a good student, you could go search the web for a good introduction to HTML now, and come back here when you have read it. Most of you probably are lousy students, so I will just give a short explanation and hope it is enough.
HTML stands for ‘HyperText Mark-up Language’. An HTML document is all text. Because it must be able to express the structure of this text, information about which text is a heading, which text is purple, and so on, a few characters have a special meaning, somewhat like backslashes in CoffeeScript strings. The ‘less than’ and ‘greater than’ characters are used to create ‘tags’. A tag gives extra information about the text in the document. It can stand on its own, for example to mark the place where a picture should appear in the page, or it can contain text and other tags, for example when it marks the start and end of a paragraph.
Some tags are compulsory; a whole HTML document must always be contained in between `html` tags. The HTML version is specified on the first line with the document type, `DOCTYPE`, so browsers can parse and render it correctly. Here is an example of an HTML5 document:
```html
<!DOCTYPE HTML>
<html>
  <head>
    <meta charset="utf-8"/>
    <title>A quote</title>
  </head>
  <body>
    <h1>A quote</h1>
    <blockquote>
      <p>The connection between the language in which we
      think/program and the problems and solutions we can
      imagine is very close. For this reason restricting
      language features with the intent of eliminating
      programmer errors is at best dangerous.</p>
      <p>-- Bjarne Stroustrup</p>
    </blockquote>
    <p>Mr. Stroustrup is the inventor of the C++ programming
    language, but quite an insightful person nevertheless.</p>
    <p>Also, here is a picture of an ostrich:</p>
    <img src="../img/ostrich.jpg"/>
  </body>
</html>
```
Elements that contain text or other tags are first opened with `<tagname>`, and afterwards finished with `</tagname>`. The `html` element always contains two children: `head` and `body`. The first contains information about the document, the second contains the actual document.
Most tag names are cryptic abbreviations. `h1` stands for ‘heading 1’, the biggest kind of heading. There are also `h2` to `h6` for successively smaller headings. `p` means ‘paragraph’, and `img` stands for ‘image’. The `img` element does not contain any text or other tags, but it does have some extra information, `src="../img/ostrich.jpg"`, which is called an ‘attribute’. In this case, it contains information about the image file that should be shown here.
Because `<` and `>` have a special meaning in HTML documents, they can not be written directly in the text of the document. If you want to say ‘`5 < 10`’ in an HTML document, you have to write ‘`5 &lt; 10`’, where ‘`lt`’ stands for ‘less than’. ‘`&gt;`’ is used for ‘`>`’, and because these codes give the ampersand character a special meaning, a plain ‘`&`’ is written as ‘`&amp;`’.
Now, those are only the bare basics of HTML, but they should be enough to make it through this chapter, and later chapters that deal with HTML documents, without getting entirely confused.
##### ○•○
The interactive environment has a function `showDocument` that can be used to look at HTML documents. If you have a webpage in a string variable then you can display it with: `showDocument variableName, width, height`. You can also give it a document in a string directly.23
```coffeescript
# You could access a picture with its URL
imageSource = 'http://autotelicum.github.com/Smooth-CoffeeScript'
linkOstrich = "#{imageSource}/img/ostrich.jpg"
# But I will use a predefined image to avoid a server round-trip
showDocument """
<!DOCTYPE HTML>
<html>
  <head>
    <meta charset="utf-8"/>
    <title>A quote</title>
  </head>
  <body>
    <h1>A quote</h1>
    <blockquote>
      <p>The connection between the language in which we
      think/program and the problems and solutions we can
      imagine is very close. For this reason restricting
      language features with the intent of eliminating
      programmer errors is at best dangerous.</p>
      <p>-- Bjarne Stroustrup</p>
    </blockquote>
    <p>Mr. Stroustrup is the inventor of the C++ programming
    language, but quite an insightful person nevertheless.</p>
    <p>Also, here is a picture of an ostrich:</p>
    <img src="#{ostrich}"/>
  </body>
</html>
""", 565, 420
```
##### ○•○
So, picking up the story again, the recluse wanted to have his book in HTML format. At first he just wrote all the tags directly into his manuscript, but typing all those less-than and greater-than signs made his fingers hurt, and he constantly forgot to write `&amp;` when he needed an `&`. This gave him a headache. Next, he tried to write the book in Microsoft Word, and then save it as HTML. But the HTML that came out of that was fifteen times bigger and more complicated than it had to be. And besides, Microsoft Word gave him a headache.
The solution that he eventually came up with was this: He would write the book as plain text, following some simple rules about the way paragraphs were separated and the way headings looked. Then, he would write a program to convert this text into precisely the HTML that he wanted.
The rules are as follows:
1. Paragraphs are separated by blank lines.
2. A paragraph that starts with a ‘%’ symbol is a header. The more ‘%’ symbols, the smaller the header.
3. Inside paragraphs, pieces of text can be emphasised by putting them between asterisks.
4. Footnotes are written between braces.
##### ○•○
After he had struggled painfully with his book for six months, the recluse had still only finished a few paragraphs. At this point, his hut was struck by lightning, killing him, and forever putting his writing ambitions to rest. From the charred remains of his laptop, I could recover the following manuscript, which I have placed in a global string variable, `recluseFile`, so you can use it as a string value in the following.
```coffeescript
recluseFile = """
% The Book of Programming

%% The Two Aspects

Below the surface of the machine, the program moves.
Without effort, it expands and contracts. In great
harmony, electrons scatter and regroup. The forms on the
monitor are but ripples on the water. The essence stays
invisibly below.

When the creators built the machine, they put in the
processor and the memory. From these arise the two aspects
of the program.

The aspect of the processor is the active substance. It is
called Control. The aspect of the memory is the passive
substance. It is called Data.

Data is made of merely bits, yet it takes complex forms.
Control consists only of simple instructions, yet it
performs difficult tasks. From the small and trivial, the
large and complex arise.

The program source is Data. Control arises from it. The
Control proceeds to create new Data. The one is born from
the other, the other is useless without the one. This is
the harmonious cycle of Data and Control.

Of themselves, Data and Control are without structure. The
programmers of old moulded their programs out of this raw
substance. Over time, the amorphous Data has crystallised
into data types, and the chaotic Control was restricted
into control structures and functions.

%% Short Sayings

When a student asked Fu-Tzu about the nature of the cycle
of Data and Control, Fu-Tzu replied 'Think of a compiler,
compiling itself.'

A student asked 'The programmers of old used only simple
machines and no programming languages, yet they made
beautiful programs. Why do we use complicated machines
and programming languages?'. Fu-Tzu replied 'The builders
of old used only sticks and clay, yet they made beautiful
huts.'

A hermit spent ten years writing a program. 'My program can
compute the motion of the stars on a 286-computer running
MS DOS', he proudly announced. 'Nobody owns a 286-computer
or uses MS DOS anymore.', Fu-Tzu responded.

Fu-Tzu had written a small program that was full of global
state and dubious shortcuts. Reading it, a student asked
'You warned us against these techniques, yet I find them in
your program. How can this be?' Fu-Tzu said 'There is no
need to fetch a water hose when the house is not on fire.'
{This is not to be read as an encouragement of sloppy
programming, but rather as a warning against neurotic
adherence to rules of thumb.}

%% Wisdom

A student was complaining about digital numbers. 'When I
take the root of two and then square it again, the result
is already inaccurate!'. Overhearing him, Fu-Tzu laughed.
'Here is a sheet of paper. Write down the precise value of
the square root of two for me.'

Fu-Tzu said 'When you cut against the grain of the wood,
much strength is needed. When you program against the grain
of a problem, much code is needed.'

Tzu-li and Tzu-ssu were boasting about the size of their
latest programs. 'Two-hundred thousand lines', said Tzu-li,
'not counting comments!'. 'Psah', said Tzu-ssu, 'mine is
almost a *million* lines already.' Fu-Tzu said 'My best
program has five hundred lines.' Hearing this, Tzu-li and
Tzu-ssu were enlightened.

A student had been sitting motionless behind his computer
for hours, frowning darkly. He was trying to write a
beautiful solution to a difficult problem, but could not
find the right approach. Fu-Tzu hit him on the back of his
head and shouted '*Type something!*' The student started
writing an ugly solution. After he had finished, he
suddenly understood the beautiful solution.

%% Progression

A beginning programmer writes his programs like an ant
builds her hill, one piece at a time, without thought for
the bigger structure. His programs will be like loose sand.
They may stand for a while, but growing too big they fall
apart{Referring to the danger of internal inconsistency
and duplicated structure in unorganised code.}.

Realising this problem, the programmer will start to spend
a lot of time thinking about structure. His programs will
be rigidly structured, like rock sculptures. They are solid,
but when they must change, violence must be done to them
{Referring to the fact that structure tends to put
restrictions on the evolution of a program.}.

The master programmer knows when to apply structure and
when to leave things in their simple form. His programs
are like clay, solid yet malleable.

%% Language

When a programming language is created, it is given
syntax and semantics. The syntax describes the form of
the program, the semantics describe the function. When the
syntax is beautiful and the semantics are clear, the
program will be like a stately tree. When the syntax is
clumsy and the semantics confusing, the program will be
like a bramble bush.

Tzu-ssu was asked to write a program in the language
called Java, which takes a very primitive approach to
functions. Every morning, as he sat down in front of his
computer, he started complaining. All day he cursed,
blaming the language for all that went wrong. Fu-Tzu
listened for a while, and then reproached him, saying
'Every language has its own way. Follow its form, do not
try to program as if you were using another language.'
"""

show # 'The End'
```
##### ○•○
To honour the memory of our good recluse, I would like to finish his HTML-generating program for him. A good approach to this problem goes like this:
1. Split the file into paragraphs by cutting it at every empty line.
2. Remove the ‘%’ characters from header paragraphs and mark them as headers.
3. Process the text of the paragraphs themselves, splitting them into normal parts, emphasised parts, and footnotes.
4. Move all the footnotes to the bottom of the document, leaving numbers24 in their place.
5. Wrap each piece into the correct HTML tags.
6. Combine everything into a single HTML document.
This approach does not allow footnotes inside emphasised text, or vice versa. This is kind of arbitrary, but helps keep the example code simple. If, at the end of the chapter, you feel like an extra challenge, you can try to revise the program to support ‘nested’ mark-up.
##### ○•○
Step 1 of the algorithm is trivial. A blank line is what you get when you have two newlines in a row, and if you remember the `split` method that strings have, which we saw in Data Structures↑, you will realise that this will do the trick:
```coffeescript
paragraphs = recluseFile.split "\n\n"
show "Found #{paragraphs.length} paragraphs."
```
#### Exercise 22
Write a function `processParagraph` that, when given a paragraph string as its argument, checks whether this paragraph is a header. If it is, it strips off the ‘%’ characters and counts their number. Then, it returns an object with two properties, `content`, which contains the text inside the paragraph, and `type`, which contains the tag that this paragraph must be wrapped in: `'p'` for regular paragraphs, `'h1'` for headers with one ‘%’, and `'hX'` for headers with `X` ‘%’ characters.
``# Compose a solution here``
##### View Solution
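One possible solution, as a sketch; a version of this function, extended to also split the paragraph text, appears a little further on:

```coffeescript
# Count and strip leading '%' characters, then classify.
processParagraph = (paragraph) ->
  header = 0
  while paragraph[0] is '%'
    paragraph = paragraph.slice 1
    header++

  type: if header is 0 then 'p' else 'h' + header,
  content: paragraph

show processParagraph recluseFile.split('\n\n')[0]
```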
This is where we can try out the `map` function we saw earlier.
```coffeescript
paragraphs = map recluseFile.split('\n\n'), processParagraph
show paragraph for paragraph in paragraphs[0..2]
```
And bang, we have an array of nicely categorised paragraph objects. We are getting ahead of ourselves though, we forgot step 3 of the algorithm:
Process the text of the paragraphs themselves, splitting them into normal parts, emphasised parts, and footnotes.
Which can be decomposed into:
1. If the paragraph starts with an asterisk, take off the emphasised part and store it.
2. If the paragraph starts with an opening brace, take off the footnote and store it.
3. Otherwise, take off the part until the first emphasised part or footnote, or until the end of the string, and store it as normal text.
4. If there is anything left in the paragraph, start at 1 again.
#### Exercise 23
Build a function `splitParagraph` which, given a paragraph string, returns an array of paragraph fragments. Think of a good way to represent the fragments, they need `type` and `content` properties.
The method `indexOf`, which searches for a character or sub-string in a string and returns its position, or `-1` if not found, will probably be useful in some way here.
This is a tricky algorithm, and there are many not-quite-correct or way-too-long ways to describe it. If you run into problems, just think about it for a minute. Try to write inner functions that perform the smaller actions that make up the algorithm.
``# Compose a solution here``
##### View Solution
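One possible solution, as a sketch. The fragments here are objects with `type` and `content` properties, matching the way fragments are used below; the type names `'emphasised'` and `'normal'` are my choice, and the book’s own solution may name them differently:

```coffeescript
splitParagraph = (text) ->
  # Position of a character, or the end of the string if absent.
  indexOrEnd = (character) ->
    index = text.indexOf character
    if index is -1 then text.length else index

  # Take off plain text up to the next '*' or '{'.
  takeNormal = ->
    end = Math.min indexOrEnd('*'), indexOrEnd('{')
    part = text[0...end]
    text = text[end...]
    part

  # Take off a piece delimited by the given closing character.
  takeUpTo = (character) ->
    end = text.indexOf character, 1
    if end is -1
      throw new Error "Missing closing '#{character}'"
    part = text[1...end]
    text = text[(end + 1)...]
    part

  fragments = []
  while text.length > 0
    if text[0] is '*'
      fragments.push type: 'emphasised', content: takeUpTo '*'
    else if text[0] is '{'
      fragments.push type: 'footnote', content: takeUpTo '}'
    else
      fragments.push type: 'normal', content: takeNormal()
  fragments
```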
We can now extend `processParagraph` to also split the text inside the paragraphs; my version can be modified like this:
```coffeescript
processParagraph = (paragraph) ->
  header = 0
  while paragraph[0] is '%'
    paragraph = paragraph.slice 1
    header++

  type: if header is 0 then 'p' else 'h' + header,
  content: splitParagraph paragraph

# Adhoc test
paragraphs = map recluseFile.split('\n\n'), processParagraph
show paragraph for paragraph in paragraphs[0..2]
```
Mapping that over the array of paragraphs gives us an array of paragraph objects, which in turn contain arrays of fragment objects. The next thing to do is to take out the footnotes, and put references to them in their place. Something like this:
```coffeescript
extractFootnotes = (paragraphs) ->
  footnotes = []
  currentNote = 0

  replaceFootnote = (fragment) ->
    if fragment.type is 'footnote'
      ++currentNote
      footnotes.push fragment
      fragment.number = currentNote
      type: 'reference', number: currentNote
    else
      fragment

  forEach paragraphs, (paragraph) ->
    paragraph.content = map paragraph.content, replaceFootnote

  footnotes

show 'Footnotes from the recluse:'
show extractFootnotes paragraphs
show paragraphs[20]
```
The `replaceFootnote` function is called on every fragment. When it gets a fragment that should stay where it is, it just returns it, but when it gets a footnote, it stores this footnote in the `footnotes` array, and returns a reference to it instead. In the process, every footnote and reference is also numbered.
##### ○•○
That gives us enough tools to extract the information we need from the file. All that is left now is generating the correct HTML.
A lot of people think that concatenating strings is a great way to produce HTML. When they need a link to, for example, a site where you can play the game of Go, they will do:
```coffeescript
url = "http://www.gokgs.com/"
text = "Play Go!"
linkText = "<a href=\"#{url}\">#{text}</a>"
show _.escape linkText
# Without the _.escape it becomes a link
show linkText
```
(Where `a` is the tag used to create links in HTML documents.) … Not only is this clumsy, but when the string `text` happens to include an angle bracket or an ampersand, it is also wrong. Weird things will happen on your website, and you will look embarrassingly amateurish. We would not want that to happen. A few simple HTML-generating functions are easy to write. So let us write them.
##### ○•○
The secret to successful HTML generation is to treat your HTML document as a data structure instead of a flat piece of text. CoffeeScript’s objects provide a very easy way to model this:
```coffeescript
linkObject =
  name: 'a'
  attributes:
    href: 'http://www.gokgs.com/'
  content: ['Play Go!']
```
Each HTML element contains a `name` property, giving the name of the tag it represents. When it has attributes, it also contains an `attributes` property, which contains an object in which the attributes are stored. When it has content, there is a `content` property, containing an array of other elements contained in this element. Strings play the role of pieces of text in our HTML document, so the array `['Play Go!']` means that this link has only one element inside it, which is a simple piece of text.
Typing in these objects directly is clumsy, but we do not have to do that. We provide a shortcut function to do this for us:
```coffeescript
tag = (name, content, attributes) ->
  name: name
  attributes: attributes
  content: content
```
Note that, since we allow the `attributes` and `content` of an element to be undefined if they are not applicable, the second and third argument to this function can be left off when they are not needed. `tag` is still rather primitive, so we write shortcuts for common types of elements, such as links, or the outer structure of a simple document:
```coffeescript
link = (target, text) ->
  tag "a", [text], href: target

show link "http://www.gokgs.com/", "Play Go!"

htmlDoc = (title, bodyContent) ->
  tag "html", [tag("head", [tag "title", [title]]),
    tag "body", bodyContent]

show htmlDoc "Quote", "In his house at R'lyeh " +
  "dead Cthulu waits dreaming."
```
#### Exercise 24
Looking back at the example HTML document if necessary, write an `image` function which, when given the location of an image file, will create an `img` HTML element.
``# Compose a solution here``
##### View Solution
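One possible solution, as a sketch: an `img` element has a `src` attribute and no content, so the second argument to `tag` is left undefined:

```coffeescript
image = (location) ->
  tag 'img', undefined, src: location

show image '../img/ostrich.jpg'
```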
When we have created a document, it will have to be reduced to a string. But building this string from the data structures we have been producing is very straightforward. The important thing is to remember to transform the special characters in the text of our document…
```coffeescript
escapeHTML = (text) ->
  replacements = [[/&/g, '&amp;']
                  [/"/g, '&quot;']
                  [/</g, '&lt;']
                  [/>/g, '&gt;']]
  forEach replacements, (replace) ->
    text = text?.replace replace[0], replace[1]
  text

show escapeHTML '< " & " >'
```
The `replace` method of strings creates a new string in which all occurrences of the pattern in the first argument are replaced by the second argument, so `'Borobudur'.replace(/r/g, 'k')` gives `'Bokobuduk'`. Do not worry about the pattern syntax here — we will get to that in Regular Expressions↓. The `escapeHTML` function puts the different replacements that have to be made into an array, so that it can loop over them and apply them to the argument one by one.
Double quotes are also replaced, because we will also be using this function for the text inside the attributes of HTML tags. Those will be surrounded by double quotes, and thus must not have any double quotes inside of them.
Calling replace four times means the computer has to go over the whole string four times to check and replace its content. This is not very efficient. If we cared enough, we could write a more complex version of this function, something that resembles the `splitParagraph` function we saw earlier, to go over it only once. For now, we are too lazy for this. Again, Regular Expressions↓ shows a much better way to do this.
##### ○•○
To turn an HTML element object into a string, we can use a recursive function like this:
```coffeescript
renderHTML = (element) ->
  pieces = []

  renderAttributes = (attributes) ->
    result = []
    if attributes
      for name of attributes
        result.push ' ' + name + '="' + escapeHTML(attributes[name]) + '"'
    result.join ''

  render = (element) ->
    # Text node
    if typeof element is 'string'
      pieces.push escapeHTML element
    # Empty tag
    else if not element.content or element.content.length is 0
      pieces.push '<' + element.name + renderAttributes(element.attributes) + '/>'
    # Tag with content
    else
      pieces.push '<' + element.name + renderAttributes(element.attributes) + '>'
      forEach element.content, render
      pieces.push '</' + element.name + '>'

  render element
  pieces.join ''
```
Note the `of` loop that extracts the properties from a CoffeeScript object in order to make HTML tag attributes out of them. Also note that in two places, arrays are being used to accumulate strings, which are then joined into a single result string. Why did I not just start with an empty string and then add the content to it with the `+=` operator?
It turns out that creating new strings, especially big strings, is quite a lot of work. Remember that CoffeeScript string values never change. If you concatenate something to them, a new string is created, the old ones stay intact. If we build up a big string by concatenating lots of little strings, new strings have to be created at every step, only to be thrown away when the next piece is concatenated to them. If, on the other hand, we store all the little strings in an array and then join them, only one big string has to be created.
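To make the difference concrete, here is a small sketch of the two ways of building a string; both produce the same result, but the first creates a new intermediate string at every step, while the second creates only the final one:

```coffeescript
buildWithConcat = (parts) ->
  text = ''
  text += part for part in parts
  text

buildWithJoin = (parts) ->
  pieces = []
  pieces.push part for part in parts
  pieces.join ''

show buildWithConcat ['one', 'two', 'three']
show buildWithJoin ['one', 'two', 'three']
```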
##### ○•○
So, let us try out this HTML generating system…
``show renderHTML link 'http://www.nedroid.com', 'Drawings!'``
That seems to work.
```coffeescript
body = [tag('h1', ['The Test']),
  tag('p', ['Here is a paragraph ' + 'and an image...']),
  image(ostrich)]
doc = htmlDoc 'The Test', body
show renderHTML doc
```
Now, I should probably warn you that this approach is not perfect. What it actually renders is XML, which is similar to HTML, but more structured. In simple cases, such as the above, this does not cause any problems. However, there are some things, which are correct XML, but not proper HTML, and these might confuse a browser that is trying to show the documents we create. For example, if you have an empty `script` tag (used to put JavaScript into a page) in your document, browsers will not realise that it is empty and think that everything after it is JavaScript. (In this case, the problem can be fixed by putting a single space inside of the tag, so that it is no longer empty, and gets a proper closing tag.)
#### Exercise 25
Write a function `renderFragment`, and use that to implement another function `renderParagraph`, which takes a paragraph object (with the footnotes already filtered out), and produces the correct HTML element (which might be a paragraph or a header, depending on the `type` property of the paragraph object).
This function might come in useful for rendering the footnote references:
```coffeescript
footnote = (number) ->
  tag 'sup', [link '#footnote' + number, String number]

show footnote(42), 3
```
A `sup` tag will show its content as ‘superscript’, which means it will be smaller and a little higher than other text. The target of the link will be something like `'#footnote1'`. Links that contain a ‘#’ character refer to ‘anchors’ within a page, and in this case we will use them to make it so that clicking on the footnote link will take the reader to the bottom of the page, where the footnotes live.
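For illustration, rendering such a footnote reference with the `renderHTML` function from above produces the expected anchor link:

```coffeescript
show renderHTML footnote 1
# Produces: <sup><a href="#footnote1">1</a></sup>
```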
The tag to render emphasised fragments with is `em`, and normal text can be rendered without any extra tags.
``# Compose a solution here``
##### View Solution
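One possible solution, as a sketch; it assumes the fragment type names `'emphasised'` and `'normal'` from the `splitParagraph` sketch earlier, plus the `'reference'` type produced by `extractFootnotes`:

```coffeescript
renderFragment = (fragment) ->
  switch fragment.type
    when 'reference' then footnote fragment.number
    when 'emphasised' then tag 'em', [fragment.content]
    when 'normal' then fragment.content

renderParagraph = (paragraph) ->
  tag paragraph.type, map paragraph.content, renderFragment

show renderHTML renderParagraph paragraphs[1]
```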
We are almost finished. The only thing that we do not have a rendering function for yet are the footnotes. To make the `'#footnote1'` links work, an anchor must be included with every footnote. In HTML, an anchor is specified with an `a` element, which is also used for links. In this case, it needs a `name` attribute, instead of an `href`.
```coffeescript
renderFootnote = (footnote) ->
  anchor = tag "a", [], name: "footnote" + footnote.number
  number = "[#{footnote.number}] "
  tag "p", [tag("small", [anchor, number, footnote.content])]
```
Here, then, is the function which, when given a file in the correct format and a document title, returns an HTML document:
```coffeescript
renderFile = (file, title) ->
  paragraphs = map file.split('\n\n'), processParagraph
  footnotes = map extractFootnotes(paragraphs), renderFootnote
  body = map paragraphs, renderParagraph
  body = body.concat footnotes
  renderHTML htmlDoc title, body

runOnDemand ->
  page = renderFile recluseFile, 'The Book of Programming'
  showDocument page, 565, 500
```
The `concat` method of an array can be used to concatenate another array to it, similar to what the `+` operator does with strings.
##### ○•○
In the chapters after this one, elementary higher-order functions like `map` and `reduce` will always be available from the Underscore library and will be used by code examples. Now and then, a new useful tool is explained and added to this. In Modularity↓, we develop a more structured approach to this set of ‘basic’ functions.
##### ○•○
In some functional programming languages operators are functions; for example, in Pure you can write `foldl (+) 0 (1..10);`. The same in CoffeeScript is `reduce [1..10], ((a, b) -> a + b), 0`. A way to shorten this is by defining an object that is indexed by an operator in a string:
```coffeescript
op = {
  '+': (a, b) -> a + b
  '==': (a, b) -> a == b
  '!': (a) -> !a
  # and so on
}
show reduce [1..10], op['+'], 0
```
The list of operators is quite long, so it is questionable whether such a data structure improves readability compared to:
```coffeescript
add = (a, b) -> a + b
show reduce [1..10], add, 0
```
And what if we need something like `equals` or `makeAddFunction`, in which one of the arguments already has a value? In that case we are back to writing a new function again.
For cases like that, something called ‘partial application’ is useful. You want to create a new function that already knows some of its arguments, and treats any additional arguments it is passed as coming after these fixed arguments. A simple version25 of this could be:
```coffeescript
partial = (func, a...) ->
  (b...) -> func a..., b...

f = (a, b, c, d) -> show "#{a} #{b} #{c} #{d}"
g = partial f, 1, 2
g 3, 4
```
The return value of `partial` is a function where the `a...` arguments have been applied. When the returned function is called the `b...` arguments are appended to the arguments of `func`.
```coffeescript
equals10 = partial op['=='], 10
show map [1, 10, 100], equals10
```
Unlike traditional functional definitions, Underscore defines the order of its arguments as array before action. That means we can not simply say:
```coffeescript
square = (x) -> x * x

try
  show map [[10, 100], [12, 16], [0, 1]], partial map, square # Incorrect
catch error
  show "Error: #{error.message}"
```
This fails because the `square` function needs to be the second argument of the inner `map`. But we can define another partial application function that reverses its arguments:
```coffeescript
partialReverse = (func, a) ->
  (b) -> func b, a

mapSquared = partialReverse map, square
show map [[10, 100], [12, 16], [0, 1]], mapSquared
```
However it is again worthwhile to consider whether the intent of the program is clearer when the functions are defined directly:
``show map [[10, 100], [12, 16], [0, 1]], (sublist) -> map sublist, (x) -> x * x``
##### ○•○
A trick that can be useful when you want to combine functions is function composition. At the start of this chapter I showed a function `negate`, which applies the boolean not operator to the result of calling a function:
``negate = (func) -> (args...) -> not func args...``
This is a special case of a general pattern: call function A, and then apply function B to the result. Composition is a common concept in mathematics. It can be caught in a higher-order function like this:
```coffeescript
compose = (func1, func2) ->
  (args...) -> func1 func2 args...

isUndefined = (value) -> value is undefined
isDefined = compose ((v) -> not v), isUndefined
show 'isDefined Math.PI = ' + isDefined Math.PI
show 'isDefined Math.PIE = ' + isDefined Math.PIE
```
In `isDefined` we are defining a new function without naming it. This can be useful when you need to create a simple function to give to, for example, `map` or `reduce`. However, when a function becomes more complex than this example, it is usually shorter and clearer to define it by itself and name it.
### Searching
This chapter introduces new functional programming concepts and their use in problem solving. We will go through the solution of two problems, discussing some interesting algorithms and techniques along the way.
The Underscore library is used to make it pleasant to work with the functional abstractions. Its functions are not directly available; they need to be written with a `_.` in front of them. See the interactive Underscore reference for details and examples on all the functions.
```coffeescript
# List functions in Underscore
runOnDemand -> show _.functions _
```
The functions in Underscore could be made available as global functions, which would make them more convenient to use. This was done in the static editions of this book and works fine in isolated environments. However, as is explained in Modularity↓, many global functions can lead to interference between them. So in this edition Underscore has to be qualified in the same way as when you use it to build reusable libraries or when you work in a shared project.
##### ○•○
Let me introduce our first problem. Take a look at this map. It shows Hiva Oa, a small tropical island in the Pacific Ocean.
The black lines are roads, and the numbers next to them are the lengths of these roads. Imagine we need a program that finds the shortest route between two points on Hiva Oa. How could we approach that? Think about this for a moment.
No really. Do not just steamroll on to the next paragraph. Try to seriously think of some ways you could do this, and consider the issues you would come up against. When reading a technical book, it is way too easy to just zoom over the text, nod solemnly, and promptly forget what you have read. If you make a sincere effort to solve a problem, it becomes your problem, and its solution will be more meaningful.
##### ○•○
The first aspect of this problem is, again, representing our data. The information in the picture does not mean much to our computer. We could try writing a program that looks at the map and extracts the information in it… but that can get complicated. If we had twenty-thousand maps to interpret, this would be a good idea; in this case we will do the interpretation ourselves and transcribe the map into a more computer-friendly format.
What does our program need to know? It has to be able to look up which locations are connected, and how long the roads between them are. The places and roads on the island form a graph, as mathematicians call it. There are many ways to store graphs. A simple possibility is to just store an array of road objects, each of which contains properties naming its two endpoints and its length…
```coffeescript
show roads = [
  { point1: 'Point Kiukiu', point2: 'Hanaiapa', length: 19 }
  { point1: 'Point Kiukiu', point2: 'Mt Feani', length: 15 }
] # and so on
```
However, it turns out that the program, as it is working out a route, will very often need to get a list of all the roads that start at a certain location, like a person standing on a crossroads will look at a signpost and read “Hanaiapa: 19km, Mount Feani: 15km”. It would be nice if this was easy (and quick) to do.
With the representation given above, we have to sift through the whole list of roads, picking out the relevant ones, every time we want this signpost list. A better approach would be to store this list directly. For example, use an object that associates place-names with signpost lists:
```coffeescript
show roads =
  'Point Kiukiu': [
    {to: 'Hanaiapa', distance: 19}
    {to: 'Mt Feani', distance: 15}
    {to: 'Taaoa', distance: 15}
  ]
  'Taaoa': [ ]
  # et cetera
```
When we have this object, getting the roads that leave from Point Kiukiu is just a matter of looking at `roads['Point Kiukiu']`.
##### ○•○
However, this new representation does contain duplicate information: The road between A and B is listed both under A and under B. The first representation was already a lot of work to type in; this one is even worse.
Fortunately, we have at our command the computer’s talent for repetitive work. We can specify the roads once, and have the correct data structure be generated by the computer. First, initialise an empty object called `roads`, and write a function `makeRoad`:
```coffeescript
@roads = {}

makeRoad = (from, to, length) ->
  addRoad = (from, to) ->
    roads[from] = [] if not (from of roads)
    roads[from].push to: to, distance: length
  addRoad from, to
  addRoad to, from
```
Nice, huh? Notice how the inner function, `addRoad`, uses the same names (`from`, `to`) for its parameters as the outer function. These will not interfere: inside `addRoad` they refer to `addRoad`’s parameters, and outside it they refer to `makeRoad`’s parameters.
The `if` statement in `addRoad` makes sure that there is an array of destinations associated with the location named by `from`, if there is not one it puts in an empty array. This way, the next line can assume there is such an array and safely push the new road onto it.
Now the map information looks like this:
```coffeescript
makeRoad 'Point Kiukiu', 'Hanaiapa', 19
makeRoad 'Point Kiukiu', 'Mt Feani', 15
makeRoad 'Point Kiukiu', 'Taaoa', 15
show roads
```
#### Exercise 26
In the above description, the string `'Point Kiukiu'` still occurs three times in a row. We could make our description even more succinct by allowing multiple roads to be specified in one line.
Write a function `makeRoads` that takes any uneven number of arguments. The first argument is always the starting point of the roads, and every pair of arguments after that gives an ending point and a distance.
Do not duplicate the functionality of `makeRoad`, but have `makeRoads` call `makeRoad` to do the actual road-making.
``# Compose a solution here``
##### View Solution
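One possible solution, as a sketch: take the starting point off the front, then walk the remaining arguments two at a time:

```coffeescript
makeRoads = (start, rest...) ->
  for i in [0...rest.length] by 2
    makeRoad start, rest[i], rest[i + 1]
  return
```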
You can verify the solution with this code that builds a data structure matching the map of Hiva Oa.
```coffeescript
@roads = {}
makeRoads 'Point Kiukiu', 'Hanaiapa', 19, 'Mt Feani', 15, 'Taaoa', 15
makeRoads 'Airport', 'Hanaiapa', 6, 'Mt Feani', 5, 'Atuona', 4, 'Mt Ootua', 11
makeRoads 'Mt Temetiu', 'Mt Feani', 8, 'Taaoa', 4
makeRoads 'Atuona', 'Taaoa', 3, 'Hanakee pearl lodge', 1
makeRoads 'Cemetery', 'Hanakee pearl lodge', 6, 'Mt Ootua', 5
makeRoads 'Hanapaoa', 'Mt Ootua', 3
makeRoads 'Puamua', 'Mt Ootua', 13, 'Point Teohotepapapa', 14
show 'Roads from the Airport:'
show roads['Airport']
```
We managed to considerably shorten our description of the road information by defining some convenient operations. You could say we expressed the information more succinctly by expanding our vocabulary. Defining a ‘little language’ like this is often a very powerful technique — when, at any time, you find yourself writing repetitive or redundant code, stop and try to come up with a vocabulary that makes it shorter and denser.
Redundant code is not only a bore to write, it is also error-prone, people pay less attention when doing something that does not require them to think. On top of that, repetitive code is hard to change, because structure that is repeated a hundred times has to be changed a hundred times when it turns out to be incorrect or suboptimal.
##### ○•○
If you ran all the pieces of code above, you should now have a variable named `roads` that contains all the roads on the island. When we need the roads starting from a certain place, we could just do `roads[place]`. But then, when someone makes a typo in a place name, which is not unlikely with these names, he will get `undefined` instead of the array he expects, and strange errors will follow. Instead, we will use a function that retrieves the road arrays, and yells at us when we give it an unknown place name:
```coffeescript
roadsFrom = (place) ->
  found = roads[place]
  return found if found?
  throw new Error "No place named '#{place}' found."

try
  show roadsFrom "Hanaiapa"
  show roadsFrom "Hanalapa"
catch error
  show "Oops #{error}"
```
##### ○•○
Here is a first stab at a path-finding algorithm, the gambler’s method:
```coffeescript
gamblerPath = (from, to) ->
  randomInteger = (below) -> Math.floor Math.random() * below
  randomDirection = (from) ->
    options = roadsFrom from
    options[randomInteger(options.length)].to
  path = []
  loop
    path.push from
    break if from is to
    from = randomDirection from
  path

show gamblerPath 'Hanaiapa', 'Mt Feani'
```
At every split in the road, the gambler rolls his dice to decide which road he shall take. If the dice sends him back the way he came, so be it. Sooner or later, he will arrive at his destination, since all places on the island are connected by roads.
The most confusing line is probably the one containing `Math.random`. This function returns a pseudo-random26 number between 0 and 1. Try calling it a few times from the console; it will (most likely) give you a different number every time. The function `randomInteger` multiplies this number by the argument it is given, and rounds the result down with `Math.floor`. Thus, for example, `randomInteger 3` will produce the number `0`, `1`, or `2`.
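If you want to convince yourself, here is `randomInteger` as a stand-alone function with a few samples (the output varies from run to run):

```coffeescript
randomInteger = (below) -> Math.floor Math.random() * below
show (randomInteger 3 for sample in [1..5])
```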
##### ○•○
The gambler’s method is the way to go for those who abhor structure and planning, who desperately search for adventure. We set out to write a program that could find the shortest route between places though, so something else will be needed.
A very straightforward approach to solving such a problem is called ‘generate and test’. It goes like this:
1. Generate all possible routes.
2. In this set, find the shortest one that actually connects the start point to the end point.
Step two is not hard. Step one is a little problematic. If you allow routes with circles in them, there is an infinite number of routes. Of course, routes with circles in them are unlikely to be the shortest route to anywhere, and routes that do not start at the start point do not have to be considered either. For a small graph like Hiva Oa, it should be possible to generate all non-cyclic (circle-free) routes starting from a certain point.
##### ○•○
But first, we will need to expand our vocabulary so we can deal with the problem in a natural way. The words we need exist in CoffeeScript and in the Underscore library, but we will go through how they can be implemented, so it is clear how they function and what they do. The example implementations will be named with an underscore in front of their names to show they are internal and not used in the rest of the book. You can compare with the implementations in the CoffeeScript Underscore example and try all the Underscore functions in the interactive Underscore reference.
The first is a function named `_member`, which is used to determine whether an element is found within an array. The route will be kept as an array of names, and when arriving at a new place, the algorithm could logically call `_member` to check whether we have been at that place already. It could look like this:
```coffeescript
_member = (array, value) ->
  found = false
  array.forEach (element) ->
    if element is value
      found = true
  found

show _member [6, 7, "Bordeaux"], 7
```
However, this will go over the whole array, even if the value is found immediately at the first position. What wastefulness. When using a `for` loop, you can use the `break` statement to jump out of it, but in a `forEach` construct this will not work, because the body of the loop is a function, and `break` statements do not jump out of functions. One solution could be to adjust `forEach` to recognise a certain kind of exceptions as signalling a break. Something like `_forEach` here:
```coffeescript
_break = toString: -> "Break"

_forEach = (array, action) ->
  try
    for element in array
      action element
  catch exception
    if exception isnt _break
      throw exception

show _forEach [1..3], (n) -> n*n
# Which btw could in CoffeeScript be written as
show (i*i for i in [1..3])
```
Now, if the `action` function throws `_break`, `_forEach` will absorb the exception and stop looping. The object stored in the variable `_break` is used purely as a thing to compare with. The only reason I gave it a `toString` property is that this might be useful to figure out what kind of strange value you are dealing with if you somehow end up with a `_break` exception outside of a `_forEach`. Now `_member` can be defined as:
```coffeescript
_member = (array, value) ->
  found = false
  _forEach array, (element) ->
    if element is value
      found = true
      throw _break
  found

show _member [6, 7, "Bordeaux"], 7
```
Of course we could also have defined `_member` without using the `_forEach`:
```coffeescript
_member = (array, value) ->
  found = false
  for element in array
    if element is value
      found = true
      break
  found

show _member [6, 7, "Bordeaux"], 7
```
This function exists in Underscore as `include`, also known as `contains`. But it is such a common operation that it is in fact built into CoffeeScript as the `in` operator (outside of a `for ... in` loop, where `in` has a different meaning). Using `in` is the preferred way to test for array membership.
``show 7 in [6, 7, "Bordeaux"]``
##### ○•○
Having a way to break out of `_forEach` loops can be very useful, but in the case of the `_member` function the result is still rather ugly, because you need to specifically store the result and later return it. We could add yet another kind of exception, `_return`, which can be given a `value` property, and have `_forEach` return this value when such an exception is thrown, but this would be terribly ad-hoc and messy. What we really need is a whole new higher-order function, called `any` (or sometimes `some`). It exists in Underscore under both names. A definition more or less looks like this:
```coffeescript
_any = (array, test) ->
  for element in array
    if test element
      return true
  false

show _any [3, 4, 0, -3, 2, 1], (n) -> n < 0
show _any [3, 4, 0, 2, 1], (n) -> n < 0
# Using Underscore
show _.any [3, 4, 0, -3, 2, 1], (n) -> n < 0
```
```coffeescript
# Redefining member with any
_member = (array, value) ->
  partial = (func, a...) -> (b...) -> func a..., b...
  _.any array, partial ((a, b) -> a is b), value

show _member ["Fear", "Loathing"], "Denial"
show _member ["Fear", "Loathing"], "Loathing"
```
`any` goes over the elements in an array, from left to right, and applies the test function to them. The first time the test returns a true-ish value, `any` returns `true`. If no true-ish value is found, `false` is returned. Calling `any array, test` is more or less equivalent to doing `test(array[0]) or test(array[1]) or ...` etcetera.
##### ○•○
Just like `&&` is the companion of `||`, `any` has a companion called `every`:
```coffeescript
_every = (array, test) ->
  for element in array
    if not test element
      return false
  true

show _every [1, 2, 0, -1], (n) -> n isnt 0
show _every [1, 2, -1], (n) -> n isnt 0
show _.every [1, 2, -1], (n) -> n isnt 0 # Using Underscore
```
##### ○•○
Another function we will need is `flatten`. This function takes an array of arrays, and puts the elements of the arrays together in one big array.
```coffeescript
_flatten = (array) ->
  result = []
  for element in array
    if _.isArray element
      result = result.concat _flatten element
    else
      result.push element
  result

show _flatten [[1], [2, [3, 4]], [5, 6]]
# Using Underscore
show _.flatten [[1], [2, [3, 4]], [5, 6]]
```
#### Exercise 27
Before starting to generate routes, we need one more higher-order function. This one is called `filter` (in Underscore it is also named `select`). Like `map`, it takes a function and an array as arguments, and produces a new array, but instead of putting the results of calling the function in the new array, it produces an array with only those values from the old array for which the given function returns a true-like value. Write a `_filter` function that shows how it works.
``# Compose a solution here``
##### View Solution
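One possible `_filter`, as a sketch:

```coffeescript
_filter = (array, test) ->
  result = []
  for element in array
    if test element
      result.push element
  result

show _filter [1, -2, 3, -4], (n) -> n > 0
# Using Underscore
show _.filter [1, -2, 3, -4], (n) -> n > 0
```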
Imagine what an algorithm to generate routes would look like — it starts at the starting location, and starts to generate a route for every road leaving there. At the end of each of these roads it continues to generate more routes. It does not run along one road; it branches out. Because of this, recursion is a natural way to model it.
``
possibleRoutes = (from, to) ->
  findRoutes = (route) ->
    notVisited = (road) -> not (road.to in route.places)
    continueRoute = (road) ->
      findRoutes places: route.places.concat([road.to]),
        length: route.length + road.distance
    end = route.places[route.places.length - 1]
    if end is to
      [route]
    else
      _.flatten _.map _.filter(roadsFrom(end), notVisited),
        continueRoute
  findRoutes {places: [from], length: 0}

show (possibleRoutes 'Point Teohotepapapa', 'Point Kiukiu').length
show possibleRoutes 'Hanapaoa', 'Mt Ootua'
``
The function returns an array of route objects, each of which contains an array of places that the route passes, and a length. `findRoutes` recursively continues a route, returning an array with every possible extension of that route. When the end of a route is the place where we want to go, it just returns that route, since continuing past that place would be pointless. If it is another place, we must go on. The `flatten`/`map`/`filter` line is probably the hardest to read. This is what it says: ‘Take all the roads going from the current location, discard the ones that go to places that this route has already visited. Continue each of these roads, which will give an array of finished routes for each of them, then put all these routes into a single big array that we return.’
That line does a lot. This is why good abstractions help: They allow you to say complicated things without typing pages of code.
Does this not recurse forever, seeing how it calls itself (via `continueRoute`)? No, at some point, all outgoing roads will go to places that a route has already passed, and the result of `filter` will be an empty array. Mapping over an empty array produces an empty array, and flattening that still gives an empty array. So calling `findRoutes` on a dead end produces an empty array, meaning ‘there are no ways to continue this route’.
Notice that places are appended to routes by using `concat`, not `push`. The `concat` method creates a new array, while `push` modifies the existing array. Because the function might branch off several routes from a single partial route, we must not modify the array that represents the original route, because it must be used several times.
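The difference is easy to demonstrate with a small sketch:
``
# concat returns a fresh array, push mutates the receiver
original = ['Hanapaoa']
extended = original.concat ['Mt Ootua']
show original  # still only 'Hanapaoa'
original.push 'Mt Ootua'
show original  # now modified
``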
#### Exercise 28
Now that we have all possible routes, let us try to find the shortest one. Write a function `shortestRoute` that, like `possibleRoutes`, takes the names of a starting and ending location as arguments, and returns a single route object of the type that `possibleRoutes` produces: the shortest route it can find.
``# Compose a solution here``
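One way to write it, sketched here rather than taken from the book's solution, is a fold over all the routes that keeps the shortest one seen so far. (The book's full solution also defines a more abstract variant, `shortestRouteAbstract`, which is used below.)
``
shortestRoute = (from, to) ->
  shorter = (best, route) ->
    if not best? or route.length < best.length then route else best
  _.reduce possibleRoutes(from, to), shorter, null
``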
Let us see what route our algorithm comes up with between Point Kiukiu and Point Teohotepapapa…
``
show (shortestRoute 'Point Kiukiu', 'Point Teohotepapapa').places
show (shortestRouteAbstract 'Point Kiukiu', 'Point Teohotepapapa').places
``
##### ○•○
On a small island like Hiva Oa, it is not too much work to generate all possible routes. If you try to do that on a reasonably detailed map of, say, Belgium, it is going to take an absurdly long time, not to mention an absurd amount of memory. Still, you have probably seen those online route-planners. These give you a more or less optimal route through a gigantic maze of roads in just a few seconds. How do they do it?
If you are paying attention, you may have noticed that it is not necessary to generate all routes all the way to the end. If we start comparing routes while we are building them, we can avoid building this big set of routes, and, as soon as we have found a single route to our destination, we can stop extending routes that are already longer than that route.
##### ○•○
To try this out, we will use a 20 by 20 grid as our map.
What you see here is an elevation map of a mountain landscape. The yellowish spots are the peaks, and the dark spots the valleys. The area is divided into squares with a size of a hundred meters. We have at our disposal a function `heightAt`, which can give us the height, in meters, of any square on that map, where squares are represented by objects with `x` and `y` properties.
``
show heightAt x: 0, y: 0
show heightAt x: 11, y: 18
``
##### ○•○
We want to cross this landscape, on foot, from the top left to the bottom right. A grid can be approached like a graph. Every square is a node, which is connected to the squares around it.
We do not like wasting energy, so we would prefer to take the easiest route possible. Going uphill is heavier than going downhill, and going downhill is heavier than going level. The following function calculates the number of 'weighted meters' between two adjacent squares, which represents how tired you get from walking (or climbing) between them. Going uphill is counted as twice as heavy as going downhill.
``
weightedDistance = (pointA, pointB) ->
  heightDifference = heightAt(pointB) - heightAt(pointA)
  climbFactor = if heightDifference < 0 then 1 else 2
  flatDistance =
    if pointA.x is pointB.x or pointA.y is pointB.y
      100
    else
      141
  flatDistance + climbFactor * Math.abs heightDifference

show weightedDistance (x: 0, y: 0), (x: 1, y: 1)
``
Note the `flatDistance` calculation. If the two points are on the same row or column, they are right next to each other, and the distance between them is a hundred meters. Otherwise, they are assumed to be diagonally adjacent, and the diagonal distance between two squares of this size is a hundred times the square root of two, which is approximately `141`. One is not allowed to call this function for squares that are further than one step apart. (It could double-check this… but it is too lazy.)
##### ○•○
Points on the map are represented by objects containing `x` and `y` properties. These three functions are useful when working with such objects:
``
point = (x, y) -> {x, y} # Same as {x: x, y: y}
addPoints = (a, b) -> point a.x + b.x, a.y + b.y
samePoint = (a, b) -> a.x is b.x and a.y is b.y

show samePoint addPoints(point(10, 10), point(4, -2)), point(14, 8)
``
#### Exercise 29
If we are going to find routes through this map, we will again need a function to create ‘signposts’, lists of directions that can be taken from a given point. Write a function `possibleDirections`, which takes a point object as argument and returns an array of nearby points. We can only move to adjacent points, both straight and diagonally, so squares have a maximum of eight neighbours. Take care not to return squares that lie outside of the map. For all we know the edge of the map might be the edge of the world.
``# Compose a solution here``
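A sketch of one possible solution, assuming the map is 20 by 20 with coordinates running from `0` to `19`:
``
possibleDirections = (from) ->
  mapSize = 20
  inMap = (p) -> 0 <= p.x < mapSize and 0 <= p.y < mapSize
  steps = [point(-1, -1), point(0, -1), point(1, -1),
           point(-1,  0), point(1,  0),
           point(-1,  1), point(0,  1), point(1,  1)]
  _.filter (addPoints(from, step) for step in steps), inMap
``
Note CoffeeScript's chained comparison `0 <= p.x < mapSize`, which reads like the mathematical notation.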
To find a route on this map without having our browser cut off the program because it takes too long to finish, we have to stop our amateurish attempts and implement a serious algorithm. A lot of work has gone into problems like this in the past, and many solutions have been designed (some brilliant, others useless). A very popular and efficient one is called A* (pronounced A-star). We will spend the rest of the chapter implementing an A* route-finding function for our map.
Before I get to the algorithm itself, let me tell you a bit more about the problem it solves. The trouble with searching routes through graphs is that, in big graphs, there are an awful lot of them. Our Hiva Oa path-finder showed that, when the graph is small, all we needed to do was to make sure our paths did not revisit points they had already passed. On our new map, this is not enough anymore.
The fundamental problem is that there is too much room for going in the wrong direction. Unless we somehow manage to steer our exploration of paths towards the goal, a choice we make for continuing a given path is more likely to go in the wrong direction than in the right direction. If you keep generating paths like that, you end up with an enormous amount of paths, and even if one of them accidentally reaches the end point, you do not know whether that is the shortest path.
So what you want to do is explore directions that are likely to get you to the end point first. On a grid like our map, you can get a rough estimate of how good a path is by checking how long it is and how close its end is to the end point. By adding path length and an estimate of the distance it still has to go, you can get a rough idea of which paths are promising. If you extend promising paths first, you are less likely to waste time on useless ones.
##### ○•○
But that still is not enough. If our map was of a perfectly flat plane, the path that looked promising would almost always be the best one, and we could use the above method to walk right to our goal. But we have valleys and hillsides blocking our paths, so it is hard to tell in advance which direction will be the most efficient path. Because of this, we still end up having to explore way too many paths.
To correct this, we can make clever use of the fact that we are constantly exploring the most promising path first. Once we have determined that path A is the best way to get to point X, we can remember that. When, later on, path B also gets to point X, we know that it is not the best route, so we do not have to explore it further. This can prevent our program from building a lot of pointless paths.
##### ○•○
The algorithm, then, goes something like this…
There are two pieces of data to keep track of. The first one is called the open list; it contains the partial routes that must still be explored. Each route has a score, which is calculated by adding its length to its estimated distance from the goal. This estimate must always be optimistic: it should never overestimate the distance. The second is a set of nodes that we have seen, together with the shortest partial route that got us there. This one we will call the reached list. We start by adding a route that contains only the starting node to the open list, and recording it in the reached list.
Then, as long as there are any nodes in the open list, we take out the one that has the lowest (best) score, and find the ways in which it can be continued (by calling `possibleDirections`). For each of the nodes this returns, we create a new route by appending it to our original route, and adjusting the length of the route using `weightedDistance`. The endpoint of each of these new routes is then looked up in the reached list.
If the node is not in the reached list yet, it means we have not seen it before, and we add the new route to the open list and record it in the reached list. If we had seen it before, we compare the score of the new route to the score of the route in the reached list. If the new route is shorter, we replace the existing route with the new one. Otherwise, we discard the new route, since we have already seen a better way to get to that point.
We continue doing this until the route we fetch from the open list ends at the goal node, in which case we have found our route, or until the open list is empty, in which case we have found out that there is no route. In our case the map contains no unsurmountable obstacles, so there is always a route.
How do we know that the first full route that we get from the open list is also the shortest one? This is a result of the fact that we only look at a route when it has the lowest score. The score of a route is its actual length plus an optimistic estimate of the remaining length. This means that if a route has the lowest score in the open list, it is always the best route to its current endpoint — it is impossible for another route to later find a better way to that point, because if it were better, its score would have been lower.
##### ○•○
Try not to get frustrated when the fine points of why this works are still eluding you. When thinking about algorithms like this, having seen ‘something like it’ before helps a lot, it gives you a point of reference to compare the approach to. Beginning programmers have to do without such a point of reference, which makes it rather easy to get lost. Just realise that this is advanced stuff, globally read over the rest of the chapter, and come back to it later when you feel like a challenge.
##### ○•○
I am afraid that, for one aspect of the algorithm, I am going to have to invoke magic again. The open list needs to be able to hold a large amount of routes, and to quickly find the route with the lowest score among them. Storing them in a normal array, and searching through this array every time, is way too slow, so I give you a data structure called a binary heap. You create them with `new`, just like `Date` objects, giving them a function that is used to ‘score’ its elements as argument. The resulting object has the methods `push` and `pop`, just like an array, but `pop` always gives you the element with the lowest score, instead of the one that was `push`ed last.
``
heap = new BinaryHeap()
_.each [2, 4, 5, 1, 6, 3], (number) -> heap.push number
while heap.size() > 0
  show heap.pop()
``
Binary Heaps↓ discusses the implementation of this data structure, which is quite interesting. After you have read Object Orientation↓, you might want to take a look at it.
##### ○•○
The need to squeeze out as much efficiency as we can has another effect. The Hiva Oa algorithm used arrays of locations to store routes, and copied them with the `concat` method when it extended them. This time, we can not afford to copy arrays, since we will be exploring lots and lots of routes. Instead, we use a ‘chain’ of objects to store a route. Every object in the chain has some properties, such as a point on the map, and the length of the route so far, and it also has a property that points at the previous object in the chain. Something like this:
Imagine a diagram where blue circles are the relevant objects and lines represent properties, with the end point of a line being the value of the property. Object `A` is the start of a route here. Object `B` is used to build a new route, which continues from `A`. It has a property, which we will call `from`, pointing at the route it is based on. When we need to reconstruct a route later, we can follow these properties to find all the points that the route passed. Note that object `B` is part of two routes, one that ends in `D` and one that ends in `E`. When there are a lot of routes, this can save us much storage space: every new route only needs one new object for itself, the rest is shared with other routes that started the same way.
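For illustration, such a chain could be built by hand like this (the variable names are made up for the example):
``
routeA = point: point(0, 0), length: 0
routeB = point: point(1, 1), length: 141, from: routeA
routeD = point: point(2, 2), length: 282, from: routeB
routeE = point: point(1, 2), length: 241, from: routeB
# routeD and routeE share routeB (and routeA) instead of copying them
show routeD.from is routeE.from
``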
#### Exercise 30
Write a function `estimatedDistance` that gives an optimistic estimate of the distance between two points. It does not have to look at the height data, but can assume a flat map. Remember that we are only travelling straight and diagonally, and that we are counting the diagonal distance between two squares as `141`.
``# Compose a solution here``
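A sketch of one way to do it: move diagonally while both coordinates differ, then go straight for the remaining distance. Since it ignores heights, and `weightedDistance` never returns less than the flat distance, the estimate stays optimistic:
``
estimatedDistance = (pointA, pointB) ->
  dx = Math.abs pointA.x - pointB.x
  dy = Math.abs pointA.y - pointB.y
  diagonal = Math.min dx, dy
  straight = Math.max(dx, dy) - diagonal
  141 * diagonal + 100 * straight
``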
#### Exercise 31
We will use a binary heap for the open list. What would be a good data structure for the reached list? This one will be used to look up routes, given a pair of `x`, `y` coordinates. Preferably in a way that is fast. Write three functions named `makeReachedList`, `storeReached`, and `findReached`. The first one creates your data structure, the second one, given a reached list, a point, and a route, stores a route in it, and the last one, given a reached list and point, retrieves a route or returns `undefined` to indicate that no route was found for that point.
``# Compose a solution here``
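One possible implementation, as a sketch: use a plain object and build a string key from the coordinates, which makes both storing and looking up a single property access:
``
makeReachedList = -> {}
storeReached = (list, point, route) ->
  list["#{point.x}-#{point.y}"] = route
findReached = (list, point) ->
  list["#{point.x}-#{point.y}"]
``
Keys such as `'11-17'` can never clash with prototype properties like `'constructor'`, so a plain object is safe here.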
Defining a type of data structure by providing a set of functions to create and manipulate such structures is a useful technique. It makes it possible to 'isolate' the code that makes use of the structure from the details of the structure itself. Note that, no matter which implementation is chosen, code that needs a reached list works in exactly the same way. It does not care what kind of objects are used, as long as it gets the results it expected.
This will be discussed in much more detail in Object Orientation↓, where we will learn to make object types like `BinaryHeap`, which are created using `new` and have methods to manipulate them.
##### ○•○
Here we finally have the actual path-finding function:
``
findRoute = (from, to) ->
  routeScore = (route) ->
    if route.score is undefined
      route.score = route.length +
        estimatedDistance route.point, to
    route.score
  addOpenRoute = (route) ->
    open.push route
    storeReached reached, route.point, route
  open = new BinaryHeap routeScore
  reached = makeReachedList()
  addOpenRoute point: from, length: 0
  while open.size() > 0
    route = open.pop()
    if samePoint route.point, to
      return route
    _.each possibleDirections(route.point), (direction) ->
      known = findReached reached, direction
      newLength = route.length +
        weightedDistance route.point, direction
      if not known or known.length > newLength
        if known
          open.remove known
        addOpenRoute point: direction, from: route, length: newLength
  return null
``
First, it creates the data structures it needs, one open list and one reached list. `routeScore` is the scoring function given to the binary heap. Note how it stores its result in the route object, to prevent having to re-calculate it multiple times. `addOpenRoute` is a convenience function that adds a new route to both the open list and the reached list. It is immediately used to add the start of the route. Note that route objects always have the properties `point`, which holds the point at the end of the route, and `length`, which holds the current length of the route. Routes which are more than one square long also have a `from` property, which points at their predecessors.
The `while` loop, as was described in the algorithm, keeps taking the lowest-scoring route from the open list and checks whether this gets us to the goal point. If it does not, we must continue by expanding it. This is what the `_.each` takes care of. Each new point is looked up in the reached list. If it is not found there, or the route found has a longer length than the new route, a new route object is created and added to the open list and reached list, and the existing route (if any) is removed from the open list.
What if the route in `known` is not on the open list? It has to be, because routes are only removed from the open list when they have been found to be the most optimal route to their endpoint. If we try to remove a value from a binary heap that is not on it, it will throw an exception, so if my reasoning is wrong, we will probably see an exception when running the function.
When code gets complex enough to make you doubt certain things about it, it is a good idea to add some checks that raise exceptions when something goes wrong. That way, you know that there are no weird things happening ‘silently’, and when you break something, you immediately see what you broke.
##### ○•○
Note that this algorithm does not use recursion, but still manages to explore all those branches. The open list more or less takes over the role that the function call stack played in the recursive solution to the Hiva Oa problem — it keeps track of the paths that still have to be explored. Every recursive algorithm can be rewritten in a non-recursive way by using a data structure to store the ‘things that must still be done’.
##### ○•○
Well, let us try our path-finder:
``
route = findRoute point(0, 0), point(19, 19)
runOnDemand -> show route
``
If you ran all the code above, and did not introduce any errors, that call, though it might take an instant to run, should give us a route object. This object is rather hard to read. That can be helped by using the `showRoute` function which will show a route as a list of coordinates.
You can also pass multiple routes to `showRoute`, which can be useful when you are, for example, trying to plan a scenic route, which must include the beautiful viewpoint at (`11`, `17`).
``
traverseRoute = (routes..., func) ->
  _.each [routes...], (route) ->
    while route
      func
        x: route.point.x
        y: route.point.y
      route = route.from

showRoute = (routes...) -> traverseRoute routes..., show

runOnDemand ->
  show '\n Easy route'
  showRoute route
  show '\n Sightseeing'
  showRoute findRoute(point( 0, 0), point(11, 17)),
    findRoute(point(11, 17), point(19, 19))
``
If the order in which the points are listed is difficult to follow, the list can be refined. Sorting it and removing duplicates can help.
``
showSortedRoute = (routes...) ->
  points = []
  traverseRoute routes..., (point) -> points.push point
  # A sort needs a function that compares two points
  # Pattern matching can decompose them into unique names
  points.sort ({x: x1, y: y1}, {x: x2, y: y2}) ->
    return dx if dx = x1 - x2
    return dy if dy = y1 - y2
    0
  # The Underscore uniq function can get rid of doublets with
  # help from a function that serializes the relevant properties
  points = _.uniq points, true, ({x, y}) -> "#{x} #{y}"
  show point for point in points

runOnDemand ->
  showSortedRoute findRoute(point( 0, 0), point(11, 17)),
    findRoute(point(11, 17), point(19, 19))
``
##### ○•○
You can also display routes on a map with a function like `renderRoute`. A web page with a map is created and the route points are used to create and position a series of small images. You can return later to the techniques that are used in its implementation. They are presented in the next chapters: objects in Object Orientation↓, CoffeeKup in Modularity↓ and pattern matching in Language Extras↓.
``
renderRoute = (routes...) ->
  kup = if exports? then require 'coffeekup' else window.CoffeeKup
  webdesign = ->
    doctype 5
    html ->
      head ->
        style '.map {position: absolute; left: 33px; top: 80px}'
      body ->
        header ->
          h1 'Route'
        div class: 'map', ->
          img src: 'http://autotelicum.github.com/' +
            'Smooth-CoffeeScript/img/height-small.png'
          img class: 'map', src: "#{ostrich}", width: size, height: size, \
            style: "left:#{x*size}px; top:#{y*size}px" for {x, y} in points
  points = []
  traverseRoute routes..., (point) -> points.push point
  routePage = kup.render webdesign,
    locals:
      size: 500 / 20 # Square map: size divided by fields
      points: points
  showDocument routePage, 565, 600
  return

runOnDemand ->
  show 'Scenic'
  renderRoute findRoute(point( 0, 0), point(11, 17)),
    findRoute(point(11, 17), point(19, 19))
  show 'Excursion'
  renderRoute findRoute(point( 0, 0), point(15, 3)),
    findRoute(point(15, 3), point(19, 19))
``
There is a different implementation in the source download: A web page with the map is served to your browser, then the route points are transferred via WebSockets to a snippet of CoffeeScript on the page that uses a canvas to draw the points on top of the map. There is more on that kind of thing in Modularity↓.
##### ○•○
Variations on the theme of searching an optimal route through a graph can be applied to many problems, many of which are not at all related to finding a physical path. For example, a program that needs to solve a puzzle of fitting a number of blocks into a limited space could do this by exploring the various 'paths' it gets by trying to put a certain block in a certain place. The paths that end up with insufficient room for the last blocks are dead ends, and the path that manages to fit in all the blocks is the solution.
### Object-oriented Programming
In the early nineties, a thing called object-oriented programming stirred up the software industry. Most of the ideas behind it were not really new at the time, but they had finally gained enough momentum to start rolling, to become fashionable. Books were being written, courses given, programming languages developed. All of a sudden, everybody was extolling the virtues of object-orientation, enthusiastically applying it to every problem, convincing themselves they had finally found the right way to write programs.
These things happen a lot. When a process is hard and confusing, people are always on the lookout for a magic solution. When something looking like such a solution presents itself, they are prepared to become devoted followers. For many programmers — even today — object-orientation (or their view of it) is the gospel. When a program is not ‘truly object-oriented’, whatever that means, it is considered decidedly inferior.
Few fads have managed to stay popular for as long as this one, though. Object-orientation’s longevity can largely be explained by the fact that the ideas at its core are very solid and useful. In this chapter, we will discuss these ideas, along with CoffeeScript’s (rather succinct) take on them. The above paragraphs are by no means meant to discredit these ideas. What I want to do is warn the reader against developing an unhealthy attachment to them.
##### ○•○
As the name suggests, object-oriented programming is related to objects. The central ideas are encapsulation, inheritance, and higher-order programming (using the same names in different types aka polymorphism). So far, we have used objects as loose aggregations of values, adding and altering their properties whenever we saw fit. In an object-oriented approach, objects are viewed as little worlds of their own, and the outside world may touch them only through a limited and well-defined interface, a number of specific methods and properties. The ‘reached list’ we used at the end of Searching↑ is an example of this: We used only three functions, `makeReachedList`, `storeReached`, and `findReached` to interact with it. These three functions form an interface for such objects.
The `Date`, `Error`, and `BinaryHeap` objects we have seen also work like this. Instead of providing regular functions for working with the objects, they provide a way to create such objects, using the `new` keyword, and a number of methods and properties that provide the rest of the interface.
##### ○•○
One way to give an object methods is to attach function values to it.
``
rabbit = {}
rabbit.speak = (line) -> show "The rabbit says '#{line}'"

rabbit.speak "Well, now you're asking me."
``
In most cases, the method will need to know who it should act on. For example, if there are different rabbits, the `speak` method must indicate which rabbit is speaking. For this purpose, there is a special variable called `this`, which is always present when a function is called, and which points at the relevant object when the function is called as a method. A function is called as a method when it is looked up as a property, and immediately called, as in `object.method()`. Since it is very common to use `this` inside an object, it can be abbreviated from `this.property` or `this.method()` to `@property` or `@method()`.
``
speak = (line) -> show "The #{this.adjective} rabbit says '#{line}'"

whiteRabbit = adjective: "white", speak: speak
fatRabbit = adjective: "fat", speak: speak

whiteRabbit.speak "Oh my ears and whiskers, " +
  "how late it's getting!"
fatRabbit.speak "I could sure use a carrot right now."
``
##### ○•○
I can now clarify the mysterious first argument to the `apply` method, for which we always used `null` in Functional Programming↑. This argument can be used to specify the object that the function must be applied to. For non-method functions, this is irrelevant, hence the `null`.
``speak.apply fatRabbit, ['Yum.']``
Functions also have a `call` method, which is similar to `apply`, but you can give the arguments for the function separately instead of as an array:
``speak.call fatRabbit, 'Burp.'``
##### ○•○
It is common in object-oriented terminology to refer to an instance of something as an object. The `whiteRabbit` and `fatRabbit` can be seen as different instances of a more general `Rabbit` concept. In CoffeeScript such concepts are termed a `class`.
``
class Rabbit
  constructor: (@adjective) ->
  speak: (line) ->
    show "The #{@adjective} rabbit says '#{line}'"

whiteRabbit = new Rabbit "white"
fatRabbit = new Rabbit "fat"
whiteRabbit.speak "Hurry!"
fatRabbit.speak "Tasty!"
``
It is a convention, among CoffeeScript programmers, to start the names of classes with a capital letter. This makes it easy to distinguish them from object instances and functions.
##### ○•○
The `new` keyword provides a convenient way of creating new objects. When a function is called with the word `new` in front of it, its `this` variable will point at a new object, which it will automatically return (unless it explicitly returns something else). Functions used to create new objects like this are called constructors.
The constructor for the `Rabbit` class is `constructor: (@adjective) ->`. The `@adjective` argument to the constructor does two things: It declares `adjective` as a property on `this` and it uses ‘pattern matching’ i.e. the same name to assign the argument named `adjective` to a property on `this` that is also named `adjective`. It could have been written in full form as `constructor: (adjective) -> this.adjective = adjective`.
``
killerRabbit = new Rabbit 'killer'
killerRabbit.speak 'GRAAAAAAAAAH!'
show killerRabbit
``
When `new Rabbit` is called with the `'killer'` argument, the argument is assigned to a property named `adjective`. So `show killerRabbit` displays `{adjective: 'killer'}`.
Why is the `new` keyword even necessary? After all, we could have simply written this:
``
makeRabbit = (adjective) ->
  adjective: adjective
  speak: (line) -> show adjective + ': ' + line

blackRabbit = makeRabbit 'black'
``
But that is not entirely the same. `new` does a few things behind the scenes. For one thing, our `killerRabbit` has a property called `constructor`, which points at the `Rabbit` function that created it. `blackRabbit` also has such a property, but it points at the `Object` function. They even have `name` properties so we can check them:
``show killerRabbit.constructor.name``
``show blackRabbit.constructor.name``
##### ○•○
The objects that are created, `whiteRabbit` and `fatRabbit`, are specific instances. The `whiteRabbit` is not all kinds of white rabbits, just a single one that happens to have the name `whiteRabbit`. If you want to create a class of, say, weight-conscious rabbits, then the `extends` keyword can help you accomplish that.
``
class WeightyRabbit extends Rabbit
  constructor: (adjective, @weight) ->
    super adjective
  adjustedWeight: (relativeGravity) ->
    (@weight * relativeGravity).toPrecision 2

tinyRabbit = new WeightyRabbit "tiny", 1.01
jumboRabbit = new WeightyRabbit "jumbo", 7.47
moonGravity = 1/6
jumboRabbit.speak "Carry me, I weigh " +
  "#{jumboRabbit.adjustedWeight(moonGravity)} stones"
tinyRabbit.speak "He ain't heavy, he is my brother"
``
The call `super adjective` passes the argument on to the `Rabbit` constructor. With `super`, a method in a derived class can call upon the method of the same name in its parent class.
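This works for ordinary methods too, not only constructors. A small sketch (the `PoliteRabbit` class is invented for the example):
``
class PoliteRabbit extends Rabbit
  speak: (line) -> super "#{line}, if you please"

(new PoliteRabbit "polite").speak "Make way"
``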
##### ○•○
Inheritance is useful because different types can share a single implementation of an algorithm. But it comes with a price-tag: the derived classes become tightly coupled to the parent class. Normally you make each part of a system as independent as possible, for example avoiding global variables and using arguments instead. That way you can read and understand each part in isolation and you can change them with little risk of breaking other parts of the system.
Due to the tight coupling that inheritance introduces, it can be difficult to change a parent class without inadvertently breaking derived classes. This is called the fragile base class problem. A class that has no child classes and is only used through its published methods and properties can normally be changed quite freely — as long as the published methods and properties stay the same. When a class derives from it, the child class may depend on the internal behaviour of the parent, and it becomes problematic to change the base class.
To understand a derived class you will often have to understand the parent first. The implementation is distributed: instead of reading down through a function, you may have to look in the different places where a class, its parent, and their parent… each implement part of the combined logic. Let's look at an example that matters — the balance on your bank account.
``
class Account
  constructor: -> @balance = 0
  transfer: (amount) -> @balance += amount
  getBalance: -> @balance
  batchTransfer: (amtList) ->
    for amount in amtList
      @transfer amount

yourAccount = new Account()
oldBalance = yourAccount.getBalance()
yourAccount.transfer salary = 1000
newBalance = yourAccount.getBalance()
show "Books balance: #{salary is newBalance - oldBalance}."
``
Hopefully this only shows the principle of how your bank has implemented its accounts. An account starts out with a zero balance, money can be credited (positive transfer) and debited (negative transfer), the balance can be shown and multiple transfers can be handled.
Other parts of the system can balance the books by checking that transfers match the differences on the accounts. Those parts of the system were unfortunately not known to the developer of the `AccountWithFee` class.
``
class AccountWithFee extends Account
  fee: 5
  transfer: (amount) ->
    super amount - @fee
    # feeAccount.transfer @fee

yourAccount = new AccountWithFee()
oldBalance = yourAccount.getBalance()
yourAccount.transfer salary = 1000
newBalance = yourAccount.getBalance()
show "Books balance: #{salary is newBalance - oldBalance}."
``
The books no longer balance. The issue is that the `AccountWithFee` class has violated what is called the substitution principle. It is a patch of the existing `Account` class and it breaks programs that assume that all account classes behave in a certain way. In a system with thousands of classes such patches can cause severe problems. To avoid this kind of problem it is up to the developer to ensure that inherited classes can fully substitute their parent classes.
To avoid excessive fraudulent transactions when a card is lost or stolen, the bank has implemented a system which checks that withdrawals do not exceed a daily limit. The `LimitedAccount` class checks each `transfer`, reduces the `@dailyLimit` and reports an error if it is exceeded.
``
class LimitedAccount extends Account
  constructor: -> super; @resetLimit()
  resetLimit: -> @dailyLimit = 50
  transfer: (amount) ->
    if amount < 0 and (@dailyLimit += amount) < 0
      throw new Error "You maxed out!"
    else
      super amount

lacc = new LimitedAccount()
lacc.transfer 50
show "Start balance #{lacc.getBalance()}"
try lacc.batchTransfer [-1..-10]
catch error then show error.message
show "After batch balance #{lacc.getBalance()}"
``
Your bank is so successful that `batchTransfer` has to be speeded up (a real version would involve database updates). The developer who got the task of making `batchTransfer` faster had been on vacation when the `LimitedAccount` class was implemented and did not see it among the thousands of other classes in the system.
``
class Account
  constructor: -> @balance = 0
  transfer: (amount) -> @balance += amount
  getBalance: -> @balance
  batchTransfer: (amtList) ->
    add = (a, b) -> a + b
    sum = (list) -> _.reduce list, add, 0
    @balance += sum amtList

class LimitedAccount extends Account
  constructor: -> super; @resetLimit()
  resetLimit: -> @dailyLimit = 50
  transfer: (amount) ->
    if amount < 0 and (@dailyLimit += amount) < 0
      throw new Error "You maxed out!"
    else
      super amount

lacc = new LimitedAccount()
lacc.transfer 50
show "Starting with #{lacc.getBalance()}"
try lacc.batchTransfer [-1..-10]
catch error then show error.message
show "After batch balance #{lacc.getBalance()}"
``
Instead of the previous implementation that called `transfer` each time, the whole batch is added together and the balance is directly updated. That made `batchTransfer` much faster, but it also broke the `LimitedAccount` class. It is an example of the fragile base class problem. In this limited example it is easy to spot the issue, in a large system it can cause considerable headache.
Using inheritance in the right way requires careful and thoughtful programming. If your child classes are type compatible with their parent, then you are adhering to the substitution principle. Usually ownership, where a class has an instance of another class inside it and uses only its public interface, is more appropriate.
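To make that concrete, here is a sketch (the class name is invented for the example) of the limited account redone with ownership. It holds an `Account` instead of extending one, so a reimplemented `Account::batchTransfer` can no longer sneak past the limit check:
``
class OwnedLimitedAccount
  constructor: -> @account = new Account(); @resetLimit()
  resetLimit: -> @dailyLimit = 50
  getBalance: -> @account.getBalance()
  batchTransfer: (amtList) -> @transfer amount for amount in amtList
  transfer: (amount) ->
    if amount < 0 and (@dailyLimit += amount) < 0
      throw new Error "You maxed out!"
    else
      @account.transfer amount
``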
It is a common convention in CoffeeScript to use an `_` in front of methods that are to be considered private.
##### ○•○
Where did the `constructor` property come from? It is part of the prototype of a rabbit. Prototypes are a powerful, if somewhat confusing, part of the way CoffeeScript objects work. Every object is based on a prototype, which gives it a set of inherent properties. Simple objects are based on the most basic prototype, which is associated with the `Object` constructor. In fact, typing `{}` is equivalent to typing `new Object()`.
``
simpleObject = {}
show simpleObject.constructor.name
show simpleObject.toString()
``
`toString` is a method that is part of the `Object` prototype. This means that all simple objects have a `toString` method, which converts them to a string. Our rabbit objects are based on the prototype associated with the `Rabbit` constructor. You can use a constructor’s `prototype` property to get access to, well, their prototype:
``
show _.methods Rabbit
show _.methods Rabbit.prototype
show Rabbit.prototype.constructor.name
Rabbit.prototype.speak 'I am generic'
``
Instead of `Rabbit.prototype.speak` you can write `Rabbit::speak`.
``Rabbit::speak 'I am not initialized'``
Every function automatically gets a `prototype` property, whose `constructor` property points back at the function. Because the rabbit prototype is itself an object, it is based on the `Object` prototype, and shares its `toString`.
``show killerRabbit.toString is simpleObject.toString``
##### ○•○
Even though objects seem to share the properties of their prototype, this sharing is one-way. The properties of the prototype influence the object based on it, but the properties of this object never change the prototype.
The precise rules are this: When looking up the value of a property, CoffeeScript first looks at the properties that the object itself has. If there is a property that has the name we are looking for, that is the value we get. If there is no such property, it continues searching the prototype of the object, and then the prototype of the prototype, and so on. If no property is found, the value `undefined` is given. On the other hand, when setting the value of a property, CoffeeScript never goes to the prototype, but always sets the property in the object itself.
``
Rabbit::teeth = 'small'
show killerRabbit.teeth
killerRabbit.teeth = 'long, sharp, and bloody'
show killerRabbit.teeth
show Rabbit::teeth
``
This does mean that the prototype can be used at any time to add new properties and methods to all objects based on it. For example, it might become necessary for our rabbits to dance.
``
Rabbit::dance = ->
  show "The #{@adjective} rabbit dances a jig."
killerRabbit.dance()
``
And, as you might have guessed, the prototypical rabbit is the perfect place for values that all rabbits have in common, such as the `speak` method. Here is a new approach to the `Rabbit` constructor:
``
Rabbit = (adjective) ->
  @adjective = adjective
Rabbit::speak = (line) ->
  show "The #{@adjective} rabbit says '#{line}'"

hazelRabbit = new Rabbit "hazel"
hazelRabbit.speak "Good Frith!"
``
##### ○•○
The fact that all objects have a prototype and receive some properties from this prototype can be tricky. It means that using an object to store a set of things, such as the cats from Data Structures↑, can go wrong. If, for example, we wondered whether there is a cat called `'constructor'`, we would have checked it like this:
``
noCatsAtAll = {}
if "constructor" of noCatsAtAll
  show "Yes, there is a cat called 'constructor'."
``
This is problematic. A related problem is that it can often be practical to extend the prototypes of standard constructors such as `Object` and `Array` with new useful functions. For example, we could give all objects a method called `allProperties`, which returns an array with the names of the (non-hidden) properties that the object has:
``
Object::allProperties = ->
  for property of this
    property

test = x: 10, y: 3
show test.allProperties()
delete Object::allProperties
``
And that immediately shows the problem. Now that the `Object` prototype has a property called `allProperties`, looping over the properties of any object, using `for` and `of`, will also give us that shared property, which is generally not what we want. We are interested only in the properties that the object itself has.
Fortunately, there is a way to find out whether a property belongs to the object itself or to one of its prototypes. Every object has a method called `hasOwnProperty`, which tells us whether the object has a property with a given name. When looping over the properties of an object CoffeeScript provides a keyword `own` so we are saved from using a clumsy `if` test on each property. Using `own`, we can write an `ownProperties` method like this:
``
Object::ownProperties = ->
  for own property of this
    property

test = 'Fat Igor': true, 'Fireball': true
show test.ownProperties()
delete Object::ownProperties
``
And of course, we can abstract that into a higher-order function. Note that the `action` function is called with both the name of the property and the value it has in the object.
``
forEachOf = (object, action) ->
  for own property, value of object
    action property, value

chimera = head: "lion", body: "goat", tail: "snake"
forEachOf chimera, (name, value) ->
  view "The #{name} of a #{value}."
``
But, what if we find a cat named `hasOwnProperty`? (You never know.) It will be stored in the object, and the next time we want to go over the collection of cats, calling `object.hasOwnProperty` will fail, because that property no longer points at a function value. This can be solved by doing something ugly:
``
forEachIn = (object, action) ->
  for property of object
    if Object::hasOwnProperty.call object, property
      action property, object[property]

test = name: "Mordecai", hasOwnProperty: "Uh-oh"
forEachIn test, (name, value) ->
  view "Property #{name} = #{value}"
``
Here, instead of using the method found in the object itself, we get the method from the `Object` prototype, and then use `call` to apply it to the right object. Unless someone actually messes with the `Object.prototype` method (do not do that), this should work correctly.
``
test = name: "Mordecai", hasOwnProperty: "Uh-oh"
for own property, value of test
  show "Property #{property} = #{value}"
``
Fortunately the `own` keyword saves the day even in this situation, so it is the right thing to use.
##### ○•○
`hasOwnProperty` can also be used in those situations where we have been using the `of` operator to see whether an object has a specific property. There is one more catch, however. We saw in Data Structures↑ that some properties, such as `toString`, are ‘hidden’, and do not show up when going over properties with `for`/`of`. It turns out that browsers in the Gecko family (Firefox, most importantly) give every object a hidden property named `__proto__`, which points to the prototype of that object. `hasOwnProperty` will return `true` for this one, even though the program did not explicitly add it. Having access to the prototype of an object can be very convenient, but making it a property like that was not a very good idea. Still, Firefox is a widely used browser, so when you write a program for the web you have to be careful with this.
The advice here is not to name any of your own properties with a double underscore, since they can then clash with system-specific details like `__proto__`. As mentioned before, identifiers beginning with a single underscore are fine, and are used to indicate something private to your implementation. There is a method `propertyIsEnumerable`, which returns `false` for hidden properties and can be used to filter out strange things like `__proto__`. An expression such as the one below can be used to reliably work around this:
``
obj = foo: 'bar'
# This test is needed to avoid hidden properties ...
show Object::hasOwnProperty.call(obj, 'foo') and
  Object::propertyIsEnumerable.call(obj, 'foo')
# ... because this returns true ...
show Object::hasOwnProperty.call(obj, '__proto__')
# ... this is required to get false.
show Object::hasOwnProperty.call(obj, '__proto__') and
  Object::propertyIsEnumerable.call(obj, '__proto__')
``
Nice and simple, no? This is one of the not-so-well-designed aspects of the system underlying CoffeeScript (recondite JavaScript). Objects play both the role of ‘values with methods’, for which prototypes work great, and ‘sets of properties’, for which prototypes only get in the way.
##### ○•○
Writing the above kind of expression every time you need to check whether a property is present in an object is unworkable. We could put it into a function, but a better approach is to write a constructor and a prototype specifically for situations like this, where we want to approach an object as just a set of properties. Because you can use it to look things up by name, we will call it a `Dictionary`.
``
class Dictionary
  constructor: (@values = {}) ->
  store: (name, value) -> @values[name] = value
  lookup: (name) -> @values[name]
  contains: (name) ->
    Object::hasOwnProperty.call(@values, name) and
      Object::propertyIsEnumerable.call(@values, name)
  each: (action) ->
    for own property, value of @values
      action property, value

colours = new Dictionary
  Grover: 'blue'
  Elmo: 'orange'
  Bert: 'yellow'
show colours.contains 'Grover'
colours.each (name, colour) -> view name + ' is ' + colour
``
Now the whole mess related to approaching objects as plain sets of properties has been ‘encapsulated’ in a convenient interface: one constructor and four methods. Note that the `values` property of a `Dictionary` object is not part of this interface, it is an internal detail, and when you are using `Dictionary` objects you do not need to directly use it.
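Revisiting the earlier cat problem shows the benefit; `contains` no longer gets confused by prototype properties:
``
noCats = new Dictionary()
show noCats.contains 'constructor'
``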
##### ○•○
Whenever you write an interface, it is a good idea to add a comment with a quick sketch of what it does and how it should be used. This way, when someone, possibly yourself three months after you wrote it, wants to work with the interface, they can quickly see how to use it, and do not have to study the whole program.
Using a piece of paper, a whiteboard, or a pen and tablet can be an effective way to sketch your designs. Using a notation roughly similar to UML (Unified Modeling Language) can help in communicating your design to others. Usually you do not need a tool — unless you really want to or work in a large administrative setting. You can find a quick reference to UML notation via web search. The most useful diagram is the interaction or sequence diagram. Use it to specify how objects talk to each other — especially in distributed systems. A class diagram presents a static view of the system, so it is mostly for initial designs.
Most of the time, when you are designing an interface, you will soon find some limitations and problems in whatever you came up with, and change it. To prevent wasting your time, it is advisable to document your interfaces only after they have been used in a few real situations and proven themselves to be practical. — Of course, this might make it tempting to forget about documentation altogether. Personally, I treat writing documentation as a ‘finishing touch’ to add to a system. When it feels ready, it is time to write something about it, and to see if it sounds as good in English (or whatever language) as it does in CoffeeScript (or whatever programming language).
##### ○•○
The distinction between the external interface of an object and its internal details is important for two reasons. Firstly, having a small, clearly described interface makes an object easier to use. You only have to keep the interface in mind, and do not have to worry about the rest unless you are changing the object itself.
Secondly, it often turns out to be necessary or practical to change something about the internal implementation of a class, to make it more efficient, for example, or to fix some problem. When outside code is accessing every single property and detail in the object, you can not change any of them without also updating a lot of other code. If outside code only uses a small interface, you can do what you want, as long as you do not change the interface.
Some people go very far in this. They will, for example, never include properties in the interface of an object, only methods — if their object type has a length, it will be accessible with the `getLength` method, not the `length` property. This way, if they ever want to change their object in such a way that it no longer has a `length` property, for example because it now has some internal array whose length it must return, they can update the function without changing the interface.
My own take is that in most cases this is not worth it. Adding a `getLength` method which only contains `return this.length` mostly just adds meaningless code, and, in most situations, I consider meaningless code a bigger problem than the risk of having to occasionally change the interface to my objects.
##### ○•○
Adding new methods to existing prototypes can be very convenient. Especially the `Array` and `String` prototypes in CoffeeScript could use a few more basic methods. We could, for example, replace `forEach` and `map` with methods on arrays, and make the `startsWith` function we wrote in Data Structures↑ a method on strings.
However, if your code has to run as a library used by others or as a program on a web-page together with another program (either written by you or by someone else) which uses `for`/`of` naively — the way we have been using it so far — then adding things to prototypes, especially the `Object` and `Array` prototype, will definitely break something, because these loops will suddenly start seeing those new properties. For this reason, some people prefer not to touch these prototypes at all. Of course, if you are careful, and you do not expect your code to have to coexist with badly-written code, adding methods to standard prototypes is a perfectly good technique.
### Regular Expressions
At various points in the previous chapters, we had to look for patterns in string values. In Data Structures↑ we extracted date values from strings by writing out the precise positions at which the numbers that were part of the date could be found. Later, in Functional Programming↑, we saw some particularly ugly pieces of code for finding certain types of characters in a string, for example the characters that had to be escaped in HTML output.
Regular expressions are a language for describing patterns in strings. They form a small, separate language, which is embedded inside CoffeeScript (and in various other programming languages, in one way or another). It is not a very readable language — big regular expressions tend to be quite unreadable. To make them more readable, CoffeeScript has extended regular expressions, in which you can generously add comments to the different parts. Examples of these are shown later. Still, regular expressions are difficult to read, but they are a useful tool that can really simplify string-processing programs.
##### ○•○
Just like strings get written between quotes, regular expression patterns get written between slashes ( `/`). This means that slashes inside the expression have to be escaped.
``
slash = /\//
show 'AC/DC'.search slash
``
The `search` method resembles `indexOf`, but it searches for a regular expression instead of a string. Patterns specified by regular expressions can do a few things that strings can not do. For a start, they allow some of their elements to match more than a single character. In Functional Programming↑, when extracting mark-up from a document, we needed to find the first asterisk or opening brace in a string. That could be done like this:
``
asteriskOrBrace = /[\{\*]/
story = 'We noticed the *giant sloth*, ' +
  'hanging from a giant branch.'
show story.search asteriskOrBrace
``
The `[` and `]` characters have a special meaning inside a regular expression. They can enclose a set of characters, and they mean 'any of these characters'. Most non-alphanumeric characters have some special meaning inside a regular expression, so it is a good idea to always escape them with a backslash when you use them to refer to the actual characters.
##### ○•○
There are a few shortcuts for sets of characters that are needed often. The dot (`.`) can be used to mean ‘any character that is not a newline’, an escaped ‘d’ (`\d`) means ‘any digit’, an escaped ‘w’ (`\w`) matches any alphanumeric character (including underscores, for some reason), and an escaped ’s’ (`\s`) matches any white-space (tab, newline, space) character.
``
digitSurroundedBySpace = /\s\d\s/
show '1a 2 3d'.search digitSurroundedBySpace
``
The escaped ‘d’, ‘w’, and ’s’ can be replaced by their capital letter to mean their opposite. For example, `\S` matches any character that is not white-space. When using `[` and `]`, a pattern can be inverted by starting with a `^` character:
``
notABC = /[^ABC]/
show 'ABCBACCBBADABC'.search notABC
``
As you can see, the way regular expressions use characters to express patterns makes them A) very short, and B) very hard to read.
#### Exercise 32
Write a regular expression that matches a date in the format `'XX/XX/XXXX'`, where the `X`s are digits. Test it against the string `'born 15/11/2003 (mother Spot): White Fang'`.
``# Compose a solution here``
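One pattern that does the job, as a sketch:
``
datePattern = /\d\d\/\d\d\/\d{4}/
show 'born 15/11/2003 (mother Spot): White Fang'.search datePattern
``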
Sometimes you need to make sure a pattern starts at the beginning of a string, or ends at its end. For this, the special characters `^` and `$` can be used. The first matches the start of the string, the second the end.
``
show /a+/.test 'blah'
show /^a+$/.test 'blah'
``
The first regular expression matches any string that contains an `a` character, the second only those strings that consist entirely of `a` characters.
Note that regular expressions are objects, and have methods. Their `test` method returns a boolean indicating whether the given string matches the expression.
The code `\b` matches a ‘word boundary’, which can be punctuation, white-space, or the start or end of the string.
``
show /cat/.test 'concatenate'
show /\bcat\b/.test 'concatenate'
``
##### ○•○
Parts of a pattern can be allowed to be repeated a number of times. Putting an asterisk ( `*` ) after an element allows it to be repeated any number of times, including zero. A plus (`+`) does the same, but requires the pattern to occur at least one time. A question mark (`?`) makes an element ‘optional’ — it can occur zero or one times.
``
parenthesizedText = /\(.*\)/
show "Its (the sloth's) claws were gigantic!"\
.search parenthesizedText
``
When necessary, braces can be used to be more precise about the amount of times an element may occur. A number between braces (`{4}`) gives the exact amount of times it must occur. Two numbers with a comma between them (`{3,10}`) indicate that the pattern must occur at least as often as the first number, and at most as often as the second one. Similarly, `{2,}` means two or more occurrences. (The `{,4}` form for 'four or less' is not actually supported by the underlying regular expression engine; write `{0,4}` instead.)
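A few quick checks of these quantifiers:
``
show /^\d{4}$/.test '2003'    # exactly four digits
show /^a{2,4}$/.test 'aaaaa'  # between two and four: false
show /^a{2,}$/.test 'aaaaa'   # two or more: true
``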
To make big regular expressions more readable CoffeeScript has extended regular expressions. They are delimited by `///` and allow comments to be added to the different parts. They ignore formatting, so they can be split over several lines and placed in a column.
``
datePattern = /\d{1,2}\/\d\d?\/\d{4}/
show 'born 15/11/2003 (mother Spot): White Fang'\
.search datePattern

datePattern = ///
  \d{1,2}  # day
  /        # separator
  \d\d?    # month
  /        # separator
  \d{4}    # year
///
show 'born 15/11/2003 (mother Spot): White Fang'\
.search datePattern
``
The pieces `/\d{1,2}/` and `/\d\d?/` both express ‘one or two digits’.
#### Exercise 33
Write a pattern that matches e-mail addresses. For simplicity, assume that the parts before and after the `@` can contain only alphanumeric characters and the characters `.` and `-` (dot and dash), while the last part of the address, the country code or top level domain after the last dot, may only contain alphanumeric characters, and must be two or three characters long.
``# Compose a solution here``
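A sketch of one possible pattern. Note that `\w` also allows underscores, which is slightly more permissive than the exercise asks for:
``
emailPattern = /// ^
  [\w.-]+  # the part before the @
  @
  [\w.-]+  # the domain
  \.
  \w{2,3}  # country code or top level domain
$ ///
show emailPattern.test 'kenny.mccormick@southpark.org'
show emailPattern.test 'no email here'
``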
Part of a regular expression can be grouped together with parentheses. This allows us to use `*` and such on more than one character. For example:
``
cartoonCrying = /boo(hoo+)+/i
show "Then, he exclaimed 'Boohoooohoohooo'"\
.search cartoonCrying
``
Where did the `i` at the end of that regular expression come from? After the closing slash, ‘options’ may be added to a regular expression. An `i`, here, means the expression is case-insensitive, which allows the lower-case B in the pattern to match the upper-case one in the string.
A pipe character (`|`) is used to allow a pattern to make a choice between two elements. For example:
``
holyCow = /(sacred|holy) (cow|bovine|bull|taurus)/i
show holyCow.test 'Sacred bovine!'
``
##### ○•○
Often, looking for a pattern is just a first step in extracting something from a string. In previous chapters, this extraction was done by calling a string’s `indexOf` and `slice` methods a lot. Now that we are aware of the existence of regular expressions, we can use the `match` method instead. When a string is matched against a regular expression, the result will be `null` if the match failed, or an array of matched strings if it succeeded.
```
show 'No'.match /Yes/
show '... yes'.match /yes/
show 'Giant Ape'.match /giant (\w+)/i
```
The first element in the returned array is always the part of the string that matched the pattern. As the last example shows, when there are parenthesized parts in the pattern, the parts they match are also added to the array. Often, this makes extracting pieces of a string very easy.
```
quote = "My mind is a swirling miasma " +
        "(a poisonous fog thought to " +
        "cause illness) of titillating " +
        "thoughts and turgid ideas."
parenthesized = quote.match ///
  (\w+)     # Word
  \s*       # Whitespace
  \((.*)\)  # Explanation
  ///
if parenthesized isnt null
  show "Word: #{parenthesized[1]} " +
       "Explanation: #{parenthesized[2]}"
```
If you insert another set of parentheses in this example, then you will see that the match stretches from the first opening parenthesis to the last closing parenthesis. This is because the matching is greedy: it tries to match as long a string as possible. You can change the `.*` to `[^)]*` to prevent the match from stretching beyond the first closing parenthesis.
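A small sketch of the difference, using an invented sample string:

```
# Greedy: '.*' stretches to the last closing parenthesis.
show 'a (b) c (d) e'.match /\((.*)\)/
# '[^)]*' stops at the first closing parenthesis.
show 'a (b) c (d) e'.match /\(([^)]*)\)/
```

The first call captures `b) c (d`, the second just `b`.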
#### Exercise 34
Re-write the function `extractDate` that we wrote in Data Structures↑. When given a string, this function looks for something that follows the date format we saw earlier. If it can find such a date, it puts the values into a `Date` object. Otherwise, it throws an exception. Make it accept dates in which the day or month are written with only one digit.
``# Compose a solution here``
##### View Solution
The `replace` method of string values, which we saw in Functional Programming↑, can be given a regular expression as its first argument.

```
show 'Borobudur'.replace /[ou]/g, 'a'
```

Notice the `g` character after the regular expression. It stands for ‘global’, and means that every part of the string that matches the pattern should be replaced. When this `g` is omitted, only the first `'o'` would be replaced.
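For comparison, a quick sketch of the same call without the flag:

```
show 'Borobudur'.replace /[ou]/, 'a'   # 'Barobudur'
```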
Sometimes it is necessary to keep parts of the replaced strings. For example, we have a big string containing the names of people, one name per line, in the format “Lastname, Firstname”. We want to swap these names, and remove the comma, to get a simple “Firstname Lastname” format.
```
names = '''
Picasso, Pablo
Gauguin, Paul
Van Gogh, Vincent
'''
show names.replace /([\w ]+), ([\w ]+)/g, '$2 $1'

# Non-printable characters can be tricky in regular
# expressions, actually all non-ASCII characters can
# be, they can be represented with their numeric codes:
# unicode \u0020 = hexadecimal \x20 = ascii #32 = ' '
show names.replace ///
  ([\w\x20]+)  # Lastname
  ,\u0020
  ([\w\x20]+)  # Firstname
  ///g, '$2 $1'
```
The `$1` and `$2` in the replacement string refer to the parenthesized parts in the pattern. `$1` is replaced by the text that matched against the first pair of parentheses, `$2` by the second, and so on, up to `$9`.
If you have more than nine parenthesized parts in your pattern, this will no longer work. But there is one more way to replace pieces of a string, which can also be useful in some other tricky situations. When the second argument given to the `replace` method is a function value instead of a string, this function is called every time a match is found, and the matched text is replaced by whatever the function returns. The arguments given to the function are the matched elements, similar to the values found in the arrays returned by `match`: the first one is the whole match, and after that comes one argument for every parenthesized part of the pattern.
```
eatOne = (match, amount, unit) ->
  amount = Number(amount) - 1
  if amount is 1
    unit = unit.slice 0, unit.length - 1
  else if amount is 0
    unit = unit + 's'
    amount = 'no'
  amount + ' ' + unit

stock = '1 lemon, 2 cabbages, and 101 eggs'
stock = stock.replace /(\d+) (\w+)/g, eatOne
show stock
```
#### Exercise 35
That last trick can be used to make the HTML-escaper from Functional Programming↑ more efficient. You may remember that it looked like this:
```
escapeHTML = (text) ->
  replacements = [[/&/g, '&amp;']
                  [/"/g, '&quot;']
                  [/</g, '&lt;']
                  [/>/g, '&gt;']]
  forEach replacements, (replace) ->
    text = text.replace replace[0], replace[1]
  text

show escapeHTML '< " & " >'
```
Write a new function `escapeHTML`, which does the same thing, but only calls `replace` once.
``# Compose a solution here``
##### View Solution
There are cases where the pattern you need to match against is not known while you are writing the code. Say we are writing a (very simple-minded) obscenity filter for a message board. We only want to allow messages that do not contain obscene words. The administrator of the board can specify a list of words that he or she considers unacceptable.
The most efficient way to check a piece of text for a set of words is to use a regular expression. If we have our word list as an array, we can build the regular expression like this:
```
badWords = ['ape', 'monkey', 'simian', 'gorilla', 'evolution']
pattern = new RegExp badWords.join('|'), 'i'
isAcceptable = (text) -> !pattern.test text

show isAcceptable 'Mmmm, grapes.'
show isAcceptable 'No more of that monkeybusiness, now.'
```
We could add `\b` patterns around the words, so that the thing about grapes would not be classified as unacceptable. That would also make the second one acceptable, though, which is probably not correct. Obscenity filters are hard to get right (and usually way too annoying to be a good idea).
The first argument to the `RegExp` constructor is a string containing the pattern, the second argument can be used to add case-insensitivity or globalness. When building a string to hold the pattern, you have to be careful with backslashes. Because, normally, backslashes are removed when a string is interpreted, any backslashes that must end up in the regular expression itself have to be escaped:
```
digits = new RegExp '\\d+'
show digits.test '101'
```
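Applying this to the word filter above, a hedged sketch of the `\b` variant could look like the following (`boundedPattern` is my name, not from the book's sources):

```
# '\\b' in the string becomes '\b' in the expression.
boundedPattern = new RegExp "\\b(#{badWords.join '|'})\\b", 'i'
isAcceptable = (text) -> not boundedPattern.test text
show isAcceptable 'Mmmm, grapes.'   # now true
```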
##### ○•○
The most important thing to know about regular expressions is that they exist, and can greatly enhance the power of your string-mangling code. They are so cryptic that you will probably have to look up the details on them the first ten times you want to make use of them. Persevere, and you will soon be off-handedly writing expressions that look like occult gibberish.
When your tasks become too complex for regular expressions, have a look at how CoffeeScript is implemented. It uses Jison, a parser generator. With it you define a grammar for the language you want your program to be able to read, i.e. parse. Jison then generates a module that can read data in that format. You integrate the module in your program and can then perform appropriate actions as different parts of the data are read. It is an advanced tool, one that begins where regular expressions leave off.
### Modularity
This chapter deals with the process of organising programs.30 In small programs, organisation rarely becomes a problem. As a program grows, however, it can reach a size where its structure and interpretation become hard to keep track of. Easily enough, such a program starts to look like a bowl of spaghetti, an amorphous mass in which everything seems to be connected to everything else.
In top down design you look at the overall structure of your application and divide it into smaller parts. There are many ways to do this, one way is to distinguish between technical and application specific areas. For example instead of inserting statements that write application activity in files in various places, the technical part — writing in a file, checking for errors and handling daily log roll-overs — can be placed in a logging utility as a technical service.
A useful technique is to use a layered approach, where lower layers do not know of higher layers. For example, when communicating between processes, lower level protocols can be encapsulated in a layer, and when data arrives — instead of a lower layer directly calling a function to handle the data in a higher layer — the lower layer raises an event31 that an interested higher layer can be listening to. It is analogous to what we saw in Error Handling↑, where a failing function has no knowledge of who will handle an exception, or how.
When structuring a program in CoffeeScript, we do two things. We separate it into smaller parts, called modules, each of which has a specific role, and we specify the relations between these parts.
In Searching↑, while finding routes, we made use of a number of functions described in Functional Programming↑. The chapter also defined some concepts that had nothing in particular to do with route planning, such as `flatten`, `partial` and the `BinaryHeap` type. `BinaryHeap` was treated as a black box: we only had to know how to use it, not how it works internally. That kind of encapsulation is the essence of modularity and of Object Orientation↑.
The `flatten` function was reused from the Underscore library. We needed the `partial` function in a few places and haphazardly just added it to the environment where we needed it. It would have been easy to simply add `partial` to the Underscore library, but that would mean we had to add it again every time a new version of Underscore is released.
We could create our own module instead and place the missing parts there, but then we would have to refer to two libraries whenever we use fundamental functions. Or we could create a module with our functions that includes Underscore and uses its functions to build our own, as sketched below. Our module would then depend on Underscore. When a module depends on another module, it uses functions or variables from that module, and will only work when that module is loaded.
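As a hedged sketch, such a module could look like the following; the file layout and this `partial` definition are assumptions for illustration, not the book's sources:

```
# our-tools.coffee: a tiny module that depends on Underscore.
_ = require 'underscore'

# A plausible definition of the missing 'partial'.
partial = (func, fixed...) ->
  (args...) -> func fixed..., args...

# Export our addition; Underscore itself stays separate.
(exports ? this).partial = partial
```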
It is a good idea to make sure dependencies never form a circle. Not only do circular dependencies create a practical problem (if module `A` and `B` depend on each other, which one should be loaded first?), it also makes the relation between the modules less straightforward, and can result in a modularised version of the spaghetti I mentioned earlier.
##### ○•○
Most modern programming languages have some kind of module system built in. In CoffeeScript it sort of depends… In the standard CoffeeScript environment we have a module system based on CommonJS `require`. When using CoffeeScript in other environments, such as on a page in a web browser, then we do not have `require` and must rely on system specific services or once again invent something ourselves32.
The most obvious way to start is to put every module in a different file. This makes it clear which code belongs to which module. In this chapter we will return to the Seed of Life example you saw in the Preface↑ and make a server controlled, animated version of it. For this purpose I have extracted the mathematical calculations behind the drawing and placed them in a file, `'10-MathFix.coffee'`. It contains a `CircularPosition` class we can use to get positions about a sixth of a circle apart on the unit circle, and a fix for a floating point rounding error. It also uses the prelude and has a small print utility at the end. It is not a module yet; we will stepwise refine it so it can be used in both a server and a browser environment.
```
require "./prelude"
Pi2 = Math.PI*2

# Create an array with angles on the unit circle.
angles = (angle for angle in [0...Pi2] by 1/3*Math.PI)

# Remove the last element if 2*PI were included
# due to floating point rounding on additions.
epsilon = 1e-14
lastAngle = angles[angles.length - 1]
# Use an interval to test floating point value
if Pi2 - epsilon < lastAngle < Pi2 + epsilon
  angles.length = angles.length - 1

# Encapsulation of a pair of (x, y) coordinates
class Point
  constructor: (@x, @y) ->
  toString: ->
    "{x:#{@x.toPrecision 4}," +
    " y:#{@y.toPrecision 4}}"

# Math class that returns points on the unit
# circle, offset by step if given non-zero.
class CircularPosition
  constructor: (@_step = 0) ->
    @_count = 0
  nextPoint: ->
    index = @_count % angles.length
    angle = angles[index] + @_step * @_count++
    new Point Math.cos(angle), Math.sin(angle)

circ = new CircularPosition 0.01
for i in [0...6]
  show "#{i}: #{circ.nextPoint()}"
```
The first thing to do is to remove the print utility at the end33. The next is the `require './prelude'` line. While the prelude has served us well in the course of this book, it is not intended for modules. The prelude includes a variety of definitions so we have been free to focus on each aspect of CoffeeScript without distraction. However, it pulls these definitions into a shared namespace, either directly or via the `globalize` function (described later in this chapter). This namespace is shared between all modules that a program consists of, and since the purpose of a module is for us to be able to reuse it in various projects, a module should not ‘pollute’ this scarce, shared resource. In other words, you can use the prelude when you are experimenting with a new algorithm, but do not use it when you create a reusable module.
##### ○•○
When much code is loaded into an environment, it will use many top-level variable names. Once there is more code than you can really keep track of, it becomes very easy to accidentally use a name that was already used for something else. This will break the code that used the original value. The proliferation of top-level variables is called name-space pollution, and it has been a rather severe problem in JavaScript — the language will not warn you when you redefine an existing variable.
CoffeeScript greatly reduces this problem by automatically encapsulating modules. You only have to avoid directly using top-level variables. In particular, modules should not use top-level variables for values that are not part of their external interface.
##### ○•○
In CoffeeScript, ‘top-level’ variables all live together in a single place. In browsers, this place is an object that can be found under the name `window`. The name is somewhat odd; `environment` or `top` would have made more sense, but since browsers associate an environment with a window or a ‘frame’, someone decided that `window` was a logical name. In the standard CoffeeScript environment it is called `global`.
```
show global.process.argv[0]
show global.console.log is console.log
show global.global.global.global.global.console
```
As the third line shows, the name `global` is merely a property of this environment object, pointing at itself.
##### ○•○
Not being able to define any internal functions and variables at all in your modules is, of course, not very practical. Fortunately, there is a trick to get around this. We could write all the code for the module inside a function, and then add the variables that are part of the module’s interface to the top-level object. Because they were created in the same parent function, all the functions of the module can see each other, but code outside of the module can not. The wrapping in a function is done by CoffeeScript automatically. In the server environment, instead of assigning directly to the top-level environment our module will export its definitions. In the browser we have to assign to the top-level `window`, which when a module is loaded by a browser is the same as `this`. All we have to do is append this one line to our math module:
``(exports ? this).CircularPosition = CircularPosition``
If `exports` is defined then `CircularPosition` is added to it otherwise it is added to the top-level environment object. This change has been done in `'10-Circular.coffee'` and the module is now usable from another module. The other definitions such as the `Point` class are not visible to other modules. Objects of `Point` type are of course visible as they are returned by `nextPoint`. You can use this modular information hiding when creating classes, if you want some definitions to be more private than simply naming them with an `'_'` in front. We can verify this in an ad-hoc test, this one is called `'10-CircularTest.coffee'`:
```
cp = require "./10-Circular"
show = console.log

circ = new cp.CircularPosition 0.01
for i in [0...6]
  show "#{i}: #{circ.nextPoint()}"

try
  show "Instantiating a new Point:"
  p = new cp.Point 0, 0
  show "Created a Point"
catch e then show e.message

show "CircularPosition namespace:"
show cp
```
The `show` function was defined in the prelude, so we no longer have access to it. In the underlying environment there is a function `console.log` that can be used for output. Instead of using `console.log` directly, defining `show` as an alias for it makes it easy to copy such code into another environment, such as a browser where `console.log` may not exist. When we have many `show` calls, we only have to change one place to redirect them to `alert` or debug output.
The process that we have been through — identifying a rounding error, extracting the implementation into a separate module where it is fixed — is called refactoring and it is an essential part of developing applications. Looking through code and finding places where it can be improved in functionality or in clarity results in a better overall system.
##### ○•○
Designing an interface for a module or an object type is one of the subtler aspects of programming. On the one hand, you do not want to expose too many details. They will only get in the way when using the module. On the other hand, you do not want to be too simple and general, because that might make it impossible to use the module in complex or specialised situations.
Sometimes the solution is to provide two interfaces, a detailed ‘low-level’ one for complicated things, and a simple ‘high-level’ one for straightforward situations. The second one can usually be built very easily using the tools provided by the first one.
In other cases, you just have to find the right idea around which to base your interface. The best way to learn the value of good interface design is, unfortunately, to use bad interfaces. Once you get fed up with them, you will figure out a way to improve them, and learn a lot in the process. Try not to assume that a lousy interface is ‘just the way it is’. Fix it, or wrap it in a new interface that is better.
##### ○•○
Just like an object type, a module has an interface. In collection-of-functions modules such as Underscore, the interface usually consists of all the functions that are defined in the module. In other cases, the interface of the module is only a small part of the functions defined inside it.
For example, our manuscript-to-HTML system from Functional Programming↑ only needs an interface of a single function, `renderFile`. The sub-system for building HTML would be a separate module. For modules which only define a single type of object, such as `Dictionary`, the object’s interface is the same as the module’s interface.
##### ○•○
There are cases where a module will export so many variables that it is a bad idea to put them all into the top-level environment. In cases like this, you can do what the standard `Math` object does, and represent the module as a single object whose properties are the functions and values it exports. For example…
```
HTML =
  tag: (name, content, properties) ->
    name: name
    properties: properties
    content: content
  link: (target, text) ->
    HTML.tag 'a', [text], {href: target}
  # ... many more HTML-producing functions ...
```
When you need the content of such a module so often that it becomes cumbersome to constantly type `HTML`, you can always move it into the top-level environment using a function like `globalize`.
```
# As defined in the prelude
globalize = (ns, target = global) ->
  target[name] = ns[name] for name of ns

# Use window in a browser, global otherwise
globalize HTML, window ? global

show link 'http://citeseerx.ist.psu.edu/viewdoc/' +
    'download?doi=10.1.1.102.244&rep=rep1&type=pdf',
  'What Every Computer Scientist Should Know ' +
    'About Floating-Point Arithmetic'
```
You can even combine the function and object approaches, by putting the internal variables of the module inside a function, and having this function return an object containing its external interface.
##### ○•○
When adding methods to standard prototypes, such as those of `Array` and `Object`, a problem similar to name-space pollution occurs. If two modules decide to add a `map` method to `Array.prototype`, you might have a problem. If these two versions of `map` have precisely the same effect, things will continue to work, but only by sheer luck.
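One defensive habit, sketched here with CoffeeScript's existential assignment, is to only define the method when it is missing. It avoids clobbering an existing `map`, though it does not resolve which version the other module ends up with:

```
# Only add 'map' if nobody defined it already.
Array::map ?= (f) ->
  result = []
  result.push f(element) for element in this
  result
```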
##### ○•○
There are functions which require a lot of arguments. Sometimes this means they are just badly designed, and can easily be remedied by splitting them into a few more modest functions. But in other cases, there is no way around it. Typically, some of these arguments have a sensible ‘default’ value. We could, for example, write yet another extended version of `range`.
```
range = (start, end, stepSize, length) ->
  if stepSize is undefined
    stepSize = 1
  if end is undefined
    end = start + stepSize * (length - 1)
  result = []
  while start <= end
    result.push start
    start += stepSize
  result

show range 0, undefined, 4, 5
```
It can get hard to remember which argument goes where, not to mention the annoyance of having to pass `undefined` as a second argument when a `length` argument is used. We can make passing arguments to this unfriendly function more comprehensible by wrapping them in an object.
```
defaultTo = (object, values) ->
  for name, value of values
    if not object.hasOwnProperty name
      object[name] = value

range = (args) ->
  defaultTo args, {start: 0, stepSize: 1}
  if args.end is undefined
    args.end = args.start +
      args.stepSize * (args.length - 1)
  result = []
  while args.start <= args.end
    result.push args.start
    args.start += args.stepSize
  result

show range {stepSize: 4, length: 5}
```
The `defaultTo` function is useful for adding default values to an object. It copies the properties of its second argument into its first argument, skipping those that already have a value.
##### ○•○
For the Seed of Life from the Preface↑, to use the `'10-Circular.coffee'` module in the web application, we have a few things to do. First we can replace the first line with `kup = require './prelude/coffeekup'`. The source code assigned to `webpage` has a similar structure to the HTML we saw in Functional Programming↑, but instead of being object attributes or text, it is CoffeeScript function calls.
It is like we have HTML embedded as a language extension in CoffeeScript. The correspondence between HTML and CoffeeKup is practically 1:1, so when you know which HTML tag you want to use, it is straightforward to write it in CoffeeKup. You can find more information on the CoffeeKup website or, even better, take the time to read the source code; it is only a few hundred lines of CoffeeScript and quite readable. There are examples in its standard distribution and in A Beginner’s Introduction to CoffeeKup by Mark Hahn.
In HTML5 there is a `canvas` tag with which you can create a drawing area. Then you can draw with vector graphics commands on the area. The commands are documented by W3C in HTML Canvas 2D Context. If you are already familiar with vector graphics then the HTML5 Canvas Cheat Sheet may be all you need to create your own drawings. When you choose the technologies that you base your application on, then look for vendor independent, standardized technologies first. They tend to have a long lifespan, so you learn something you can bring with you from task to task. An alternative to the W3C `canvas` is Adobe Flash.
Browsers load JavaScript files when they find a `<script>` tag with an `src` attribute in the HTML of a web page. The extension `.js` is usually used for files containing JavaScript code. We could let the `coffee` command line compiler produce JavaScript files for us. But then we have two files for every module, and someone is bound to forget to compile a file someday, which could lead to strange behaviour and some difficult-to-find bugs. So instead let’s keep our CoffeeScript files and have the server compile them on the fly. That means we let the web page request a CoffeeScript file, but we will send the browser the corresponding JavaScript.
``script src: './10-Circular.coffee'``
Adding a `type: 'text/coffeescript'` attribute would tell the client to expect and compile CoffeeScript code (if the CoffeeScript compiler was loaded in the client). While that may sound like a good idea, it has some problems. The biggest one is that the code is only available after the code under the `coffeescript` function has run. And that function is where we want to use the class from the imported module. So forget about that possibility and compile on the server instead.
```
circ = new CircularPosition()
for i in [0...6]
  pt = circ.nextPoint()
  circle ctx, x+100*pt.x, y+100*pt.y
```
Replacing the loop that used π and trigonometric functions with the one above is the last step for the client side. It is only a few visible changes, but we got rid of the rounding error; keeping that fix in the module means it does not bloat the client with irrelevant details. Here is the client from `'10-SeedLife.coffee'`:
```
kup = require './prelude/coffeekup'

# Client-side web page with canvas
webpage = kup.render ->
  doctype 5
  html ->
    head ->
      meta charset: 'utf-8'
      title 'My drawing | My awesome website'
      style '''
        body {font-family: sans-serif}
        header, nav, section, footer {display: block}
        '''
      # DO NOT USE: type: 'text/coffeescript'
      script src: './10-Circular.coffee'
      coffeescript ->
        draw = (ctx, x, y) ->
          circle = (ctx, x, y) ->
            ctx.beginPath()
            ctx.arc x, y, 100, 0, 2*Math.PI, false
            ctx.stroke()
          ctx.strokeStyle = 'rgba(255,40,20,0.7)'
          circle ctx, x, y
          circ = new CircularPosition()
          for i in [0...6]
            pt = circ.nextPoint()
            circle ctx, x+100*pt.x, y+100*pt.y
        window.onload = ->
          canvas = document.getElementById 'drawCanvas'
          context = canvas.getContext '2d'
          draw context, 300, 200
    body ->
      header ->
        h1 'Seed of Life'
      canvas id: 'drawCanvas', width: 600, height: 400
```
##### ○•○
It is possible not to use a `script` tag, but to fetch the content of a file directly, and then use the `eval` function to execute it. This makes script loading instantaneous, and thus easier to deal with. `eval`, short for ‘evaluate’, is an interesting function. You give it a string value, and it will execute the content of the string as code.
```
runOnDemand ->
  eval 'function IamJavaScript() {' +
       ' alert(\"Repeat after me:' +
       ' Give me more {();};.\");};' +
       ' IamJavaScript();'
```
The global function will however only accept JavaScript. To get a usable CoffeeScript `eval` function you need to refer to the `'coffee-script'` library. In a browser it can be like this:
```
runOnDemand ->
  CoffeeScript.eval 'alert ((a, b) -> a + b) 3, 4'
```
It is similar in the server environment. But notice the `require` statement — we are not interested in creating instances of the `CoffeeScript` class, only in calling its static `eval` method — so that line helps us avoid writing: `cs.CoffeeScript.eval`
```
cs = (require 'coffee-script').CoffeeScript
cs.eval 'show ((a, b) -> a + b) 3, 4'
```
You can imagine that `eval` can be used to do some interesting things. Code can build new code, and run it. In most cases, however, problems that can be solved with creative uses of `eval` can also be solved with creative uses of anonymous functions, and the latter is less likely to cause strange problems and security nightmares. When `eval` is called inside a function, all new variables will become local to that function.
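A minimal sketch of that last point, assuming the non-strict semantics of the underlying JavaScript of the day:

```
probe = ->
  # The variable created inside eval stays local to 'probe'.
  eval 'var secret = 42;'
  show typeof secret   # 'number' in here

probe()
show typeof secret     # 'undefined' out here
```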
##### ○•○
A web server functions by responding to web browser (client) requests. The minimal implementation given in Seed of Life creates a server that takes a function as an argument. Every time a request comes in, the function is called. Such a function is called an event handler or sometimes a callback. This function prints a message and creates a response by writing a header, describing the content of the response, and adding the web page as the response body. The server is started by giving it a port number to listen to. The server then goes to sleep, waiting for a client to request something of it. You can do that by typing `http://localhost:3389/` in your web browser34. To stop the server press `CTRL-C`. It’s similar to this:
```
# Server-side HTTP server
http = require "http"

server = http.createServer (req, res) ->
  show "#{req.client.remoteAddress} " +
       "#{req.method} #{req.url}"
  res.writeHead 200, "Content-Type": "text/html"
  res.write webpage
  res.end()

server.listen 3389
show "Server running at"
show server.address()
```
That is a bit too minimal for our Seed of Life application because we have to send either the web page or our module depending on what the client will be asking for. We could replace our code with a web server framework but that is not very instructive, it only shows you that something can be done, not how it works internally.
Instead let’s expand the server with the minimum of what is required to get it to work. You can reverse engineer the client/server communication by adding a `show req` to the minimal server. Using the new web page we can see that three items are requested by the client:
| Path | Content |
| --- | --- |
| `'/'` | the web page |
| `'/10-Circular.coffee'` | the code module |
| `'/favicon.ico'` | can be ignored |
That is enough information to add some if statements to it.
For the module request we have to read the file, compile it with CoffeeScript (using the same module as `eval`) and return the compiled code, which the browser can understand, in the response. When a request is made for anything else, the response is a simple ‘not found’: a `404` in HTTP.
If you looked at how the prelude reads a file, note that is not the way to read a file in a server. The prelude uses a synchronous function, which means that everything stops until the file has been read. In the world of servers hard disks are slow, and one request should not stop other requests. Using `readFile` solves this because its last argument is a function that will only be run when the file has been read — the server can then process other requests in the meantime.
```
# Server-side HTTP server
show = console.log
http = require "http"
fs = require "fs"
cs = require("coffee-script").CoffeeScript

server = http.createServer (req, res) ->
  show "#{req.client.remoteAddress} " +
       "#{req.method} #{req.url}"
  if req.method is "GET"
    if req.url is "/"
      res.writeHead 200, "Content-Type": "text/html"
      res.write webpage
      res.end()
      return
    else if req.url is "/10-Circular.coffee"
      fs.readFile ".#{req.url}", "utf8", (err, data) ->
        if err then throw err
        compiledContent = cs.compile data
        res.writeHead 200,
          "Content-Type": "application/javascript"
        res.write compiledContent
        res.end()
      return
  res.writeHead 404, "Content-Type": "text/html"
  res.write "404 Not found"
  res.end()

server.listen 3389
show "Server running at"
show server.address()
```
I think you can see that this code practically begs to be abstracted and modularized. Instead of an `if` chain, we could have a function with an object as argument, where each attribute is a request and each value a function to handle the request; a sketch follows below. Then there is error handling: if the file is not found, the server throws an error and stops. The purpose of `'10-SeedLife.coffee'` is to show how you can use a module on a web page, so refactoring and beautifying the server is up to you35.
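As a hedged sketch of that refactoring (the `routes` and `dispatch` names are mine, not part of `'10-SeedLife.coffee'`):

```
# Map each path to a handler; unknown paths fall through to 404.
routes =
  '/': (req, res) ->
    res.writeHead 200, 'Content-Type': 'text/html'
    res.end webpage
  '/10-Circular.coffee': (req, res) ->
    fs.readFile ".#{req.url}", 'utf8', (err, data) ->
      if err then throw err
      res.writeHead 200,
        'Content-Type': 'application/javascript'
      res.end cs.compile data

dispatch = (req, res) ->
  handler = routes[req.url]
  if req.method is 'GET' and handler?
    handler req, res
  else
    res.writeHead 404, 'Content-Type': 'text/html'
    res.end '404 Not found'
```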
Building a program as a set of nice, small modules often means the program will use a lot of different files. When programming for the web, having lots of small code files on a page tends to make the page slower to load. This does not have to be a problem though. You can write and test your program as a number of small files, and put them all into a single big file when ‘publishing’ the program to the web. The CoffeeScript compiler has a join feature for this.
##### ○•○
A module or group of modules that can be useful in more than one program is usually called a library. For many programming languages, there is a huge set of quality libraries available. This means programmers do not have to start from scratch all the time, which can make them a lot more productive.
For CoffeeScript, unfortunately, the number of available libraries is not very large. But you can use JavaScript libraries as a stopgap. Underscore is an example of a good library with its ‘basic’ tools, things like `map` and `clone`. Other languages tend to provide such obviously useful things as built-in standard features, but with CoffeeScript you will have to either build a collection of them for yourself or use a library.
Using a library is recommended: it is less work, and the code in a library has usually been tested more thoroughly than the things you write yourself. Some things to check when selecting a library are functionality, adequate documentation, and readable source code — and that the library is small enough that you can understand and fix bugs in it, if need be on your own.
##### ○•○
Beyond the concept of modules and programs is the world of distributed programming where processes running on different machines collaborate on solving a task or interact with each other in near real time. One protocol that enables distributed programming is WebSockets.
A WebSocket is a bi-directional, full-duplex communications channel over a TCP socket, possibly the same socket that a web server is using. With bi-directional, full-duplex support a server and its clients can send messages back and forth concurrently. It opens the door to many kinds of web applications that would have been difficult to implement over HTTP.
Most of the latest web browsers have support for WebSockets. Some have it disabled by default because of potential security issues in the draft protocol; the prelude shows how to enable WebSocket support in Firefox and Opera. It is enabled in Chrome and Safari. To see if your web browser works with WebSockets you can run `'10-TestWebSocket.coffee'`.
##### ○•○
To connect a client to a WebSocket server, you create a `WebSocket` object with a server as its endpoint. Note the `ws` protocol instead of `http`:
``websocket = new WebSocket 'ws://localhost:8080/'``
You can attach functions to a `WebSocket` object, so your code can react when certain events occur. There are four of them:
• `onopen`
• `onclose`
• `onmessage`
• `onerror`
Sending a message to the server is done with `send`. For example:
```
websocket.onopen = (evt) ->
  writeToScreen 'CONNECTED'
  websocket.send 'WebSocket works!'
```
That is how simple it is with a client on a web page. In CoffeeScript’s server-side environment there is not yet support for WebSockets, but that is easily fixed with the `ws` library from Jacek Becela. It is less than 200 lines when translated into CoffeeScript. It is present in the prelude or you can use it directly from `prelude/ws.coffee`.
To create a server you pass a function to its `createServer` method; the function receives a `websocket` object as an argument, on which you can attach methods to listen for events. To send a message to a client, call its `write` method with a string. Here is an abbreviated use of it:
```
wsHandler = (websocket) ->
  websocket.on 'connect', (resource) ->
    show 'connect: ' + resource
    # ...
  websocket.on 'data', (data) ->
    show data                    # process data
    websocket.write 'Cowabunga!' # respond
  # ...

wsServer = ws.createServer wsHandler
wsServer.listen 8080
```
##### ○•○
So without further ado here follows an animated version of Seed of Life… Find it in `'10-WebSocketLife.coffee'` and change it any way you like.
```
kup = require './prelude/coffeekup'

# Web page with canvas and client-side WebSocket
webpage = kup.render ->
  doctype 5
  html ->
    head ->
      meta charset: 'utf-8'
      title 'My animation | My awesome website'
      style '''
        body {font-family: sans-serif}
        header, nav, section, footer {display: block}
        '''
      coffeescript ->
        show = (msg) -> console.log msg
        color = 'rgba(255,40,20,0.7)'
        circle = (ctx, x, y) ->
          ctx.strokeStyle = color
          ctx.beginPath()
          ctx.arc x, y, 100, 0, 2*Math.PI, false
          ctx.stroke()
        addElement = (ni, num, text) ->
          newdiv = document.createElement 'div'
          newdiv.setAttribute 'id', 'div' + num
          newdiv.innerHTML = text
          ni.appendChild newdiv
        wsUri = 'ws://localhost:8080/'
        websocket = undefined
        num = 0
        socketClient = (buffer, ctx, x, y) ->
          websocket = new WebSocket wsUri
          websocket.onopen = (evt) -> show 'Connected'
          websocket.onclose = (evt) -> show 'Closed'
          websocket.onerror = (evt) -> show 'Error: ' + evt.data
          websocket.onmessage = (evt) ->
            #show evt.data
            addElement buffer, num++, evt.data
            pt = JSON.parse evt.data
            if pt.color? then color = pt.color
            circle ctx, x+100*pt.x, y+100*pt.y
        window.onload = ->
          canvas = document.getElementById 'drawCanvas'
          context = canvas.getContext '2d'
          buffer = document.getElementById 'message'
          socketClient buffer, context, 300, 200
        window.sendMessage = ->
          msg = document.getElementById('entryfield').value
          websocket.send msg
    body ->
      header ->
        h1 'Seed of Life'
      input id:'entryfield', value:'rgba(40,200,25,0.7)'
      button type: 'button', onclick: 'sendMessage()',
        'Change Color'
      br()
      canvas id: 'drawCanvas', width: 600, height: 400
      div id: 'message'

# Server-side WebSocket server
ws = require './prelude/ws'
cp = require './10-Circular'

wsHandler = (websocket) ->
  websocket.on 'connect', (resource) ->
    show 'connect: ' + resource
    # close connection after 10s
    setTimeout websocket.end, 10 * 1000
  websocket.on 'data', (data) ->
    show data # process data
    blue = 'rgba(40,20,255,0.7)'
    websocket.write JSON.stringify
      color: if data is '' then blue else data
  websocket.on 'close', ->
    show 'closing'
    process.exit 0 # Exit server completely
  circ = new cp.CircularPosition 0.01
  annoy = setInterval (->
    websocket.write JSON.stringify circ.nextPoint()), 20

wsServer = ws.createServer wsHandler
wsServer.listen 8080

# Launch test server and client UI
require './prelude'
viewServer webpage
```
##### ○•○
Through the examples, exercises and explanations you have a foundation for your own projects: a collaborative web application, a multi-user game, a library with an aesthetically pleasing implementation, …
First choose what your project is going to be about, then look to GitHub for supporting code and libraries. A couple of interesting ones for web applications are Zappa and SocketStream.
If you would like a broader foundation in computer science then read: Concepts, Techniques, and Models of Computer Programming by Peter van Roy and Seif Haridi. Not a word of CoffeeScript, but plenty of insight.
He who has begun has half done. Dare to be wise; begin!
Quintus Horatius Flaccus
## Part IV. Appendix
### Language extras
Most of the CoffeeScript language has been introduced in Smooth CoffeeScript. This appendix illustrates some extra language constructs and idioms that may come in handy.
##### ○•○
There is a statement called `switch` which can be used to choose which code to execute based on some value. Alternatives include a chain of `if` statements or an associative data structure.
```
weatherAdvice = (weather) ->
  show 'When it is ' + weather
  switch weather
    when 'sunny'
      show 'Dress lightly.'
      show 'Go outside.'
    when 'cloudy'
      show 'Go outside.'
    when 'tornado', 'hurricane'
      show 'Seek shelter'
    else
      show 'Unknown weather type: ' + weather

weatherAdvice 'sunny'
weatherAdvice 'cloudy'
weatherAdvice 'tornado'
weatherAdvice 'hailstorm'
```
Inside the block opened by `switch`, you can write a number of `when` labels. The program will jump to the label that corresponds to the value that `switch` was given, or to `else` if no matching value is found. Then it executes statements in the following block until it reaches the next `when` or `else` statement. Unlike some other languages there is no fall-through between `when` cases, so no `break` statement is needed. You can have a comma-separated list after a `when`; any of the values is then used to find a match.
##### ○•○
Closely related to `break`, there is `continue`. They can be used in the same places. While `break` jumps out of a loop and causes the program to proceed after the loop, `continue` jumps to the next iteration of the loop.
```
for i in [20...30]
  if i % 3 isnt 0
    continue
  show i + ' is divisible by three.'
```
A similar effect can usually be produced using just `if`, but there are cases where `continue` looks nicer.
##### ○•○
Pattern matching has been touched upon; a few more examples can clarify its scope. You can assign an object to an anonymous object with matching attribute names.
```
class Point
  constructor: (@x, @y) ->

pt = new Point 3, 4
{x, y} = pt
show "x is #{x} and y is #{y}"
```
Attribute names can be inferred from an anonymous object. A function can have an anonymous object as argument and extract the attributes as variables.
```
firstName = "Alan"
lastName = "Turing"
name = {firstName, lastName}
show name

decorate = ({firstName, lastName}) ->
  show "Distinguished #{firstName} " +
       "of the #{lastName} family."
decorate name
```
##### ○•○
Unicode can be used in identifiers. Letter forms that are very similar to characters in the western alphabets should be avoided. It can be difficult in internationally shared projects due to different keyboard layouts, but useful in teaching math or in a local language.
```
pi = π = Math.PI
sphereSurfaceArea = (r) -> 4 * π * r * r
radius = 1
show '4 * π * r * r when r = ' + radius
show sphereSurfaceArea radius
```
##### ○•○
Qualifying a `for` statement with a `when` clause can be used to filter array or object elements on a logical condition. Great for one-liners.
```
evens = (n) -> i for i in [0..n] when i % 2 is 0
show evens 6

steppenwolf =
  title: 'Tonight at the Magic Theater'
  warning: 'For Madmen only'
  caveat: 'Price of Admittance: Your Mind.'
  caution: 'Not for Everybody.'

stipulations = (text for key, text of steppenwolf \
  when key in ['warning', 'caveat'])
show stipulations

show ultimatum for ultimatum in stipulations \
  when ultimatum.match /Price/
```
##### ○•○
Destructuring assignment can be used to swap or reassign variables. It is also handy for extracting values or returning multiple values from a function.
```
tautounlogical = "the reason is because I say so"
splitStringAt = (str, n) -> [str.substring(0,n), str.substring(n)]
[pre, post] = splitStringAt tautounlogical, 14
[pre, post] = [post, pre] # swap
show "#{pre} #{post}"
```
```
[re,mi,fa,sol,la,ti] = [1..6]
[dal,ra...,mim] = [ti,re,fa,sol,la,mi]
show "#{dal}, #{ra} and #{mim}"
[key, word] = if re > ti then [mi, fa] else [fa, mi]
show "#{key} and #{word}"
```
##### ○•○
A function can be bound to the `this` value that is in effect when it is defined. This can be needed when you are using event handlers or callback-based libraries, e.g. jQuery. Instead of `->` use `=>` to bind `this`.
In the following example, `a.display` would show `undefined` when called in the `Container` if `display` were defined with `->`.
```
class Widget
  id: 'I am a widget'
  display: => show @id

class Container
  id: 'I am a container'
  callback: (f) ->
    show @id
    f()

a = new Widget
a.display()
b = new Container
b.callback a.display
```
##### ○•○
With the `do` statement you can call a named or an anonymous function:
```
n = 3
f = -> show "Say: 'Yes!'"
do f
(do -> show "Yes!") while n-- > 0
```
Which is reminiscent of the syntax for a function that takes a function as its parameter.
```
echoEchoEcho = (msg) -> msg() + msg() + msg()
show echoEchoEcho -> "No"
```
The `do` statement captures the environment, so it is available inside its block. The `setTimeout` function calls the innermost function after the loop has finished. Without capture, the `i` variable is `4` by the time the callbacks run.
```
runOnDemand ->
  for i in [1..3]
    do (i) ->
      setTimeout (-> show 'With do: ' + i), 0
  for i in [1..3]
    setTimeout (-> show 'Without: ' + i), 0
```
### Binary Heaps
In Searching↑, the binary heap was introduced as a method to store a collection of objects in such a way that the smallest element can be quickly found. As promised, this appendix will explain the details behind this data structure.
Consider again the problem we needed to solve. The A* algorithm created large amounts of small objects, and had to keep these in an ‘open list’. It was also constantly removing the smallest element from this list. The simplest approach would be to just keep all the objects in an array, and search for the smallest one when we need it. But, unless we have a lot of time, this will not do. Finding the smallest element in an unsorted array requires going over the whole array, and checking each element.
The next solution would be, of course, to sort our array. CoffeeScript arrays have a wonderful `sort` method, which can be used to do the heavy work. Unfortunately, re-sorting a whole array every time an element is removed is more work than searching for a minimum value in an unsorted array. Some tricks can be used, such as, instead of re-sorting the whole array, just making sure new values are inserted in the right place so that the array, which was sorted before, stays sorted. This comes closer to the approach a binary heap uses, but inserting a value in the middle of an array requires moving all the elements after it one place up, which is still just too slow.
Another approach is to not use an array at all, but to store the values in a set of interconnected objects. A simple form of this is to have every object hold one value and two (or fewer) links to other objects. There is one root object, holding the smallest value, which is used to access all the other objects. Links always point to objects holding greater values, so the whole structure branches out from the root.
Such structures are usually called trees, because of the way they branch. Now, when you need the smallest element, you just take off the top element and rearrange the tree so that one of the top element’s children — the one with the lowest value — becomes the new top. When inserting new elements, you ‘descend’ the tree until you find an element less than the new element, and insert it there. This takes a lot less searching than a sorted array does, but it has the disadvantage of creating a lot of objects, which also slows things down.
##### ○•○
A binary heap, then, does make use of a sorted array, but it is only partially sorted, much like the tree above. Instead of objects, the positions in the array are used to form a tree.
Array element `1` is the root of the tree, array element `2` and `3` are its children, and in general array element `X` has children `X * 2` and `X * 2 + 1`. You can see why this structure is called a ‘heap’. Note that this array starts at `1`, while CoffeeScript arrays start at `0`. The heap will always keep the smallest element in position `1`, and make sure that for every element in the array at position `X`, the element at `X / 2` (round down) is smaller.
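In 0-based CoffeeScript arrays the same arithmetic shifts by one. A small sketch of the index helpers (the names are mine; the full implementation below uses the same formulas inline):

```
# 0-based index arithmetic for a binary heap.
parentIndex = (n) -> Math.floor((n + 1) / 2) - 1
childIndices = (n) -> [(n + 1) * 2 - 1, (n + 1) * 2]

show parentIndex 2    # 0: the element at index 2 hangs under the root
show childIndices 0   # [1, 2]: the root's two children
```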
Finding the smallest element is now a matter of taking the element at position `1`. But when this element is removed, the heap must make sure that there are no holes left in the array. To do this, it takes the last element in the array and moves it to the start, and then compares it to its child elements at position `2` and `3`. It is likely to be greater, so it is exchanged with one of them, and the process of comparing it with its children is repeated for the new position, and so on, until it comes to a position where its children are greater, or a position where it has no children.
```
[2, 3, 5, 4, 8, 7, 6]
# Take out 2, move 6 to the front.
[6, 3, 5, 4, 8, 7]
# 6 is greater than its first child 3, so swap them.
[3, 6, 5, 4, 8, 7]
# Now 6 has children 4 and 8 (position 4 and 5).
# It is greater than 4, so we swap again.
[3, 4, 5, 6, 8, 7]
# 6 is in position 4, and has no more children.
# The heap is in order again.
```
Similarly, when an element has to be added to the heap, it is put at the end of the array and allowed to ‘bubble’ up by repeatedly exchanging it with its parent, until we find a parent that is less than the new node.
```
[3, 4, 5, 6, 8, 7]
# Element 2 gets added again, it starts at the back.
[3, 4, 5, 6, 8, 7, 2]
# 2 is in position 7, its parent is at 3, which
# is a 5. 5 is greater than 2, so we swap.
[3, 4, 2, 6, 8, 7, 5]
# The parent of position 3 is position 1.
# Again, we swap.
[2, 4, 3, 6, 8, 7, 5]
# The element can not go further than position 1,
# so we are done.
```
Note how adding or removing an element does not require it to be compared with every element in the array. In fact, because the jumps between parents and children get bigger as the array gets bigger, this advantage is especially large when we have a lot of elements36.
##### ○•○
Here is the full code of a binary heap implementation. Two things to note are that, instead of directly comparing the elements put into the heap, a function (`scoreFunction`) is first applied to them, so that it becomes possible to store objects that can not be directly compared. The default is the identity function.
Also, because CoffeeScript arrays start at `0`, and the parent/child calculations use a system that starts at `1`, there are a few strange calculations to compensate.
```
class BinaryHeap
  # Public
  #--------
  constructor: (@scoreFunction = (x) -> x) ->
    @content = []

  push: (element) ->
    # Add the new element to the end of the array.
    @content.push element
    # Allow it to bubble up.
    @_bubbleUp @content.length - 1

  pop: ->
    # Store the first element so we can return it later.
    result = @content[0]
    # Get the element at the end of the array.
    end = @content.pop()
    # If there are any elements left, put the end
    # element at the start, and let it sink down.
    if @content.length > 0
      @content[0] = end
      @_sinkDown 0
    result

  size: -> @content.length

  remove: (node) ->
    len = @content.length
    # To remove a value, we must search through the
    # array to find it.
    for i in [0...len]
      if @content[i] is node
        # When it is found, the process seen in 'pop'
        # is repeated to fill up the hole.
        end = @content.pop()
        if i isnt len - 1
          @content[i] = end
          if @scoreFunction(end) < @scoreFunction(node)
            @_bubbleUp i
          else
            @_sinkDown i
        return
    throw new Error 'Node not found.'

  # Private
  #---------
  _bubbleUp: (n) ->
    # Fetch the element that has to be moved.
    element = @content[n]
    # When at 0, an element can not go up any further.
    while n > 0
      # Compute the parent element index, and fetch it.
      parentN = Math.floor((n + 1) / 2) - 1
      parent = @content[parentN]
      # Swap the elements if the parent is greater.
      if @scoreFunction(element) < @scoreFunction(parent)
        @content[parentN] = element
        @content[n] = parent
        # Update 'n' to continue at the new position.
        n = parentN
      # Found a parent that is less,
      # no need to move it further.
      else break
    return

  _sinkDown: (n) ->
    # Look up the target element and its score.
    length = @content.length
    element = @content[n]
    elemScore = @scoreFunction element
    loop
      # Compute the indices of the child elements.
      child2N = (n + 1) * 2
      child1N = child2N - 1
      # This is used to store the new position of
      # the element, if any.
      swap = null
      # If the first child exists (is inside the array)...
      if child1N < length
        # Look it up and compute its score.
        child1 = @content[child1N]
        child1Score = @scoreFunction child1
        # If the score is less than our element's,
        # we need to swap.
        if child1Score < elemScore
          swap = child1N
      # Do the same checks for the other child.
      if child2N < length
        child2 = @content[child2N]
        child2Score = @scoreFunction child2
        compScore = if swap is null
          elemScore
        else child1Score
        if child2Score < compScore
          swap = child2N
      # If the element needs to be moved,
      # swap it, and continue.
      if swap isnt null
        @content[n] = @content[swap]
        @content[swap] = element
        n = swap
      # Otherwise, we are done.
      else break
    return

(exports ? this).BinaryHeap = BinaryHeap
```
And some test cases…
```
runOnDemand ->
  sortByValue = (obj) -> _.sortBy obj, (n) -> n
  buildHeap = (c, a) ->
    heap = new BinaryHeap
    heap.push number for number in a
    c.note heap

  declare 'heap is created empty',
    [], (c) ->
      c.assert (new BinaryHeap).size() is 0

  declare 'heap pop is undefined when empty',
    [], (c) ->
      c.assert _.isUndefined (new BinaryHeap).pop()

  declare 'heap contains number of inserted elements',
    [arbArray(arbInt)], (c, a) ->
      c.assert buildHeap(c, a).size() is a.length

  declare 'heap contains inserted elements',
    [arbArray(arbInt)], (c, a) ->
      heap = buildHeap c, a
      c.assert _.isEqual sortByValue(a), \
        sortByValue(heap.content)

  declare 'heap pops elements in sorted order',
    [arbArray(arbInt)], (c, a) ->
      heap = buildHeap c, a
      for n in sortByValue a then c.assert n is heap.pop()
      c.assert heap.size() is 0

  declare 'heap does not remove non-existent elements',
    [arbArray(arbInt), arbInt],
    expectException (c, a, b) ->
      if b in a then c.guard false
      heap = buildHeap c, a
      heap.remove b

  declare 'heap removes existing elements',
    [arbArray(arbInt), arbInt], (c, a, b) ->
      if not (b in a) then c.guard false
      aSort = sortByValue _.without a, b
      count = a.length - aSort.length
      heap = buildHeap c, a
      heap.remove b for i in [0...count]
      for n in aSort then c.assert n is heap.pop()
      c.assert heap.size() is 0

  test()
```
### Performance
To give an idea of the relative performance of CoffeeScript for problem solving and number crunching we can compare it with CPython, and with C++ as a reference point. First a small test of operations on a million floating point numbers.
```
# CoffeeScript
runOnDemand ->
  start = new Date()
  N = 1000000
  a = Array(N)
  for i in [0...N]
    a[i] = Math.random()
  s = 0
  for v in a
    s += v
  t = 0
  for v in a
    t += v*v
  t = Math.sqrt t
  duration = new Date() - start
  show "N: #{N} in #{duration*0.001} s"
  show "Result: #{s} and #{t}"
```
```
# CPython
import time
import random
import math

start = time.clock()
N = 1000000
a = [random.random() for i in range(N)]
s = 0
for v in a:
    s += v
t = 0
for v in a:
    t += v*v
t = math.sqrt(t)
duration = time.clock() - start
print 'N:', N, 'in', duration, 's'
print 'Result:', s, 'and', t
```
```
// C++ pointer-free version using standard valarray template.
// LLVM: clang++ -O3 -std=c++0x A3-Microrun.cpp -o mb; ./mb
// Visual 2010 C++: cl /Ox /Ob2 /Oi /Ot /Oy- /EHsc A3-Microrun.cpp
#include <cstdlib>
#include <cmath>
#include <ctime>
#include <valarray>
#include <iostream>
using namespace std;

inline double rand01() {
  return static_cast<double>(rand()) /
         static_cast<double>(RAND_MAX);
}

void test() {
  clock_t start = clock();
  static const size_t N = 1000000;
  valarray<double> a(N);
  for (size_t i = 0; i < N; ++i)
    a[i] = rand01();
  double s = a.sum();
  double t = sqrt((a*a).sum());
  double duration =
    (clock() - start) / static_cast<double>(CLOCKS_PER_SEC);
  cout << "N: " << N << " in " << duration << "s" << endl;
  cout << "Result: " << s << " and " << t << endl;
}

int main() {
  srand(static_cast<unsigned int>(time(NULL)));
  test();
  return 0;
}
```
The source code and the results are quite similar. On my machine the time for CoffeeScript is about 0.41s and around 1.03s for CPython. You can try it yourself with `coffee A3-Microbench.coffee` and, if you have CPython installed, `python A3-Microtest.py`. C++ is 10× to 30× faster.
##### ○•○
Below you can find implementations37 for the classic 8 queens problem. The objective is to place eight queens on a chessboard without any of the queens occupying the same row, column or diagonal. The two implementations use the same algorithm and produce the same results. Timing for CPython 0.27s and for CoffeeScript 0.06s.
```
# CoffeeScript encoding: utf-8

# Create variations to try
permute = (L) ->
  n = L.length
  return ([elem] for elem in L) if n is 1
  [a, L] = [ [L[0]], L.slice 1 ]
  result = []
  for p in permute L
    for i in [0...n]
      result.push p[...i].concat a, p[i...]
  result

# Check a variation
test = (p, n) ->
  for i in [0...n - 1]
    for j in [i + 1...n]
      d = p[i] - p[j]
      return true if j - i is d or i - j is d
  false

# N queens solver
nQueen = (n) ->
  result = []
  for p in permute [0...n]
    result.push p unless test p, n
  result

# Repeat a string a number of times
rep = (s, n) -> (s for [0...n]).join ''

# Display a board with a solution
printBoard = (solution) ->
  board = "\n"
  end = solution.length
  for pos, row in solution
    board += "#{end - row} #{rep ' ☐ ', pos} " +
             "♕ #{rep ' ☐ ', end - pos - 1}\n"
  # Using radix 18 hack!
  board += ' ' + (n.toString 18 \
    for n in [10...18]).join(' ').toUpperCase()
  board + "\n"

# Find all solutions
solve = (n) ->
  for solution, count in nQueen n
    show "Solution #{count+1}:"
    show printBoard solution
  count

runOnDemand ->
  start = new Date()
  solve 8 # Normal chessboard size
  show "Timing: #{(new Date() - start)*0.001}s"
```
```
# CPython encoding: utf-8

def permute(L):
    "Create variations to try"
    n = len(L)
    if n == 1:
        for elem in L:
            yield [elem]
    else:
        a = [L.pop(0)]
        for p in permute(L):
            for i in range(n):
                yield p[:i] + a + p[i:]

def test(p, n):
    "Check a variation"
    for i in range(n - 1):
        for j in range(i + 1, n):
            d = p[i] - p[j]
            if j - i == d or i - j == d:
                return True
    return False

def n_queen(n):
    "N queens solver"
    for p in permute(range(n)):
        if not test(p, n):
            yield p

# Start columns from A
base_char = ord('A')

def print_board(solution):
    "Display a board with a solution"
    board = []
    end = len(solution)
    for row, pos in enumerate(solution):
        board += ["\n%s %s ♕ %s" % ((end - row),
                  (' ☐ ' * pos),
                  (' ☐ ' * (end - pos - 1)))]
    # Using character set hack!
    board += '\n ' + \
        ' '.join([chr(base_char+i) for i in range(0, end)])
    return ''.join(board) + '\n'

def solve(n):
    "Find all solutions"
    for count, solution in enumerate(n_queen(n)):
        print "Solution %d:" % count
        print print_board(solution)
    return count

import time
t = time.clock()
solve(8) # Normal chessboard size
print "Timing: ", time.clock()-t, "s"
```
Solution 1 ➞ Solution 92 (the solver prints all 92 boards).
### Command Line Utility
The utility used to remove solutions from source files is shown here, partly as an example of a command line program, partly to make this book and its accompanying source code self-contained.
The program uses the asynchronous file system functions in the same manner as server programs. The synchronous functions are more conventional, but less illustrative.
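For contrast, a minimal sketch of the two styles (the file name is hypothetical):

``````
fs = require "fs"
show = console.log

# Synchronous: simple control flow, but blocks the process while reading
text = fs.readFileSync "example.txt", "utf8"   # hypothetical file
show text.length

# Asynchronous: the style used by the utility below; the result
# arrives later in a callback and other work can continue meanwhile
fs.readFile "example.txt", "utf8", (err, text) ->
  throw err if err
  show text.length
``````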
``````
fs = require "fs"
show = console.log

String::contains = (pattern) ->
  ///#{pattern}///.test this

# Helper (its definition line was lost in extraction; reconstructed):
# the indentation preceding the first word of a string
indentation = (str) ->
  (str.match /(\s*)\w/)[1] ? ""

errorWrapper = (action) ->
  (err, args...) ->
    if err then throw err
    action args...

ifFileExists = (filename, action) ->
  fs.stat filename, errorWrapper (stat) ->
    if stat.isFile() then action()

getFileAsLines = (filename, action) ->
  ifFileExists filename, ->
    fs.readFile filename, "utf8",   # readFile call reconstructed
      errorWrapper (content) ->
        action content.split "\n"

saveFile = (filename, content) ->
  fs.writeFile filename, content,
    errorWrapper -> show "Saved #{filename}"

stripSolutions = (lines) ->
  out = ""
  inSolution = false
  concat = (str) -> out += str + "\n"
  for line in lines
    if line.contains "'— Exercise \\d+ —'"
      inSolution = true
      indent = indentation line   # reconstructed: keep the exercise's indentation
      concat line
      concat "#{indent}process.exit()" +
        " # Replace this line with your solution"
    else if inSolution
      if line.contains "'— End of Exercise —'"
        concat line
        inSolution = false
      # else ignore line in solution
    else
      concat line
  # Remove trailing newline
  out[...out.length-1]

stripFile = (fromName, toName) ->
  if fromName?
    getFileAsLines fromName, (lines) ->
      saveFile toName, stripSolutions lines
  else
    show "Expected a file name " +
      "to strip for solutions"

copyFile = (fromName, toName) ->
  if fromName?
    ifFileExists fromName, ->
      fs.readFile fromName, "utf8",   # readFile call reconstructed
        errorWrapper (content) ->
          saveFile toName, content
  else
    show "Expected a file name to copy"

toDir = "../src-no-solutions"
fs.mkdir toDir, 0777, (err) ->
  if err
    throw err unless err.code is 'EEXIST'
    show "Reusing"
  else
    show "Created"
  fromDir = process.argv[2]
  if fromDir?
    # readdir call reconstructed: list the files in the source directory
    fs.readdir fromDir, errorWrapper (files) ->
      for filename in files
        if filename.contains "\\w\\w-\\w+.coffee"
          stripFile "#{fromDir}/#{filename}", "#{toDir}/#{filename}"
        else
          copyFile "#{fromDir}/#{filename}", "#{toDir}/#{filename}"
  else
    show "Expected a directory with " +
      "solutions to strip"
``````
## Part V. Reference and Index
### Language Reference
Also available as a two-page Quick Reference in PDF format.
Refer to coffeescript.org for further reference material and examples.
General
• Whitespace is significant
• Ending a line will terminate expressions - no need to use semicolons
• Semicolons can be used to fit multiple expressions onto a single line
• Use indentation instead of curly braces `{ }` to surround blocks of code in functions, `if` statements, `switch`, and `try/catch`
• Comments start with `#` and run to the end of the line
Embedded JavaScript
• Use backticks to embed raw JavaScript code within CoffeeScript (example below)
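A one-line sketch of embedding (the function body is illustrative):

``````
hi = `function() { return "Hello from embedded JavaScript" }`
console.log hi()
``````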
Functions
• Functions are defined by an optional list of parameters in parentheses, an arrow, and an optional function body. The empty function looks like: `->`
• Mostly no need to use parentheses to invoke a function if it is passed arguments. The implicit call wraps forward to the end of the line or block expression.
• Functions may have default values for arguments. Override the default value by passing a non-null argument (see the sketch below)
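A short sketch of these rules (names and values are illustrative):

``````
show = console.log
greet = (name = "world") -> "Hello #{name}"
show greet()        # parentheses are needed for a zero-argument call
show greet "moon"   # implicit parentheses wrap to the end of the line
show greet null     # null does not override the default: "Hello world"
``````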
Objects and arrays
• Objects and arrays are similar to JavaScript
• When each property is listed on its own line, the commas are optional
• Objects may be created using indentation instead of explicit braces, similar to YAML
• Reserved words, like `class`, can be used as properties of an object without quoting them as strings (example below)
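A small sketch of indentation-style objects (values are illustrative):

``````
show = console.log
kids =
  brother:            # indentation instead of braces
    name: "Max"
    age:  11
  sister:
    name: "Ida"
    age:  9
show kids.sister.name
style = class: "monospace"   # reserved word as a property
show style.class
``````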
Lexical Scoping and Variable Safety
• Variables are declared implicitly when used (no `var` keyword).
• The compiler ensures that variables are declared within lexical scope. An outer variable is not redeclared within an inner function when it is in scope
• An inner variable cannot shadow an outer variable, only refer to it. So avoid reusing the name of an external variable in a deeply nested function
• CoffeeScript output is wrapped in an anonymous function, making it difficult to accidentally pollute the global namespace
• To create top-level variables for other scripts, attach them as properties on `window`, or to `exports` in CommonJS. Use: `exports ? this` (see the sketch below)
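A minimal sketch of exporting a top-level name that works in both the browser and CommonJS:

``````
root = exports ? this
root.answer = 42   # now visible to other scripts as `answer`
``````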
If, Else, Unless, and Conditional Assignment
• `if/else` can be written without parentheses and curly braces
• Multi-line conditionals are delimited by indentation
• `if` and `unless` can be used in postfix form i.e. at the end of the statement
• `if` statements can be used as expressions, removing the need for `?:` (examples below)
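For example (the variable is illustrative):

``````
show = console.log
late = true
mood = if late then "tired" else "awake"   # if as an expression
show mood
show "Go to bed!" if late                  # postfix form
show "Good morning!" unless late
``````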
Splats
• Splats `...` can be used instead of the `arguments` object for variable numbers of arguments and are available for both function definition and invocation (see the sketch below)
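A short sketch of a splat in a definition and in an invocation (names are illustrative):

``````
show = console.log
race = (winner, runners...) ->
  "#{winner} beat #{runners.join ', '}"
show race "Ann", "Bob", "Cid"   # splat gathers the remaining arguments
others = ["Dee", "Eli"]
show race "Ann", others...      # splat also spreads an array on invocation
``````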
Loops and Comprehensions
• Comprehensions `for ... in` work over arrays, objects, and ranges
• Comprehensions replace for loops, with optional guard clauses and the value of the current array index: `for value, index in array`
• Array comprehensions are expressions, and can be returned and assigned
• Comprehensions may replace `each/forEach`, `map` or `select/filter`
• Use a range when the start and end of a loop is known (integer steps)
• Use `by` to step in fixed-size increments
• When assigning the value of a comprehension to a variable, CoffeeScript collects the result of each iteration into an array
• Return `null`, `undefined` or `true` if a loop is only for side-effects
• To iterate over the key and value properties in an object, use `of`
• Use: `for own key, value of object` to iterate over the keys that are directly defined on an object
• The only low-level loop is the `while` loop. It can be used as an expression, returning an array containing the result of each iteration through the loop
• `until` is equivalent to `while not`
• `loop` is equivalent to `while true`
• The `do` keyword inserts a closure wrapper, forwards any arguments, and invokes the passed function (a sketch of the loop forms follows below)
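A quick sketch of the comprehension forms above (values are illustrative):

``````
show = console.log
squares = (x * x for x in [1..5])          # collected into an array
evens   = (x for x in [0..10] by 2)        # fixed-size steps
odds    = (x for x in [1..9] when x % 2)   # guard clause
show squares, evens, odds
ages = {ann: 31, bob: 29}
show "#{name} is #{age}" for own name, age of ages
``````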
Array Slicing and Splicing with Ranges
• Ranges can be used to extract slices of arrays
• With two dots `[3..6]`, the range is inclusive (3, 4, 5, 6)
• With three dots `[3...6]`, the range excludes the end (3, 4, 5)
• The same syntax can be used with assignment to replace a segment of an array with new values, splicing it
• Strings are immutable and cannot be spliced (see the sketch below)
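For example:

``````
show = console.log
nums = [0..9]
show nums[3..6]     # inclusive: [3, 4, 5, 6]
show nums[3...6]    # excludes the end: [3, 4, 5]
nums[0..2] = [-1]   # splicing: the first three elements become one
show nums
``````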
Everything is an Expression
• Functions return their final value
• The return value is fetched from each branch of execution
• Return early from a function body by using an explicit `return`
• Variable declarations are at the top of the scope, so assignment can be used within expressions, even for variables that have not been seen before
• Statements, when used as part of an expression, are converted into expressions with a closure wrapper. This allows assignment of the result of a comprehension to a variable
• The following are not expressions: `break`, `continue`, and `return`
Operators and Aliases
• CoffeeScript compiles `==` into `===`, and `!=` into `!==`. There is no equivalent to the JavaScript `==` operator
• The alias `is` is equivalent to `===`, and `isnt` corresponds to `!==`
• `not` is an alias for `!`
• Logical operator aliases: `and` is `&&`, `or` is `||`
• In `while`, `if/else` and `switch/when` statements the `then` keyword can be used to keep the body on the same line
• Alias for boolean `true` is `on` and `yes` (as in YAML)
• Alias for boolean `false` is `off` and `no`
• For single-line statements, `unless` can be used as the inverse of `if`
• Use `@property` instead of `this.property`
• Use `in` to test for array presence
• Use `of` to test for object-key presence (examples below)
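A few of the aliases in action:

``````
show = console.log
show (3 in [1, 2, 3])              # array presence
show ('a' of {a: 1})               # object-key presence
show (yes is on) and (no is off)   # boolean aliases
show (1 isnt '1')                  # compiles to 1 !== '1', so true
``````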
Existential Operator
• Use the existential operator `?` to check if a variable exists
• `?` returns `true` unless a variable is `null` or `undefined`
• Use `?=` for safer conditional assignment than `||=` when handling numbers or strings
• The accessor variant of the existential operator `?.` can be used to soak up null references in a chain of properties
• Use `?.` instead of the dot accessor `.` in cases where the base value may be `null` or `undefined`. If all of the properties exist then the expected result is returned; if the chain is broken, `undefined` is returned instead (see the sketch below)
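A minimal sketch (the object is illustrative):

``````
show = console.log
meal = null
order = meal ? "pizza"       # default only when null or undefined
count = null
count ?= 0                   # conditional assignment
person = name: "Ada"
show person?.address?.city   # broken chain soaks to undefined
show order, count
``````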
Classes, Inheritance, and Super
• Object orientation as in most other object oriented languages
• The `class` structure lets you name the class, set the superclass with `extends`, assign prototypal properties, and define a `constructor`, in a single assignable expression
• Constructor functions are named as the `class` name, to support reflection
• Lower level operators: The `extends` operator helps with proper prototype setup. `::` gives access to an object’s prototype. `super()` calls the immediate ancestor’s method of the same name
• A class definition is a block of executable code, which may be used for meta programming.
• In the context of a class definition, `this` is the class object itself (the `constructor` function), so static properties can be assigned by using `@property: value`, and functions defined in parent classes can be called with: `@inheritedMethodName()` (see the sketch below)
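A compact sketch of a class hierarchy (names are illustrative):

``````
show = console.log
class Animal
  constructor: (@name) ->
  speak: -> "#{@name} makes a sound"
  @kingdom: "Animalia"           # static property via @

class Dog extends Animal
  speak: -> super() + ", woof!"  # call the ancestor's method

show (new Dog "Rex").speak()
show Dog.kingdom                 # statics are inherited
``````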
Destructuring Assignment
• To make extracting values from complex arrays and objects convenient, CoffeeScript implements destructuring assignment
• When assigning an array or object literal to a value, CoffeeScript breaks up and matches both sides against each other, assigning the values on the right to the variables on the left
• The simplest case is parallel assignment `[a,b] = [b,a]`
• It can be used with functions that return multiple values
• It can be used with any depth of array and object nesting to get deeply nested properties and can be combined with splats (examples below)
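Some of these patterns in a short sketch (values are illustrative):

``````
show = console.log
[a, b] = [1, 2]
[a, b] = [b, a]                  # parallel assignment: swap
futureDate = -> [2025, "May"]
[year, month] = futureDate()     # multiple return values
{name, address: {city}} = {name: "Ada", address: {city: "London"}}
[first, rest...] = [1, 2, 3, 4]  # combined with a splat
show a, b, year, month, name, city, first, rest
``````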
Function binding
• The fat arrow `=>` can be used to define a function and bind it to the current value of `this`
• This is helpful when using callback-based libraries, for creating iterator functions to pass to `each` or event-handler functions to use with `bind`
• Functions created with `=>` are able to access properties of the `this` where they are defined (see the sketch below)
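A minimal sketch of why the binding matters (the class is illustrative):

``````
show = console.log
class Counter
  constructor: -> @count = 0
  tick: => @count += 1   # => binds `this` to the instance

c = new Counter()
tick = c.tick            # detached, as when passed as a callback
tick(); tick()
show c.count             # 2, thanks to the bound `this`
``````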
Switch/When/Else
• The `switch` statement does not need a `break` after every case
• A `switch` is a returnable, assignable expression
• The format is: `switch` condition, `when` clauses, `else` the default case
• Multiple values, comma separated, can be given for each `when` clause. If any of the values match, the clause runs (example below)
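For example:

``````
show = console.log
day = "Sat"
verdict = switch day   # switch as an assignable expression
  when "Sat", "Sun" then "weekend"
  when "Fri" then "almost there"
  else "workday"
show verdict
``````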
Try/Catch/Finally
• `try/catch` statements are as in JavaScript (although they are expressions)
String Interpolation, Heredocs, and Block Comments
• Single-quoted strings are literal. Use backslash for escape characters
• Double-quoted strings allow for interpolated values, using `#{ ... }`
• Multiline strings are allowed
• A heredoc `'''` can be used for formatted or indentation-sensitive text (or to avoid escaping quotes and apostrophes)
• The indentation level that begins a heredoc is maintained throughout, so the text can be aligned with the body of the code
• Double-quoted heredocs `"""` allow for interpolation
• Block comments `###` are similar to heredocs, and are preserved in the generated code
Chained Comparisons
• Use a chained comparison `minimum < value < maximum` to test if a value falls within a range (example below)
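For example:

``````
value = 7
console.log "in range" if 0 < value < 10   # both bounds in a single test
``````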
Extended Regular Expressions
• Extended regular expressions “heregexes” are delimited by `///` and are similar to heredocs and block comments
• Extended regular expressions ignore internal whitespace and can contain comments (see the sketch below)
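A small sketch of a heregex (the pattern is illustrative):

``````
zip = ///
  ^ \d{4}       # four digits
  \s?           # optional space
  [A-Z]{2} $    # two capital letters
///
console.log zip.test "1012 AB"   # true
``````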
### Reserved Words
Keywords
``````break by catch
class continue debugger
delete do else
extends false finally
for if in
instanceof loop new
null of return
super switch then
this throw true
try typeof undefined
unless until when
while
``````
Aliases
``````and : && or : || not : !
is : == isnt : !=
yes : true no : false
on : true off : false
``````
### Underscore
General call convention: `functional obj, iterator, context`.
See the interactive Underscore reference.
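For orientation, a minimal sketch of that convention with Underscore's `map` and `filter` (values are illustrative; in the interactive environment `_` is preloaded):

``````
_ = require "underscore"
doubled = _.map [1, 2, 3], (n) -> n * 2
evens   = _.filter [1..6], (n) -> n % 2 is 0
console.log doubled, evens
``````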
### QuickCheck
Exported definitions from prelude version of `qc.js`
arbChoose
Generator that chooses uniformly among the given generators.
`parameter: generators…`
arbConst
Generator that always returns one of the given constant values.
`parameter: values…`
arbBool
Boolean value generator with 50:50 chance of `true` or `false`.
arbNull
Null generator that always generates `null`.
arbWholeNum
Integer value generator for values ≥ 0. Supports shrinking.
arbInt
Integer value generator. Supports shrinking.
arbFloatUnit
Generator for a floating point value in between 0.0 and 1.0. Supports shrinking.
arbRange
Integer range value generator.
`parameter minimum value`
`parameter maximum value`
arbNullOr
Chooses `null` with 10% probability and the given generator with 90%. Supports shrinking.
`parameter another generator`
arrShrinkOne
Array shrinking strategy that builds new Arrays by removing one element from a given array.
arbArray
Array generator. Generates an array of arbitrary length with the given generator.
`parameter generator that creates the resulting array values.`
`parameter an optional shrinking strategy. Default is 'arrShrinkOne'.`
arbDate
Date value generator. Always generates a new Date object by calling `new Date()`.
arbMod
Basis generator for arbChar and arbString.
arbChar
Character value generator for any character with character code in range 32–255.
arbString
String value generator. All characters in the generated String are in range 32–255. Supports shrinking.
arbUndef
Generator that always generates `undefined`.
arbUndefOr
Chooses undefined with 10% probability and the given generator with 90%. Supports shrinking.
`parameter another generator`
expectException
Property test function modifier. Using this modifier, it is assumed that the testing function will throw an exception and if not the property will fail.
failOnException
Property test function modifier. Instead of finishing testing when an unexpected exception is thrown, the offending property is marked as failure and `qc` will continue.
Case
Test case class generated every time a property is tested. An instance of Case is always passed as first argument to a property’s testing function so it can control the test case’s properties.
Case::assert
Tests and notifies `qc` if a property fails or not.
`parameter pass false, if the property failed, true otherwise`
Case::guard
Used to test if the input is good for testing the property.
`parameter pass false to mark the property as invalid for the given input.`
Case::classify
Adds a tag to a test run.
`parameter if true then the tag is added to the case, else not.`
`parameter tag value to add`
Case::collect
Collect builds a histogram of all collected values for all runs of the property.
`parameter value to collect`
Case::noteArg
Adds the given value to the test case for reporting in case of failure.
`parameter value to add`
Case::note
Same as Case::noteArg but returning its argument so it can be used inline. Defined in prelude.
Case::noteVerbose
Same as Case::note but also logs the noted args. Defined in prelude.
declare
Builds and registers a new property.
`parameter the property's name`
`parameter array of generators with a length equal to the arity of the body function. The entry at position i will drive the i-th argument of the body function.`
`parameter body function that tests the property`
`return a new registered property object of type Prop.`
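To tie the pieces together, a minimal sketch of a declared property, assuming the prelude environment described here (the generator choice and the property itself are illustrative):

``````
# Property: reversing an array twice yields the original
declare 'reverse twice', [arbArray arbInt],
  (c, a) ->
    c.noteArg a
    c.assert "#{a.slice().reverse().reverse()}" is "#{a}"
test()   # run all registered properties (prelude helper)
``````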
testPure
Helper to declare a named test property for a pure function. Defined in prelude.
`parameter function to test`
`parameter array of generators matching argument types`
`parameter a descriptive name`
`parameter property function which is passed the test-case, the arguments and the result of calling the function being tested. Must return true if the property test succeeded, false otherwise.`
Prop
Creates a new property with a given array of argument generators and a testing function. For each generator a value is generated, so for testing a 2-ary function the array must contain 2 generators.
Prop::run
Tests the property.
`parameter configuration of type Config to test property with`
`return depending on test result a Pass, Fail or Invalid object`
allProps
Internal array of all declared/registered properties
resetProps
Deletes all declared properties.
runAllProps
Tests all registered properties.
`parameter configuration of type Config to test the properties with`
`parameter listener of a subclass of ConsoleListener`
test
Test all known properties with a NodeListener and a configuration of 100 pass and 1000 invalid tests. Defined in prelude.
Invalid
Report class for invalid tested properties.
Pass
Report class for successful tested properties.
Fail
Report class for failed tested properties.
Stats
Statistics class for counting number of pass/invalid runs, building histograms and other statistics for reporting on a property and its test results.
Config
Testing Configuration.
`parameter maximum passes per property`
`parameter maximum invalid tests per property`
`parameter maximum number of shrinking steps per property`
ConsoleListener
Abstract class for building ‘console’ based listeners.
NodeListener
Listener with node compatible output in colors. Defined in prelude.
FBCListener
Listener for sending property results to FireBug’s console
RhinoListener
Listener for Rhino, sending property results to stdout.
Distribution
Probability distributions
genvalue
Draws a new value from a generator. A generator is either a function accepting a seed argument or an object with a method ‘arb’ accepting a seed argument.
genshrinked
Uses the generator specific shrinking method to shrink a value that the generator created earlier. If the generator is a function or has no method named ‘shrink’ or is an objects with a ‘shrink’ method set to `null`, then no shrinking will be performed. If a shrinking method is defined, then it is called with the original seed and the value the generator created. The shrinking method must return an array of ‘shrinked’ values or null, undefined, or an empty array if no ‘shrinked’ values could be created.
justSize
Passes the size ‘seed’ argument used to drive generators directly to the property test function.
Utilities
A group of utilities that can be used to build generators: `frequency`, `choose`, `randWhole`, `randInt`, `randRange`, `randFloatUnit`.
Prelude definitions
``````
show globalize
confirm prompt getServerURL
viewURL viewServer stopServer
fileExists readTextFile readBinaryFile
_ kup qc
``````
Refer to `src/prelude/prelude.coffee` for the annotated source.
CoffeeScript environment
``````
Buffer clearInterval clearTimeout
console global GLOBAL
process root setInterval
setTimeout
``````
View the interactive environment with: →∣ / ‘Tab’.
JavaScript future words
``````abstract boolean byte case
char const default double
enum export final float
function goto implements import
int interface let long
native package private protected
public short static synchronized
throws transient var void
volatile with yield
``````
## Footnotes
1. The runtime environment for this interactive edition is a work in progress. It is itself an interactive literate HTML5 application that contains its own code and documentation: Grimoire.
The interactive edition was created with acme and ssam from plan9port, Pandoc, CodeMirror and CoffeeScript.
2. The interactive environment requires a web browser with support for HTML5 technologies. Most current desktop browsers (Safari, Firefox, Opera) and Mobile Safari on iOS 5 are fully compatible. Offline reading is enabled on capable browsers.
Chrome 16 and Internet Explorer 9 do not display MathML, but there is very little math in the book anyway. Internet Explorer 9 does not display some of the graphical output (showDocument uses a data URL), so choose another browser if possible; older versions of Internet Explorer are less functional.
A test of the Android 2.3 browser on a mobile device did not result in a usable interactive environment. An upgrade to a newer version was not available.
Used technologies and API’s: EcmaScript 5, canvas with text, data URLs, contenteditable, mathml, offline manifest, localstorage, CSS hyphenation, getElementsByClassName and UTF–8 unicode. This list relates to the interactive environment — CoffeeScript is compatible with a much wider range of browsers and servers including Internet Explorer 6. More information on browser capabilities.
3. Some examples are read-only and can not be executed. A few are locked to avoid spoiling exercises, some because they are intended to run on a server, and a few only show a concept without the surrounding program.
4. Manual evaluation is only intended for netbooks and mobile devices without sufficient processing power. It is more inconvenient, because you have to remember to evaluate every time you have typed something. In particular, code attached to `Run` buttons does not change unless the newly entered program text is evaluated.
5. ‘Code’ is the substance that programs are made of. Every piece of a program, whether it is a single line or the whole thing, can be referred to as ‘code’.
6. Bits are any kinds of two-valued things, usually described as `0`s and `1`s. Inside the computer, they take forms like a high or low electrical charge, a strong or weak signal, a shiny or dull spot on the surface of a CD.
7. If you were expecting something like `10010000` here — good call, but read on. CoffeeScript’s numbers are not stored as integers.
8. Actually, 53, because of a trick that can be used to get one bit for free. Look up the ‘IEEE 754’ format if you are curious about the details.
9. An example of this: if `p = 1/3` then `6*p` is equal to `2`, but `p+p+p+p+p+p` is not, because the minute rounding errors grow with every addition. This happens in the `for` loop shown in the Foreword example, Seed of Life↑. It is a general issue with floating point approximations and not a bug in CoffeeScript. A way of dealing with it is to test whether a number lies inside a narrow interval instead of testing for an exact match.
10. Note that there is no space between the unary minus and the value.
11. The bit bucket is supposedly the place where old bits are kept. On some systems, the programmer has to manually empty it now and then. Fortunately, CoffeeScript comes with a fully-automatic bit-recycling system.
12. In the interactive environment the rules are slightly different. Variables can only be used outside the code block where they are defined if they are named in the `@globalNames` list (defined in Getting Started) or if they are attached explicitly to the global environment i.e. starts with an `@`-sign or is a `window` property. This prevents unintended use of variables from earlier chapters and gives a bit of protection from accidental overwrites of system variables.
13. Technically, a pure function can not use the value of any external variables. These values might change, and this could make the function return a different value for the same arguments. In practice, the programmer may consider some variables ‘constant’ — they are not expected to change — and consider functions that use only constant variables pure. Variables that contain a function value are often good examples of constant variables.
14. Koen Claessen and John Hughes from Chalmers University of Technology created QuickCheck for Haskell and its ideas have since been reimplemented in many other programming languages. The `qc` library is an implementation for JavaScript by Darrin Thompson. A CoffeeScript compatible version is included in the interactive environment.
15. The ECMAScript standard allows these deviations for JavaScript and thus for CoffeeScript. If you should need unlimited precision integers then either use a third party library or a more mathematically inclined programming language such as Pure.
16. Maxima is the source of this result. Maxima is a computer algebra system that does symbolic and unlimited precision calculations.
17. Most JavaScript test tools are compatible with CoffeeScript or can easily be adapted.
18. Such a function is already present, it is `show`.
19. In the underscore library you can find a function `isEqual` which compares two objects based on all layers of their contents.
20. Some test cases for `startsWith` are present in the source code file for this chapter.
21. In Mathematica a function can be set as `Listable` — eliminating the need for a loop:
``````f[x_] := If[x > 0, Sqrt[x], Sqrt[-x]];
SetAttributes[f, Listable];
f[{3, 0, -2}]
⇒ {√3, 0, √2}
``````
22. Unfortunately, on at least older versions of the Internet Explorer browser a lot of built-in functions, such as `alert`, are not really functions… or something. They report their type as `'object'` when given to the `typeof` operator, and they do not have an `apply` method. Your own functions do not suffer from this, they are always real functions.
23. In the standalone environment, the prelude has a function `viewURL` that can be used to look at HTML documents. The example document above is stored in the file `'06-Quote.html'`, so you can view it by executing the following code:
``viewURL '06-Quote.html'``
You can also run a tiny server from your program or the interactive CoffeeScript environment (REPL). It can serve a webpage either from a string variable or from a file. If you have created a web page in a string variable, say `stroustrupQuote` then you can start the server with:
``viewServer stroustrupQuote``
Or you could give it a filename as its argument. When you are done with the server then you can either type `stopServer()` or `CTRL-C`.
24. Like this…
25. An extended version can allow arbitrary arguments to be fixed.
26. Computers are deterministic machines: They always react in the same way to the input they receive, so they can not produce truly random values. Therefore, we have to make do with series of numbers that look random, but are in fact the result of some complicated deterministic computation.
27. No really, it is.
28. A few integrated development environments automatically generate diagrams as you are writing your code, for example Visual Studio Ultimate, but it is neither cheap nor compatible with CoffeeScript.
29. In this case, the backslashes were not really necessary, because the characters occur between `[` and `]`, but it is easier to just escape them anyway, so you will not have to think about it.
30. The main examples in this chapter show web and WebSocket servers, to run them you have to install CoffeeScript with the book source code, see Quick CoffeeScript Install. It is optional and you can follow along without running those examples. You can recognize the examples that you can run in the browser; they have a line in them with output below.
31. The system underlying standard CoffeeScript has an EventEmitter to help you do this. Search for event in the documentation for the runtime system you use to learn more.
32. Projects such as browserify aim to bring `require` to the browser.
33. How to use a separate test module is shown in Binary Heaps↓.
34. Depending on your operating system you can also use numerical IP addresses. `127.0.0.1` conventionally addresses your local machine. In a `hosts` file you can map names to IP addresses; its location depends on your operating system.
35. If you want to understand how to write your own HTTP server then Manuel Kiessling has a tutorial that you can easily translate into your own CoffeeScript web server.
36. The number of comparisons and swaps needed in the worst case is roughly the base-2 logarithm of the number of elements in the heap; for a heap of about a million elements that is log2(1000000) ≈ 20.
37. Background information on 8 queens puzzle, implementation and performance. | 2017-11-18 13:47:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40007203817367554, "perplexity": 789.9769480616475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804965.9/warc/CC-MAIN-20171118132741-20171118152741-00146.warc.gz"} |
https://www.nature.com/articles/s41598-022-16470-2 | Introduction
Severe acquired brain injury can result in a prolonged disorder of consciousness (PDOC), such as the minimally conscious state (MCS), one of the most dramatic chronic conditions in medicine. MCS is characterized by partial preservation of consciousness, expressed in patients by inconsistent non-reflexive reactions, such as visual pursuit, object manipulation, recognizable yes–no responses, and/or command following1,2,3. MCS is subcategorized, based on the complexity of the observed behaviors, into MCS-, with low-level behavioral responses (visual pursuit, localization of noxious stimulation), and MCS+, with high-level language-dependent responses (command following, intelligible verbalizations, intentional communication)4,5. Although patients show discernable purposeful behavior and exhibit signs of minimal self-awareness and awareness of their environment, they are incapable of adequate and consistent responses to the outside world. Medical care for these patients is mainly supportive, as treatment options remain scarce6.
Since the early 1960’s there have been several attempts to use deep brain stimulation (DBS) to improve behavioral performance in patients with PDOC7,8. Most previous studies in humans describe heterogeneous patient populations with different forms of severe brain injury and clinical results of the procedure vary from literally no effects of stimulation to rather spectacular improvements of consciousness8,9,10,11. In the majority of these cases, the nuclei of the central thalamus, including the centrolateral (CL) and centromedian-parafascicular complex (CM-Pf) were stimulated, based on the hypothesis that DBS of these nuclei may result in re-activation of damaged central thalamic outflow tracts and, therefore, may restore dysfunctional neurocircuits involved in arousal regulation7,10,12. The pivotal role of the central thalamus in the regulation of arousal and the possibility of modulating arousal through its fiber pathways using neurostimulation is now demonstrated in an increasing corpus of evidence, mainly derived from animal research13,14,15,16,17. Yet, it remains unknown what the exact mechanisms of DBS in humans are responsible for its effects on consciousness and why these effects are so variable8.
In the current open-ended exploration, we set out to study the clinical and neurophysiological effects of central thalamic DBS in a single patient with prolonged MCS, more than eight years after brain injury. We conducted an experimental trial of 24 months of DBS targeted at the CM-Pf (see methods), with different stimulation settings, varying in pulse-width and frequency (50 Hz versus 130 Hz), and used magnetoencephalography (MEG) to measure the DBS-induced changes in relative band power, frequency-specific functional connectivity, and neural variability, which are measures used to characterize neuronal activity and functional interactions between brain regions18. MEG is a technique that uniquely allows the study of direct stimulation-induced changes in oscillatory activity with an excellent spatial and temporal resolution19. The evidence from the MEG recordings is used to propose hypotheses that explain the effects of DBS in patients with PDOC after severe brain injury.
Results
Clinical results
At baseline, this patient scored 9–14 on the Coma Recovery Scale-Revised (CRS-R), meaning a baseline MCS- condition. After starting monopolar high-frequency stimulation (130 Hz; 60 μsec) with the more ventral contact points as cathode, situated in the central region of the thalamus, a direct improvement was seen in arousal after reaching a minimum current of 1 mA. These arousal effects included direct pupillary dilation, raising of the otherwise flexed head, increased respiratory rate, and signs of active visual pursuit throughout the room. These arousal effects became increasingly evident when the current was raised to a level of 4 mA. At this level, we observed some signs of purposeful behavior, such as visual interaction with her father (smiling and following him through the room), though these signs remained short-lived (minutes) and could not be reproduced by the clinical investigator/assessor. After various trials of DBS with different contact points, it was decided to set the stimulator on at contact points 3L and 11R, encircling the central nuclei of the thalamus, and to use a cycling mode consisting of 30 min on and 90 min off stimulation. The stimulator was turned on in the morning and off at night by her family. During the following months, the family reported signs of increased instances of visual pursuit during the day, a decrease of spasticity in both arms and legs, and a significant reduction of paroxysmal sympathetic hyperactivity she usually experienced, such as sweating episodes. We decided to wait for the recurrence of signs of awareness and possible signs of ‘upregulation’ of behavior as previously described by other authors11. During a period of six months, stimulation parameters were intermittently changed according to observations of behavioral performance by her family. These changes included changes in current, as well as in cycling periods (for study design and stimulation parameters see Supplementary Fig. 1 and Supplementary Table 1). However, after these six months, it was concluded that there were no persistent beneficial changes in behavioral performance with 130 Hz DBS. Therefore, it was decided to switch to monopolar lower frequency stimulation (30 Hz; 450 μsec). After changing the settings, once again, a direct and strong arousal effect was seen, as well as evidence of the return of visual pursuit. Moreover, it was observed that she could perform small tasks on request, such as sticking out her tongue, or moving a finger, though these reactions could not consistently be elicited. A trial episode was started at 2.5 mA on contact points 2 and 3L and 10 and 11R, with a cycling mode consisting of 90 min on and 30 min off. After two months, it was decided to add stimulation during the nights. However, after a couple of days, it was observed that the patient could not sleep at night and kept her eyes open, which caused excessive sleepiness during the day. Thereafter, DBS was discontinued at night and started in the mornings. In the following months, the frequency was increased to 50 Hz and the current was slowly raised to 3 mA with reports of increased arousal, visual pursuit, return of swallowing, and reduction of spasticity. The signs of increased arousal and visual pursuit, such as recognition of family members, were directly seen after starting the stimulation in the morning and only present during moments of stimulation (see Supplementary Video). 
After discontinuation of stimulation, arousal effects slowly, but progressively vanished, usually within 30 min to 1 h. Therefore, the daytime cycling mode was changed to continuous stimulation. The improvement in swallowing and reduction of spasticity were more permanent and also observed by her speech therapist and physical therapist. Nevertheless, at the end of the trial, no signs of a persistent return of behavioral performance were observed using the CRS-R. The eventual CRS-R at 24 months after implantation remained between 9 and 12, which matches her baseline CRS-R (for subscores, see Supplementary Table 2).
Neurophysiological results
The MEG source-space spectral analysis of the pre-DBS situation, the post-implantation situation with stimulation turned off (resting-state before stimulation), and active DBS turned on in both lower (50 Hz) and higher frequency (130 Hz) settings revealed no differences in relative band power. In contrast, there were several differences in functional connectivity throughout the brain between the four above-described situations and between the patient and the healthy controls. There was significantly lower functional connectivity in all frequency bands between this patient in the pre-DBS situation and healthy controls (see Figs. 1 and 2, also for p-values). 50 Hz stimulation was associated with a significant increase in functional connectivity in all frequency bands, both compared to the DBS off state as well as compared to the period of higher frequency DBS. However, the levels of functional connectivity after 50 Hz stimulation were still significantly lower than those observed in healthy controls. 130 Hz stimulation resulted in a relatively limited increase in alpha2 functional connectivity only, and it remained significantly weaker than following lower frequency stimulation. Moreover, small significant differences in functional connectivity were observed between the pre-DBS and post-DBS off situation, such as a small increase in alpha1 functional connectivity, and a decrease in beta-band functional connectivity in the post-DBS off condition. The above-described differences in functional connectivity were seen throughout the brain with some differences in frontal, parietal, and occipital areas that are variable between different frequency bands (see Fig. 2 and Supplementary Material Fig. 2). Specifically, 50 Hz DBS was associated with some increase of functional connectivity in (especially the right) frontoparietal and occipital areas.
We also found significant differences in global neural variability between the different situations and between this patient and healthy controls (see Fig. 3 and Supplementary Material Fig. 3). The patient maintained significantly lower levels of neural variability within the brain in all frequency bands compared to healthy controls. Only 50 Hz stimulation was associated with a small, yet significant, increase in all frequency bands when compared with the other DBS conditions, except when compared with 130 Hz stimulation in the alpha bands, and when compared with the pre-DBS situation in the alpha2 band. 130 Hz stimulation was only associated with a significantly small increase in the alpha2 and beta-band compared to the DBS off state. However, all of these levels were still evidently lower than those observed in healthy controls. The analysis of regional differences between the different conditions showed variable changes in neural variability throughout the brain after stimulation, without a clear consistency in frequency bands to indicate involvement of specific brain areas (Supplementary Material Fig. 3).
Discussion
General discussion
In this study, we showed that DBS is associated with significant changes in functional connectivity and neural variability in MCS. DBS with a lower frequency (50 Hz) and larger volume of activation (VTA) was associated with a stronger increase in functional connectivity and neural variability. This increase in functional connectivity and neural variability after DBS was observed in all frequency bands and throughout the brain, suggesting a widespread reorganization of brain networks, though these neurophysiological markers remained significantly lower after DBS than observed in healthy controls. The increase in functional connectivity and neural variability was paralleled by direct increases in arousal and more subtle permanent improvements in visual pursuit, spasticity, and swallowing, though, no improvements in behavioral performance were observed. The return of arousal and some basic functions without an overall improvement of behavioral performance may indicate that some disrupted neural networks are re-activated by DBS, but that the injured brain still lacks the ability to adapt to changing cognitive demands. It may be indicative of the severity of the brain injury and represent a state in which the brain is unable to work as a coherent unit.
Some observations from this study correlate with other studies in patients with PDOC. We showed that functional connectivity in the resting-state of our patient was significantly lower for all frequency bands compared to resting-state examinations in healthy subjects. It is known that levels of functional connectivity are, more or less, inversely correlated with the severity of impairments in consciousness. For instance, patients with unresponsive wakefulness syndrome (UWS) are known to have significantly lower levels of functional connectivity throughout the brain than those with MCS, and patients recovering from severe brain injury show increasing levels of functional connectivity throughout their period of recovery20,21,22. Previous studies with patients with PDOC have shown decreased levels of functional connectivity in specific brain networks, especially in frontoparietal networks, subcortical networks, and the default mode network, all believed to be involved in arousal regulation23. However, in our analysis, there were only small regional differences in functional connectivity. Similar observations have been reported for neural variability. For instance, previous research has shown a loss of variability after severe brain injury24,25. Neural variability is a measure of the ability of the brain to adapt to rapid changes in cognitive demands26. It has been shown to be a much better correlate for behavior than traditional measures of neuronal oscillatory power27,28. Awareness is thought to depend strongly on neural variability of large-scale cortico-thalamo-cortical networks29,30. The observation that, after DBS, neural variability remains lower than in healthy controls may therefore indicate that, despite improvements in arousal, awareness is still significantly disturbed. The heterogeneity of brain damage possibly causes different levels of network disruption resulting in different baseline levels of functional connectivity and neural variability in each individual patient31. This may mean that some patients may be more susceptible to the effects of thalamic DBS than others. It remains a matter of future research to define if pre-DBS levels of functional connectivity and neural variability may be used as biomarkers for a patient’s responsiveness to any neuromodulatory intervention.
Limitations
Our findings should be interpreted with some caution. Firstly, different stimulation parameters were used in our study and were based on previous clinical experience in other neurological conditions and clinical (side) effects during the titration phase of the study. The use of a different pulse width and frequency results in different effective electrical field sizes and corresponding VTA within the central thalamus (see Supplementary Material Fig. 4). Consequently, using different stimulation parameters, a different mix of (central) thalamic nuclei could have been stimulated, especially in a severely atrophic brain with a shrunken thalamus. Similar situations have been described in animal DBS research, in which a smaller thalamus, more densely packed with nuclei, is possibly more affected by nonspecific spreading of DBS currents near the surroundings of the intended target area17. The reconstruction of the VTA’s of both stimulation frequencies shows a rather large area of stimulation, likely including the CM-Pf, the CL, and parts of other intrathalamic pathways that are involved in arousal regulation, such as the medial dorsal tegmental tract (see Supplementary Material Fig. 4 for a 3D overview of both VTA’s)16. The reconstruction also shows that the 50 Hz DBS regime, with a larger pulse-width, did not only extend more laterally, but also more ventrally than the 130 Hz DBS regime. This could also allow for concomitant antidromic modulation of different brainstem nuclei that project into the intralaminar nuclei and are known to be involved in arousal regulation, including those from, for instance, the pedunculopontine arousal system, which has a frequency effectiveness plateau around 40–60 Hz32. Thus, although we intentionally targeted the DBS electrodes to the CM-Pf of the central thalamus using traditional stereotactic coordinates, the mechanism of action of the observed arousal and autonomic response and concomitant changes in functional connectivity/variability may also directly or indirectly involve areas well beyond the central thalamus itself. The differences that are observed between lower and higher frequency stimulation are somewhat surprising, since previous DBS studies in both animals and humans mainly used stimulation at higher frequencies to increase arousal and even reported signs of behavioral arrest after (very) low-frequency stimulation7,8,33. This may also mean that the effects of stimulation are patient-specific and/or that the effects of neurostimulation differ between animals and humans. Secondly, this study is subject to performance confounding, meaning that if the behavioral performance is substantially different in two experimental conditions, as is the case for the pre- and post-DBS conditions, some measures of brain physiology might also differ between those conditions, thereby representing an epiphenomenon21,34. Neurophysiological activity is well known to change under different behavioral conditions. For instance, the changes in the alpha-band associated with transitions between arousal states observed in the present study are consistent with the functional roles of alpha-band oscillations in the regulation of attention and information processing35. Thirdly, during 50 Hz stimulation, there were significant artifacts in the MEG signal, limiting its direct analysis. Therefore, for the analysis of 50 Hz stimulation, we were obliged to perform the analyses directly after cessation of the stimulation. These artifacts of DBS are well known in MEG research36.
Previous studies showed that the effects of stimulation can be reliably analyzed after cessation, and, because the effects of stimulation on arousal were relatively long-lasting (> 30 min after stimulation), we think that this potential limitation is negligible36. Fourthly, arousal in patients with MCS varies during the day, which may complicate the comparison between conditions. These variations, including the 15 months interval between baseline and experimental MEG recordings, may contribute to the slight differences between the pre-DBS and post-DBS off-state in this patient. Lastly, statistics with n = 1 remain challenging and, since amplitude envelope data is non-Gaussian, we were restricted to non-parametric tests to evaluate differences between conditions. In addition, recordings were too short to perform robust trial-by-trial statistics. Hence, current statistical results should be interpreted with caution.
Ethical considerations
The decision to carry out research and perform an experimental neurosurgical procedure in a mentally incompetent patient was not made lightly. We specifically selected a subject in MCS instead of UWS, because MCS patients are known to have more intact, but ‘dormant’ cerebral networks that may be more susceptible to the effects of activation by neurostimulation10,37. Previously, several ethical reasons have been proposed to continue treatment and the pursuit of higher levels of consciousness in patients with MCS that are known to have covert cognitive capabilities, but remain in affective and cognitive isolation10,38,39. One of the most important purposes of DBS in this study was to improve the patient’s ability to interact with others in a meaningful manner, providing an opportunity for a more detailed assessment of her feelings and preferences and permitting the patient to assume a more active role in her treatment10. The patient’s family were the legal representatives of the patient and were actively involved during the study, accompanying her with hospital visits, helping us in the evaluation of the effects of the DBS, and monitoring for possible signs of discomfort. To minimize burdensome travels for the patient and her family, follow-up took place at her home with both operating surgeons paying her multiple visits during the DBS titration phase. Though the primary goal of this study, improvement of behavioral performance, was not reached, there were improvements in her condition that were eventually considered meaningful by her family, including reduced spasticity, fewer periods of paroxysmal hyperactivity, and improvement of swallowing, which even allowed the patient to eat by mouth. These improvements were considered a benefit for the patient, since they seemed to reduce suffering. In the current patient, DBS was performed eight years after brain injury. One could hypothesize that performing DBS at an earlier stage could possibly have had more beneficial effects, for instance by reducing long-term complications of disorders of consciousness, such as spasticity, rigidity, contractures, and severe tongue/muscle atrophy.
Conclusion
In conclusion, this study shows that DBS can re-activate ‘dormant’ functional brain networks, but that the severely injured brain still lacks the ability to serve cognitive demands. This evidence provides more fundamental insight into the network-level mechanisms underlying DBS of the intralaminar thalamus and inspires future research on neurostimulation in patients with PDOC, especially on DBS in those patients who have more intact brain networks and residual dynamic properties of functional connectivity. A larger sample of patients is necessary to build further on our preliminary findings. International collaboration and clustering of cases in specialized centers will be vital to explore the possible application of DBS to restore function in this group of severely impaired patients.
Methods
Ethical approval
The legal representatives of the patient gave written informed consent to the research protocol, which was approved by the medical ethical committee of the Amsterdam University Medical Centers (protocol number: NL58841.018.16). Moreover, written informed consent was obtained for publishing information/images/videos in an online open-access publication. Ethics review criteria conformed to the Declaration of Helsinki. No part of the study procedures or analysis plans was pre-registered in an institutional registry prior to the research being conducted.
Clinical case description
On examination, the patient was alert and demonstrated spontaneous, though inconsistent, signs of visual fixation. The right pupil was minimally reactive and the left pupil was fixed. The oculomotor exam demonstrated disconjugate eye movements. Reflexes were hyperactive throughout. Some signs of sympathetic hyperactivity were still present, including excessive sweating, drooling, and recurrent episodes of tachypnea. The motor exam was notable for severe spastic contractures of all four extremities with flexion at elbows, wrists, fingers, and knees. There were also equinovarus deformities involving both feet. The axial tone of neck musculature was severely reduced with a continuous flexion position of her neck. Command-following was inconsistent with some minor evidence of reactions to verbal and visual cues (turning her head towards voices or her family), although there was no adequate response to complex commands. She could not speak, nor communicate reliably with yes/no responses, though, at times, could stick out her tongue on request or imitation. EEG examination revealed no signs of epilepsy, with a background pattern with relatively little differentiation, predominantly fast activity, and signs of reactivity with eye-opening. A structural MRI revealed widespread cortical atrophy, including marked tissue loss in both frontal regions, as well as in the basal ganglia, thalamus, and mesencephalon. Medications during the study period remained unchanged and included amantadine and baclofen, drugs that are often given to patients with decreased levels of consciousness to reduce spasticity and increase daytime arousal.
Clinical assessments
The current explorative study was designed to evaluate the clinical and neurophysiological effects of DBS of the central thalamus in a patient in MCS for the duration of 24 months (see Supplementary Material Fig. 1 for study design). After inclusion in the study, one month before DBS implantation, four separate baseline assessments of the level of consciousness were done using the Coma Recovery Scale-Revised (CRS-R). The CRS-R is the most recommended assessment scale for level of consciousness determination in PDOC patients40,41,42. It is a compound scale covering the domains of auditory, visual, motor, and verbal function, responsiveness, and arousal. The total score ranges from 0 (worst) to 23 (best) with specific subscores that can individually denote MCS-, MCS+, or emergence from MCS. Since the CRS-R has a ceiling effect, video recordings were performed to qualitatively capture the full spectrum of behavioral changes. The CRS-R examinations and video recordings were performed in the patient’s parental home to avoid unnecessary hospital visits. At the end of the study, 24 months after DBS implantation, four additional CRS-R examinations and video recordings were made to assess the clinical changes after long-term stimulation.
Surgical strategy: implantation of DBS electrodes
Before surgery, the patient underwent a 3T stereotactic MRI (Siemens, Malvern, Pennsylvania, USA), including axial T2-weighted and post-gadolinium (Gd) volumetric axial T1-weighted sequences. Pre-operative CM-Pf targets were determined from the mid-commissural point on anterior commissure–posterior commissure (AC-PC) aligned MRI images. Target planning for the central thalamus (intentionally aimed at the CM-Pf complex using traditional stereotactic methods) was optimized, based on the width of the third ventricle, with final coordinates: 9.8 mm lateral, 9.5 mm posterior, and 2.8 mm ventral to the midcommissural point. Planned trajectories were inspected to be pre-coronal, start on top of a gyrus, and to avoid ventricles and blood vessels. 18F-fluorodeoxyglucose (FDG)-PET/CT brain imaging was performed on a Siemens PET/CT system (Biograph mCT FlowTrue-V-128), conforming to European Association of Nuclear Medicine guidelines, to confirm the presence of FDG-uptake in the center of the anticipated target area (see Fig. 4)43. More specifically, FDG images were acquired for 10 min, starting at 30 min after bolus intravenous injection, and low-dose CT was used for attenuation correction. Images were reconstructed iteratively with point‐spread function and time‐of‐flight modelling, and a 2‐mm full‐width at half‐maximum Gaussian filter. Planning, including fusion of the MRI and PET, was done using Brainlab Elements software (Brainlab AG, Munich, Germany, version 3.2.0.281).
On the day of the surgery, a Leksell stereotactic frame (G-model, Elekta Ab, Stockholm, Sweden) was placed under general anesthesia and the patient was transported to the 1.5T MRI, where a frame-based stereotactic MRI was obtained. After fusion with the pre-operative 3T MRI, stereotactic coordinates of the planned targets were obtained. The patient was returned to the operating room and burr holes were made. A rigid macrostimulation electrode (Elekta) was inserted into the left and right target and replaced by a Boston Vercise™ Cartesia lead with eight 1.5 mm contact points separated by 0.5 mm interspaces (model DB2202, Boston Scientific, California, USA) under fluoroscopy. Left-sided ventral-to-dorsal contacts encoded 1–8 and right-sided ventral-to-dorsal contacts encoded 9–16. Subsequent implantation of a corresponding Boston Vercise™ pulse generator was done in a subcutaneous pocket in the infraclavicular region under general anesthesia in the same surgical session. One day after the operation, a CT-scan was made and co-registered to the MRI for lead localization. The patient was discharged two days after surgery.
Clinical follow-up and DBS ‘titration’
Two weeks after discharge, the pulse generator was turned on. During this experimental ‘titration-session’, a wide variety of settings was used to carefully study the direct effects of DBS on arousal, and to rule out any discomfort or aberrant neurological symptoms. Similar to DBS in movement disorders, programming was started with monopolar high-frequency stimulation (130 Hz; 60 μsec). Thereafter, the patient was visited multiple times and slight adjustments in DBS settings were made to test the effects on arousal and rule out any discomfort or aberrant neurological symptoms (for an overview see section Clinical Results and Supplementary Table 1). Changes in behavioral performance were directly evaluated. Additional visits were made at the request of the family when signs of behavioral responsiveness or deterioration were noted throughout the study period. After one year of high-frequency stimulation, a switch was made to a lower frequency setting (30 Hz; 450 μsec, similar to periaqueductal gray DBS in chronic pain), since a significant change in behavioral performance remained absent. Once again, different stimulation parameters were studied (see Supplementary Table 1). Finally, a switch to monopolar 50 Hz stimulation was made, which, after clinical evaluation, was considered the optimal setting (50 Hz; 450 μsec; 2.5 mA). Fifteen months after DBS implantation, and after using the various stimulation parameters, MEG studies were performed to evaluate the neurophysiological profiles of both stimulation settings. The effective electrical field size of both DBS regimes was visualized by calculating the volume of tissue activated (VTA) in Brainlab’s Guide-XT software module (Brainlab AG, Munich, Germany, version 3.2.0.281) (see Supplementary Material Fig. 4).
MEG data-acquisition and pre-processing
The MEG recordings were obtained in a magnetically shielded room in a supine resting-state condition. Pre-DBS, 30 min of MEG data had been recorded as a reference for further research. Fifteen months after implantation, multiple post-DBS datasets were recorded, starting with another resting-state condition without stimulation (the “DBS off” condition); the stimulator had been off for 12 h. Thereafter, the DBS was turned on, starting with low-frequency (50 Hz; 450 μsec; 2.5 mA) stimulation. After 10 min of recording, the DBS was turned off and a wash-out period followed (five minutes), which was also recorded, since 50 Hz stimulation causes artifacts in the MEG signal. Then the patient was removed from the magnetically shielded room for more than an hour, which was considered a sufficient wash-out period for all arousal effects of low-frequency stimulation. The DBS was then changed to the higher-frequency setting (130 Hz; 60 μsec; 2.5 mA) and another 10-min MEG recording followed, succeeded by another wash-out recording of five minutes.
MEG data were recorded using a 306-channel whole-head system (Elekta Neuromag Oy, Helsinki, Finland) with a sampling frequency of 1250 Hz and online anti-aliasing (410 Hz) and high-pass filtering (0.1 Hz). The head position relative to the MEG sensors was recorded continuously using the signals from five head position indicator (HPI) coils. The HPI positions and the outline of the patient's scalp (around 500 points) were digitized before the MEG recording using a 3D digitizer (Fastrak, Polhemus, Colchester, VT, USA). The patient's MEG data were co-registered to her structural MRI, using a surface-matching procedure with an estimated resulting accuracy of 4 mm44. This structural MRI of the head had been obtained two months before the baseline MEG session as part of clinical care, using a 3T Siemens MRI scanner (Siemens, Malvern, Pennsylvania, USA).
For MEG source-level analysis, extra processing steps were undertaken. The MEG data were first cleaned using both spatial and temporal filtering, after which the sensor-level data were projected to source space using an atlas-based beamformer. Neuronal activity (relative power and variability) and functional connectivity were quantified at the source level. In more detail, bad channels and data segments were first removed after visual inspection of the data. Thereafter, the temporal extension of Signal Space Separation (tSSS) in MaxFilter software (Elekta Neuromag Oy, version 2.2.15) was applied using standard settings: a correlation limit of 0.9 and a sliding window of 10 s45,46. The automated anatomical labeling (AAL) atlas was used to label the voxels in 78 cortical and 12 subcortical regions of interest (ROIs)47,48. This was done by registering the anatomical T1-weighted image to an MNI template and labeling all voxels according to the 90 ROIs. Subsequently, an inverse registration to anatomical subject space was performed. We used each ROI's centroid voxel as a representative for that ROI49. Subsequently, a scalar beamforming approach (beamformer, version 2.1.28; Elekta Neuromag Oy), similar to Synthetic Aperture Magnetometry (Robinson and Vrba, 1999), was used to project the sensor-level data to these centroids. The beamformer weights were based on the covariance of the recorded time series within a 0.5–48 Hz frequency window and the forward solution (lead field) of a dipolar source at the centroid voxel location, using a single-sphere head model fitted to the scalp surface extracted from the co-registered MRI50. The source orientation that maximized the beamformer output was obtained using eigenvalue decomposition51. Singular value truncation was used when inverting the data covariance matrix to deal with the rank deficiency of the data after SSS (≈70 components). Broadband data (0.5–48 Hz) were projected through the normalized beamformer weights52, resulting in a broadband time series for each centroid of the 90 ROIs49.
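For orientation, scalar beamformer weights of this kind are conventionally computed from the data covariance $\mathbf{C}$ and the lead field $\mathbf{l}$ of a dipolar source at the centroid. The following is a hedged sketch of the standard LCMV-type formula; the exact Elekta implementation may differ:

$$\mathbf{w}=\frac{\mathbf{C}^{-1}\,\mathbf{l}}{\mathbf{l}^{T}\mathbf{C}^{-1}\mathbf{l}},\qquad \hat{s}(t)=\mathbf{w}^{T}\mathbf{x}(t),$$

where $\mathbf{x}(t)$ is the vector of sensor signals and $\hat{s}(t)$ the reconstructed source time series.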
The amount of data used for further analysis was determined by the amount of artifact-free data (based on visual inspection by PT) available in each of the four conditions (pre-DBS, DBS off, DBS low-frequency stimulation, DBS high-frequency stimulation). Based on this, we kept the amount of data the same for all conditions (4.5 min). For the analysis of low-frequency stimulation, we used the MEG dataset from the washout phase, the first minutes directly after cessation of stimulation, since low-frequency stimulation was associated with a direct MEG artifact, which limited its interpretation. MEG-based functional connectivity (see below) for the patient (in all conditions) was compared with the average functional connectivity obtained from healthy volunteers. Based on gender and age, we selected, from a previously published dataset of healthy volunteers, all females of approximately the same age as this patient49,53. This resulted in six healthy age-matched females (mean age of 39), who had all undergone one five-minute, eyes-closed, resting-state MEG recording. Data acquisition, pre-processing, and analysis were performed in the same way as for the patient dataset.
MEG: estimation of functional connectivity and neural variability
We estimated power spectral densities using a Fast Fourier Transform (FFT) after applying a Hanning window. To this end, we used overlapping epochs of 3.2 s (4096 samples). Power spectral densities were averaged over all ROIs and epochs. We defined frequency bands as follows: theta (4–8 Hz), alpha1 (8–10 Hz), alpha2 (10–13 Hz), and beta (15–25 Hz). The choice of this range for the beta band was justified by the presence of artifacts with frequency components above 25 Hz in recordings during which DBS was on. Functional connectivity was estimated using the amplitude envelope correlation (AEC) from band-pass filtered time series54. The AEC captures co-fluctuations in modulations of the amplitude envelope. Pairwise orthogonalization was first performed to reduce the effects of signal leakage prior to connectivity estimation55,56. The amplitude envelopes were extracted from the analytical signal obtained from the Hilbert transform for every band-pass filtered time series. No further smoothing or downsampling was applied to the amplitude envelopes. The input to the AEC computation was the total amount of artifact-free data (4.5 min for all conditions, see above); hence, the data were not fed to the AEC computation in short epochs. Pearson correlations between amplitude envelopes were computed. The implementation of the AEC was the same as in Brookes et al. 201657. The AEC was calculated for all possible pairs of ROIs, resulting in a 90 × 90 weighted adjacency matrix that contained the functional connectivity values between all pairs (with a potential range of values between −1 and 1). Averaging over rows in this weighted adjacency matrix subsequently led to one mean functional connectivity value per ROI (i.e. the average functional connectivity of that ROI with the rest of the brain) per condition. Further averaging across mean connectivity values per ROI resulted in the global (whole-brain) functional connectivity for that condition.
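For illustration only, a hedged Python sketch of the AEC computation described above (the published analysis used in-house MATLAB code with pairwise orthogonalization, which is omitted here; array shapes and names are assumptions):

```python
import numpy as np
from scipy.signal import hilbert

def aec(ts):
    """ts: (n_rois, n_samples) band-pass filtered time series.
    Returns the ROI-by-ROI AEC matrix, per-ROI means, and the global mean."""
    env = np.abs(hilbert(ts, axis=1))    # amplitude envelopes via the Hilbert transform
    mat = np.corrcoef(env)               # Pearson correlations between envelopes
    np.fill_diagonal(mat, np.nan)        # ignore self-connections when averaging
    per_roi = np.nanmean(mat, axis=1)    # mean connectivity of each ROI with the rest
    return mat, per_roi, per_roi.mean()  # matrix, per-ROI, global (whole-brain) value
```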
Past work has shown that fluctuations in the amplitude envelope coincide with strong functional connectivity, i.e. periods of high amplitude envelope could serve as a window of opportunity for ongoing functional connectivity58. Hence, as the AEC captures co-fluctuations in the amplitude envelope, we lastly estimated the variability of the amplitude envelopes in the context of neural variability26. Neural variability was quantified in terms of the detrended standard deviation of the amplitude envelope for every ROI. We first subtracted the mean from every time series for every ROI, after which we computed the standard deviation (based on the total amount of artifact-free data, without dividing the data into epochs)59. The lower limit for the detrended standard deviation is zero, and the upper limit is determined by the range of the values of every time series. Power spectral density, neural variability, and functional connectivity analyses were performed in MATLAB 2018b (Mathworks; 9.1.0.441655) using in-house scripts.
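Under the same caveat (an illustration, not the authors' code; names are assumptions), the neural-variability metric reduces to a detrended standard deviation per ROI:

```python
def neural_variability(env):
    """env: (n_rois, n_samples) amplitude envelopes over the artifact-free segment."""
    detrended = env - env.mean(axis=1, keepdims=True)  # subtract the mean first
    return detrended.std(axis=1)                       # one value >= 0 per ROI
```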
Statistics
We compared the mean AEC between the different conditions (average over rows and columns in the AEC matrix). Since values between conditions were dependent and non-Gaussian, we used a non-parametric permutation test to assess differences between the means60. Note that we obtained one AEC matrix per condition. The genuine difference in mean AEC between two conditions was compared to a null distribution of mean differences of surrogate data. A null distribution was obtained as follows: (i) the starting points were the two AEC matrices from condition one (AEC1) and condition two (AEC2); (ii) we then created two dummy matrices A and B. We assigned a value to an entry i,j in dummy matrix A by randomly selecting a value from either AEC1(i,j) or AEC2(i,j). Here, i,j were the same for dummy matrix A as for AEC1 and AEC2. This was done for all matrix entries of dummy matrix A; (iii) the entries from AEC1(i,j) or AEC2(i,j) that were not selected for dummy matrix A were used to create dummy matrix B. Again, matrix elements i,j in dummy matrix B were only assigned matrix values from AEC1(i,j) or AEC2(i,j) with the same index i,j; (iv) we computed the mean for every dummy group (average over rows and columns of A or B) and their difference was added to the null distribution; (v) this procedure was repeated 100,000 times to obtain a null distribution of mean differences. The genuine difference between the mean values was assumed to be significant if this value was in the right 2.5% tail of the distribution. The same test was used to test for a difference between healthy controls and a subject-specific condition, with the difference that we used the average AEC matrix across healthy controls as input rather than individual matrices. The same test was also used for neural variability, with the difference that we created a null distribution based on the mean difference of two dummy vectors rather than dummy matrices. We performed correction for multiple tests using the False Discovery Rate (80 tests: 2 (number of metrics) × $$\left(\begin{array}{c}5\\ 2\end{array}\right)$$ (number of comparisons between conditions/groups) × 4 (number of frequency bands)).
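To make the permutation scheme concrete, here is a hedged Python sketch of steps (i)–(v) (illustration only; the published analysis used in-house code, and all names are assumptions):

```python
import numpy as np

def permutation_test(aec1, aec2, n_perm=100_000, seed=None):
    """Element-wise swap test between two AEC matrices, as in steps (i)-(v)."""
    rng = np.random.default_rng(seed)
    observed = np.nanmean(aec1) - np.nanmean(aec2)
    null = np.empty(n_perm)
    for p in range(n_perm):
        swap = rng.random(aec1.shape) < 0.5  # which entries to exchange between conditions
        a = np.where(swap, aec2, aec1)       # dummy matrix A
        b = np.where(swap, aec1, aec2)       # dummy matrix B (the unselected entries)
        null[p] = np.nanmean(a) - np.nanmean(b)
    # significant if the observed difference lies in the right 2.5% tail of the null
    return observed, float(np.mean(null >= observed))
```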
https://brilliant.org/problems/peas-in-a-pod/ | # Peas in a Pod
Geometry Level 3
Five peas are each tangent to two nonparallel lines and tangent to their neighbors. The largest pea has a radius of $$18$$ and the smallest pea has a radius of $$8$$.
What is the middle pea's radius?
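A hedged solution sketch (not part of the original problem page): circles tangent to the same two lines are related by homotheties centered at the lines' intersection, so consecutive radii form a geometric progression $r_{n+1}=k\,r_n$. With $r_1=8$ and $r_5=r_1k^4=18$, the middle radius is the geometric mean of the extremes:

$$r_3=r_1k^2=\sqrt{r_1 r_5}=\sqrt{8\times 18}=12$$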
https://www.educative.io/answers/what-is-a-mealy-machine |
# What is a Mealy machine?
Zainab Ilyas
### Overview
A Mealy machine is a finite state machine that produces an output value rather than terminating in a final state. For a given input, the machine generates a corresponding output. The output of a Mealy machine depends on the present state of the FA as well as the current input symbol.
Unlike other finite automata, which determine whether a particular string is accepted in a given language, a Mealy machine determines the output for a given input.
### Formal definition
A Mealy machine is a 6-tuple $(Q, \Sigma, q_0, \Delta, \delta, \lambda)$, where:
$Q$: This is the finite set of states.
$\Sigma$: This is the input alphabet (the set of input symbols).
$q_0$: This is the initial state.
$\Delta$: This is the output alphabet (the set of output symbols).
$\delta$: This is the transition function, that is: $Q \times \Sigma \to Q$
$\lambda$: This is the output function, that is: $Q \times \Sigma \to \Delta$
Note: The output function means that for every transition at a particular state, there is a corresponding output associated with it.
### Formation of the machine
The Mealy machine forms an FA of the type:
An example format of a Mealy machine
In the machine above, the transition on input $i$ will give an output $x$ from the state $q_0$. Moreover, from $q_1$ a transition on input $j$ will provide an output $y$.
Note: If the input string is of length $n$, the output produced by a Mealy machine will also be of length $n$.
### Example
Suppose we have the following Mealy machine:
An example Mealy machine.
Now, we take an input string over $\Sigma = \{0,1\}$ and see its output over $\Delta = \{a,b\}$. Let's take $x= 0010$.
#### Transitions Explanation
• The first input character is $0$ . Hence, we move to $q_1$ and output a $b$. Our output string is $b$.
"b" is produced by transition q0 to q1 on "0".
• For the next $0$, we move to $q_0$, which outputs an $a$. The output string becomes $ba$.
"a" is produced by transition q1 to q0 on "0".
• The next $1$ gives another $a$. The output string becomes $baa$.
"a" is produced by transition q0 to q0 on "1".
• The last $0$ takes us to $q_1$ and produces a $b$. The final output string is $baab$.
"b" is produced by transition q0 to q1 on "0".
#### Transition table
The transition table for the above machine will be:
| Current State | Destination State for 0 | Output at 0 | Destination State for 1 | Output at 1 |
| --- | --- | --- | --- | --- |
| q0 | q1 | b | q0 | a |
| q1 | q0 | a | q1 | b |
The Mealy machine is faster than the Moore machine, as its output responds to the input asynchronously, without waiting for a clock pulse.
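As an illustration (a hedged sketch, not part of the original article; the dictionary encoding and function names are my own), the example machine above can be simulated directly from its transition table:

```python
# (state, input) -> (next_state, output), matching the transition table above.
TRANSITIONS = {
    ("q0", "0"): ("q1", "b"),
    ("q0", "1"): ("q0", "a"),
    ("q1", "0"): ("q0", "a"),
    ("q1", "1"): ("q1", "b"),
}

def run_mealy(inputs, state="q0"):
    """Consume the input string and return the output string (same length)."""
    output = []
    for symbol in inputs:
        state, out = TRANSITIONS[(state, symbol)]
        output.append(out)
    return "".join(output)

print(run_mealy("0010"))  # prints "baab", as in the walkthrough above
```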
https://www.ademcetinkaya.com/2023/03/kvsa-khosla-ventures-acquisition-co.html | Outlook: Khosla Ventures Acquisition Co. Class A Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating.
Dominant Strategy : Hold
Time series to forecast n: 11 Mar 2023 for (n+8 weeks)
Methodology : Transductive Learning (ML)
## Abstract
Khosla Ventures Acquisition Co. Class A Common Stock prediction model is evaluated with Transductive Learning (ML) and Stepwise Regression1,2,3,4 and it is concluded that the KVSA stock is predictable in the short/long term. According to price forecasts for (n+8 weeks) period, the dominant strategy among neural network is: Hold
## Key Points
1. Operational Risk
2. Why do we need predictive models?
3. Buy, Sell and Hold Signals
## KVSA Target Price Prediction Modeling Methodology
We consider Khosla Ventures Acquisition Co. Class A Common Stock Decision Process with Transductive Learning (ML) where A is the set of discrete actions of KVSA stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4
F(Stepwise Regression)5,6,7 = $\begin{bmatrix} p_{a1} & p_{a2} & \dots & p_{1n}\\ \vdots & \vdots & & \vdots\\ p_{j1} & p_{j2} & \dots & p_{jn}\\ \vdots & \vdots & & \vdots\\ p_{k1} & p_{k2} & \dots & p_{kn}\\ \vdots & \vdots & & \vdots\\ p_{n1} & p_{n2} & \dots & p_{nn}\\ \end{bmatrix} \times$ R(Transductive Learning (ML)) $\times\; S(n) \rightarrow$ (n+8 weeks) $\vec{S}=\left(s_{1},s_{2},s_{3}\right)$
n:Time series to forecast
p:Price signals of KVSA stock
j:Nash equilibria (Neural Network)
k:Dominated move
a:Best response for target price
For further technical information as per how our model work we invite you to visit the article below:
How do AC Investment Research machine learning (predictive) algorithms actually work?
## KVSA Stock Forecast (Buy or Sell) for (n+8 weeks)
Sample Set: Neural Network
Stock/Index: KVSA Khosla Ventures Acquisition Co. Class A Common Stock
Time series to forecast n: 11 Mar 2023 for (n+8 weeks)
According to price forecasts for (n+8 weeks) period, the dominant strategy among neural network is: Hold
X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Grey to Black): *Technical Analysis%
## IFRS Reconciliation Adjustments for Khosla Ventures Acquisition Co. Class A Common Stock
1. Paragraph 6.3.6 states that in consolidated financial statements the foreign currency risk of a highly probable forecast intragroup transaction may qualify as a hedged item in a cash flow hedge, provided that the transaction is denominated in a currency other than the functional currency of the entity entering into that transaction and that the foreign currency risk will affect consolidated profit or loss. For this purpose an entity can be a parent, subsidiary, associate, joint arrangement or branch. If the foreign currency risk of a forecast intragroup transaction does not affect consolidated profit or loss, the intragroup transaction cannot qualify as a hedged item. This is usually the case for royalty payments, interest payments or management charges between members of the same group, unless there is a related external transaction. However, when the foreign currency risk of a forecast intragroup transaction will affect consolidated profit or loss, the intragroup transaction can qualify as a hedged item. An example is forecast sales or purchases of inventories between members of the same group if there is an onward sale of the inventory to a party external to the group. Similarly, a forecast intragroup sale of plant and equipment from the group entity that manufactured it to a group entity that will use the plant and equipment in its operations may affect consolidated profit or loss. This could occur, for example, because the plant and equipment will be depreciated by the purchasing entity and the amount initially recognised for the plant and equipment may change if the forecast intragroup transaction is denominated in a currency other than the functional currency of the purchasing entity.
2. For the purpose of applying paragraphs B4.1.11(b) and B4.1.12(b), irrespective of the event or circumstance that causes the early termination of the contract, a party may pay or receive reasonable compensation for that early termination. For example, a party may pay or receive reasonable compensation when it chooses to terminate the contract early (or otherwise causes the early termination to occur).
3. Lifetime expected credit losses are not recognised on a financial instrument simply because it was considered to have low credit risk in the previous reporting period and is not considered to have low credit risk at the reporting date. In such a case, an entity shall determine whether there has been a significant increase in credit risk since initial recognition and thus whether lifetime expected credit losses are required to be recognised in accordance with paragraph 5.5.3.
4. At the date of initial application, an entity shall assess whether a financial asset meets the condition in paragraphs 4.1.2(a) or 4.1.2A(a) on the basis of the facts and circumstances that exist at that date. The resulting classification shall be applied retrospectively irrespective of the entity's business model in prior reporting periods.
*International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS.
## Conclusions
Khosla Ventures Acquisition Co. Class A Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. Khosla Ventures Acquisition Co. Class A Common Stock prediction model is evaluated with Transductive Learning (ML) and Stepwise Regression1,2,3,4 and it is concluded that the KVSA stock is predictable in the short/long term. According to price forecasts for (n+8 weeks) period, the dominant strategy among neural network is: Hold
### KVSA Khosla Ventures Acquisition Co. Class A Common Stock Financial Analysis*
| Rating | Short-Term | Long-Term Senior |
| --- | --- | --- |
| Outlook* | Ba1 | Ba1 |
| Income Statement | B3 | Ba2 |
| Balance Sheet | Baa2 | Baa2 |
| Leverage Ratios | C | Baa2 |
| Cash Flow | C | C |
| Rates of Return and Profitability | C | Baa2 |
*Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents.
How does neural network examine financial reports and understand financial state of the company?
### Prediction Confidence Score
Trust metric by Neural Network: 72 out of 100 with 855 signals.
## References
1. S. J. Russell and A. Zimdars. Q-decomposition for reinforcement learning agents. In Machine Learning, Proceedings of the Twentieth International Conference (ICML 2003), August 21-24, 2003, Washington, DC, USA, pages 656–663, 2003.
2. J. Spall. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Transactions on Automatic Control, 37(3):332–341, 1992.
3. Wu X, Kumar V, Quinlan JR, Ghosh J, Yang Q, et al. 2008. Top 10 algorithms in data mining. Knowl. Inform. Syst. 14:1–37
4. L. Kuyer, S. Whiteson, B. Bakker, and N. A. Vlassis. Multiagent reinforcement learning for urban traffic control using coordination graphs. In Machine Learning and Knowledge Discovery in Databases, European Conference, ECML/PKDD 2008, Antwerp, Belgium, September 15-19, 2008, Proceedings, Part I, pages 656–671, 2008.
5. J. Harb and D. Precup. Investigating recurrence and eligibility traces in deep Q-networks. In Deep Reinforcement Learning Workshop, NIPS 2016, Barcelona, Spain, 2016.
6. Matzkin RL. 2007. Nonparametric identification. In Handbook of Econometrics, Vol. 6B, ed. J Heckman, E Learner, pp. 5307–68. Amsterdam: Elsevier
7. Bottou L. 1998. Online learning and stochastic approximations. In On-Line Learning in Neural Networks, ed. D Saad, pp. 9–42. New York: ACM
## Frequently Asked Questions

Q: What is the prediction methodology for KVSA stock?
A: KVSA stock prediction methodology: We evaluate the prediction models Transductive Learning (ML) and Stepwise Regression
Q: Is KVSA stock a buy or sell?
A: The dominant strategy among neural network is to Hold KVSA Stock.
Q: Is Khosla Ventures Acquisition Co. Class A Common Stock stock a good investment?
A: The consensus rating for Khosla Ventures Acquisition Co. Class A Common Stock is Hold and is assigned short-term Ba1 & long-term Ba1 estimated rating.
Q: What is the consensus rating of KVSA stock?
A: The consensus rating for KVSA is Hold.
Q: What is the prediction period for KVSA stock?
A: The prediction period for KVSA is (n+8 weeks)
https://love2d.org/forums/viewtopic.php?f=4&t=82307 | ## Rhythm game help
Le_juiceBOX
### Rhythm game help
I'm making a rhythm game (like Guitar Hero, Rock Band, etc.) and when a note comes to the white bar (where the "notes" are detected), it only makes the hit counter go down by 1. I scripted a variable that detects whether the note is colliding with the white bar, so my script is as follows:
Code: Select all
if collision:detect(note.x, note.y, note.w, note.h, notehit.x, notehit.y, notehit.w, notehit.h) then
note.TouchingHitter = 1
else
note.TouchingHitter = 0
end
end
This did have the variable change to 1 when the note was touching the white bar, this is the script that for some reason does not work:
Code: Select all
if note.TouchingHitter == 0 and love.keyboard.isDown("e") then
note.hits = note.hits - 1
if note.TouchingHitter == 1 and love.keyboard.isDown("e") then
note.hits = note.hits + 1
end
end
end
I will provide all my scripts to help find the problem(s); I can't compile them, sorry. If you can, note where I went wrong. Thanks!
Attachments
notes.lua
notehit.lua
main.lua
Le_juiceBOX
### Re: Rhythm game help
the rest-
Attachments
conf.lua
collisions.lua
Kingdaro
### Re: Rhythm game help
First off, a few things:
You can "compile" all of your game scripts by selecting them all, then adding them to a zip file and changing the extension to .love. This'll make your game easier to view and test for us. This is what you're going to want to do, essentially:
Your tabbing's a little off. With proper tabbing, bugs are a lot easier to find. The general rule is to tab forward after function/if/while/for/do lines, then tab back on end/else/elseif. Here's a fixed portion of this bit of code:
Code: Select all
function NotCollideNoteHitter()
if note.TouchingHitter == 0 and love.keyboard.isDown("e") then
note.hits = note.hits - 1
if note.TouchingHitter == 1 and love.keyboard.isDown("e") then
note.hits = note.hits + 1
end
end
end
I'd recommend using "true" and "false" instead of 1 and 0, since they work a little more nicely in Lua if statements. You can compress this function pretty easily:
Code: Select all
function AntiSpamHitter()
note.TouchingHitter = collision:detect(note.x, note.y, note.w, note.h, notehit.x, notehit.y, notehit.w, notehit.h)
end
So then, onto your actual problem, here it seems like a simple matter of responding to a keypress, then checking if the note is colliding with the hitter (which is more technically called a Receptor, fun fact), and "hit" the note if this is true. Add this function to your notes.lua:
Code: Select all
function TapNote()
if note.TouchingHitter then
note.x = 0
note.hits = note.hits + 1
end
end
Code: Select all
function love.keypressed(key)
if key == 'e' then
TapNote()
end
end
Sulunia
### Re: Rhythm game help
I'm going to give my bits of advice here!
In rhythm games, as the name says, notes fall down at a fixed speed. Thing is, normally, the song's current playing time is what dictates when the note will hit the bottom of the screen.
For example, note number one will hit the bottom of the screen when the music is at 0:15.
Code: Select all
note[1].y = - 15000 --this value should be in milliseconds!
Remember: objects moving down ingame need to have their Y value added, not subtracted! Otherwise, notes would slide up instead of down.
Then, every frame, you draw the note according to the song time, making sure it hits the bottom of the screen, or your hitBar Y height...
Code: Select all
for i = 1, #notes do --for every note in your list, do
notes[i]:draw(notes[i].x, (notes[i].y - hitBar.y + screenHeight + currentSongTime))
end
So, instead of making the note slide down a fixed bit every frame (which is what you're doing), you must update its position using the song time as the basis, to make sure it stays synchronized with the music; otherwise, you're gonna have a bad time.
Then later, you can worry about song timer interpolation, relaxed hitInterval, audio delays..
Now, as for the problem you're facing: this is because the interval you have to press the note in is quite short, I'm afraid. So, I'd start by relaxing the precision the user needs to hit the note correctly, by, say, 30 ms.
https://blender.stackexchange.com/questions/100485/rotation-matrix | # Rotation Matrix
I am having a hard time trying to decipher how the Blender game engine allocates the rotation values in the rotation matrix; this website describes the values like this.
But when I print the rotation matrix on the console, it gives me weird positions.
That value 0.766 should be in the first column, first row where the 1.0 value is; that value should describe the rotation on X. Plz halp :c.
You rotated the object around the X-axis. In other words it is a rotation on the YZ-plane.
The projection on the X-axis (rotation axis) will remain unchanged, regardless how much you turn the object. It was (0,0,0) and is still (0,0,0).
Hint: the rotation axis does not change.
The rotation is a 2D-rotation on the YZ-plane. Here you can see the formulas you might remember from school:
$$x = \cos(\alpha)$$ $$y = \sin(\alpha)$$
As we have a plane, we can use a 2D rotation matrix:
$$\begin{bmatrix} \cos(\alpha) & -\sin(\alpha)\\ \sin(\alpha) & \cos(\alpha) \end{bmatrix} \begin{bmatrix} y\\ z\\ \end{bmatrix}$$
One component is y, the other is z.
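To make this concrete, here is a hedged numpy sketch (my own illustration, not from the original answer) of an X-axis rotation matrix; note how the first row and column stay (1, 0, 0) while the cos/sin terms live in the YZ block:

```python
import numpy as np

alpha = np.radians(40)      # cos(40 deg) ~ 0.766, like the value in the question
c, s = np.cos(alpha), np.sin(alpha)

rot_x = np.array([
    [1.0, 0.0, 0.0],        # the rotation axis (X) is left unchanged
    [0.0,   c,  -s],        # Y and Z mix: the 2D rotation on the YZ plane
    [0.0,   s,   c],
])
print(np.round(rot_x, 3))
```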
• Sorry, I figured this out like a week ago; yes, that was my bad xD, I need to rotate around the Z axis. Thx a lot :) – Yasef Feb 21 '18 at 0:04
https://en.wikipedia.org/wiki/Majority_problem_(cellular_automaton) | # Majority problem (cellular automaton)
The majority problem, or density classification task, is the problem of finding one-dimensional cellular automaton rules that accurately perform majority voting.
Using local transition rules, cells cannot know the total count of all the ones in the system. In order to count the number of ones (or, by symmetry, the number of zeros), the system requires a logarithmic number of bits in the total size of the system. It also requires the system to send messages over a distance linear in the size of the system, and to recognize a non-regular language. Thus, this problem is an important test case in measuring the computational power of cellular automaton systems.
## Problem statement
Given a configuration of a two-state cellular automaton with i + j cells total, i of which are in the zero state and j of which are in the one state, a correct solution to the voting problem must eventually set all cells to zero if i > j and must eventually set all cells to one if i < j. The desired eventual state is unspecified if i = j.
The problem can also be generalized to testing whether the proportion of zeros and ones is above or below some threshold other than 50%. In this generalization, one is also given a threshold $\rho$; a correct solution to the voting problem must eventually set all cells to zero if $\tfrac{i}{i+j}<\rho$ and must eventually set all cells to one if $\tfrac{j}{i+j}>\rho$. The desired eventual state is unspecified if $\tfrac{j}{i+j}=\rho$.
## Approximate solutions
Gács, Kurdyumov, and Levin found an automaton that, although it does not always solve the majority problem correctly, does so in many cases.[1] In their approach to the problem, the quality of a cellular automaton rule is measured by the fraction of the $2^{i+j}$ possible starting configurations that it correctly classifies.
The rule proposed by Gács, Kurdyumov, and Levin sets the state of each cell as follows. If a cell is 0, its next state is formed as the majority among the values of itself, its immediate neighbor to the left, and its neighbor three spaces to the left. If, on the other hand, a cell is 1, its next state is formed symmetrically, as the majority among the values of itself, its immediate neighbor to the right, and its neighbor three spaces to the right. In randomly generated instances, this achieves about 78% accuracy in correctly determining the majority.
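As an illustration, here is a hedged Python sketch of the rule just described (assumptions: a cyclic array of 0/1 cells; this is not code from the original paper):

```python
def gkl_step(cells):
    """One synchronous update of the Gacs-Kurdyumov-Levin rule (cyclic boundaries)."""
    n = len(cells)
    nxt = []
    for i, c in enumerate(cells):
        if c == 0:
            # majority of: itself, immediate left neighbor, neighbor 3 to the left
            votes = cells[i] + cells[i - 1] + cells[i - 3]   # negative indices wrap
        else:
            # symmetric: itself, immediate right neighbor, neighbor 3 to the right
            votes = cells[i] + cells[(i + 1) % n] + cells[(i + 3) % n]
        nxt.append(1 if votes >= 2 else 0)
    return nxt
```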
Das, Mitchell, and Crutchfield showed that it is possible to develop better rules using genetic algorithms.[2]
## Impossibility of a perfect classifier
In 1995, Land and Belew[3] showed that no two-state rule with radius r and density ρ correctly solves the voting problem on all starting configurations when the number of cells is sufficiently large (larger than about 4r/ρ).
Their argument shows that, because the system is deterministic, a cell whose entire neighborhood consists of zeros must itself become a zero (and symmetrically for ones). Likewise, a perfect rule can never make the ratio of ones go above $\rho$ if it was below (or vice versa). They then show that any assumed perfect rule will either cause an isolated one that pushed the ratio over $\rho$ to be cancelled out or, if the ratio of ones is less than $\rho$, will cause an isolated one to introduce spurious ones into a block of zeros, causing the ratio of ones to become greater than $\rho$.
## Exact solution with alternative termination conditions
As observed by Capcarrere, Sipper, and Tomassini,[4][5] the majority problem may be solved perfectly if one relaxes the definition by which the automaton is said to have recognized the majority. In particular, for the Rule 184 automaton, when run on a finite universe with cyclic boundary conditions, each cell will infinitely often remain in the majority state for two consecutive steps while only finitely many times being in the minority state for two consecutive steps.
Alternatively, a hybrid automaton that runs Rule 184 for a number of steps linear in the size of the array and then switches to the majority rule (Rule 232), which sets each cell to the majority of itself and its neighbors, solves the majority problem with the standard recognition criterion of either all zeros or all ones in the final state. However, this machine is not itself a cellular automaton.[6] Moreover, it has been shown that Fukś's composite rule is very sensitive to noise and cannot outperform the noisy Gács–Kurdyumov–Levin automaton, an imperfect classifier, for any level of noise (e.g., from the environment or from dynamical mistakes).[7]
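A hedged sketch of this two-phase scheme (cyclic boundaries assumed; the step counts here are simplified, whereas Fukś's construction specifies slightly different counts):

```python
def ca_step(cells, rule):
    """One synchronous update of an elementary CA given its Wolfram rule number."""
    n = len(cells)
    return [(rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def classify_density(cells):
    cells = list(cells)
    n = len(cells)
    for _ in range(n // 2):        # phase 1: Rule 184 (traffic rule) compacts the 1s
        cells = ca_step(cells, 184)
    for _ in range(n // 2):        # phase 2: Rule 232 (local majority vote)
        cells = ca_step(cells, 232)
    return cells                   # all zeros or all ones when a clear majority exists
```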
## References
1. ^ Gács, Péter; Kurdyumov, G. L.; Levin, L. A. (1978). "One dimensional uniform arrays that wash out finite islands". Problemy Peredachi Informatsii (in Russian). 14: 92–98.
2. ^ Das, Rajarshi; Crutchfield, J. P.; Mitchell, Melanie; Hanson, J. E. (1995). Eshelman, Larry J., ed. Evolving globally synchronized cellular automata (PDF). Proceedings of the Sixth International Conference on Genetic Algorithms. San Francisco: Morgan Kaufmann.
3. ^ Land, Mark; Belew, Richard (1995). "No perfect two-state cellular automata for density classification exists". Physical Review Letters. 74 (25): 1548–1550. Bibcode:1995PhRvL..74.5148L. doi:10.1103/PhysRevLett.74.5148. PMID 10058695.
4. ^ Capcarrere, Mathieu S.; Sipper, Moshe; Tomassini, Marco (1996). "Two-state, r = 1 cellular automaton that classifies density". Phys. Rev. Lett. 77 (24): 4969–4971. Bibcode:1996PhRvL..77.4969C. doi:10.1103/PhysRevLett.77.4969. PMID 10062680.
5. ^ Sukumar, N. (1998). "Effect of boundary conditions on cellular automata that classify density". arXiv:comp-gas/9804001. Bibcode:1998comp.gas..4001S.
6. ^ Fukś, Henryk (1997). "Solution of the density classification problem with two cellular automata rules". Physical Review E. 55 (3): 2081–2084. arXiv:comp-gas/9703001. Bibcode:1997comp.gas..3001F. doi:10.1103/physreve.55.r2081.
7. ^ Mendonça, J. R. G. (2011). "Sensitivity to noise and ergodicity of an assembly line of cellular automata that classifies density". Physical Review E. 83 (3): 031112. arXiv:1010.0239. Bibcode:2011PhRvE..83c1112M. doi:10.1103/PhysRevE.83.031112.
https://gssc.esa.int/navipedia/index.php/GLONASS_Satellite_Coordinates_Computation |
# GLONASS Satellite Coordinates Computation
Fundamentals
Title GLONASS Satellite Coordinates Computation
Author(s) J. Sanz Subirana, J.M. Juan Zornoza and M. Hernández-Pajares, Technical University of Catalonia, Spain.
Level Intermediate
Year of Publication 2011
The GLONASS satellite coordinates shall be computed according to the specifications in the GLONASS-ICD document. An accuracy level of about three meters can be reached using the algorithm provided by this ICD.
Table 1 lists the broadcast ephemeris parameters which are used to compute GLONASS satellite coordinates. Essentially, the ephemeris contains the initial position and velocity conditions needed to perform the numerical integration of the GLONASS orbit within the measurement interval $|t - t_e| \lt 15$ minutes. The accelerations due to solar and lunar gravitational perturbations are also given.
Table 1: GLONASS broadcast ephemeris and clock message parameters.
In order to compute PZ-90 GLONASS satellite coordinates from the navigation message, the following algorithm must be used [GLONASS ICD, 1998] [1].
## Computation equations and algorithm
• 1. Coordinates transformation to an inertial reference frame:
- The initial conditions $(x(t_e),y(t_e),z(t_e),v_x(t_e),v_y(t_e),v_z(t_e))$, as broadcast in the GLONASS navigation message, are in the ECEF Greenwich coordinate system PZ-90. Therefore, prior to orbit integration, they must be transformed to an absolute (inertial) coordinate system using the following expressions [footnotes 1]:
Position:
$\begin{array}{l} x_a(t_e)=x(t_e) \cos(\theta_{G_e}) - y(t_e) \sin (\theta_{G_e}) \\ y_a(t_e)=x(t_e) \sin(\theta_{G_e}) + y(t_e) \cos(\theta_{G_e}) \\ z_a(t_e)=z(t_e) \\ \end{array} \qquad \mbox{(1)}$
Velocity:
$\begin{array}{l} v_{x_a}(t_e)=v_x(t_e) \cos(\theta_{G_e}) - v_y(t_e) \sin (\theta_{G_e})- \omega_E \; y_a(t_e) \\ v_{y_a}(t_e)=v_x(t_e) \sin(\theta_{G_e}) + v_y(t_e) \cos(\theta_{G_e})+\omega_E \; x_a(t_e) \\ v_{z_a}(t_e)=v_z(t_e) \\ \end{array} \qquad \mbox{(2)}$
- The $\left( X''(t_e),Y''(t_e),Z''(t_e) \right)$ acceleration components broadcast in the navigation message are the projections of the luni-solar accelerations onto the axes of the ECEF Greenwich coordinate system. Hence, these accelerations must be transformed to the inertial system by:
$\begin{array}{l} (Jx_am+Jx_as)=X''(t_e) \cos(\theta_{G_e}) -Y''(t_e) \sin(\theta_{G_e})\\ (Jy_am+Jy_as)=X''(t_e) \sin(\theta_{G_e}) +Y''(t_e) \cos(\theta_{G_e})\\ (Jz_am+Jz_as)=Z''(t_e)\\ \end{array} \qquad \mbox{(3)}$
where $\theta_{G_e}$ is the sidereal time at the Greenwich meridian at epoch $t_e$, to which the initial conditions are referred:
$\theta_{G_e}= \theta_{G_0} + \omega_E (t_e-3\, hours) \qquad \mbox{(4)}$
being:
- $\omega_E$: earth's rotation rate ($0.7292115\cdot 10^{-4}\; rad/s$).
- $\theta_{G_0}$: the sidereal time in Greenwich at midnight GMT of a date at which the epoch $t_e$ is specified. (Notice: GLONASS_time = UTC(SU) + $3$ hours).
• 2. Numerical integration of differential equations that describe the motion of the satellites.
According to the GLONASS-ICD, the re-calculation of ephemeris from epoch $t_e$ to epoch $t_i$ within the measurement interval ($|t_i-t_e|\lt 15\ min$) shall be performed by numerical integration of the differential equations (5) describing the motion of the satellites. These equations shall be integrated in a direct absolute geocentric coordinate system OXa, OYa, OZa, connected with the current equator and vernal equinox, using the 4th-order Runge–Kutta technique:
$\left\{ \begin{array}{l} \frac{dx_a}{dt}=v_{x_a}(t)\\ \frac{dy_a}{dt}=v_{y_a}(t)\\ \frac{dz_a}{dt}=v_{z_a}(t)\\ \frac{dv_{x_a}}{dt}=-\bar{\mu} \bar{x}_a +\frac{3}{2}C_{20}\bar{\mu} \bar{x}_a \bar{\rho}^2(1-5 \bar{z}_a^2)+ Jx_am+Jx_as\\ \frac{dv_{y_a}}{dt}=-\bar{\mu} \bar{y}_a +\frac{3}{2}C_{20}\bar{\mu} \bar{y}_a \bar{\rho}^2(1-5 \bar{z}_a^2)+ Jy_am+Jy_as\\ \frac{dv_{z_a}}{dt}=-\bar{\mu} \bar{z}_a +\frac{3}{2}C_{20}\bar{\mu} \bar{z}_a \bar{\rho}^2(3-5 \bar{z}_a^2)+ Jz_am+Jz_as\\ \end{array} \right . \qquad \mbox{(5)}$
where:
$\bar{\mu}=\frac{\mu}{r^2}$, $\bar{x_a}=\frac{x_a}{r}$, $\bar{y_a}=\frac{y_a}{r}$, $\bar{z_a}=\frac{z_a}{r}$, $\bar{\rho}=\frac{a_E}{r}$, $r=\sqrt{x_a^2+y_a^2+z_a^2}$
$a_E= 6\,378.136\; km$ Equatorial radius of the Earth (PZ-90).
$\mu= 398\,600.44\; km^3/s^2$ Gravitational constant (PZ-90).
$C_{20}=-1\,082.63\cdot 10^{-6}$ Second zonal coefficient of spherical harmonic expression.
Note: In the above differential equations system (5), the term $C_{20}=-J_2=+\sqrt{5}\bar{C}_{20}$ is used instead of $J_2$ in equations $V(r,\phi,\lambda)=\frac{\mu}{r}\left[1+\frac{1}{2}\left(\frac{a_e}{r}\right)^2 J_2\;\;(1-3\sin^2 \phi) \right]$ and $\mathbb{\mathbf {\ddot r}}=\nabla V+\mathbb{\mathbf k}_{sun\_moon}$ to keep the same expressions as in the GLONASS-ICD (please refer to Perturbed Motion and GNSS Broadcast Orbits)
The right-hand side of the previous equation system (5) takes into account the accelerations determined by the central body gravitational constant $\mu$, the second zonal coefficient $C_{20}$ (that characterises polar flattening of the Earth), and the accelerations due to the luni-solar gravitational perturbation.
Runge-Kutta integration algorithm
• Given the following initial value problem:
$\left\{ \begin{array}{c} \frac{dy_1}{dt}=f_1(t,y_1,\cdots,y_n)\\ \vdots\\ \frac{dy_n}{dt}=f_n(t,y_1,\cdots,y_n)\\ \end{array} \right . \Longleftrightarrow \mathbb{\mathbf Y}'(t)=\mathbb{\mathbf F}(t,\mathbb{\mathbf Y}(t)) \qquad \mbox{(6)}$
$\mathbb{\mathbf Y}(t_0)=[y_1(t_0), \cdots, y_n(t_0)]^T$, $\mathbb{\mathbf Y'}(t_0)=[y'_1(t_0), \cdots, y'_n(t_0)]^T$
It is desired to find the $\mathbb{\mathbf Y}(t_f)$ at some final time $t_f$, or $\mathbb{\mathbf Y}(t_k)$ at some discrete list of points $t_k$ (for example, at tabulated intervals).
• The Runge-Kutta method is based on the following algorithm:
$\begin{array}{l} \mathbb{\mathbf K}_1= \mathbb{\mathbf F}(t_n,\mathbb{\mathbf Y}_n)\\ \mathbb{\mathbf K}_2= \mathbb{\mathbf F}(t_n+h/2,\mathbb{\mathbf Y}_n+h \mathbb{\mathbf K}_1/2)\\ \mathbb{\mathbf K}_3= \mathbb{\mathbf F}(t_n+h/2,\mathbb{\mathbf Y}_n+h \mathbb{\mathbf K}_2/2)\\ \mathbb{\mathbf K}_4= \mathbb{\mathbf F}(t_n+h,\mathbb{\mathbf Y}_n+h \mathbb{\mathbf K}_3)\\ \mathbb{\mathbf Y}_{n+1}=\mathbb{\mathbf Y}_n+\frac{h}{6}(\mathbb{\mathbf K}_1+2\mathbb{\mathbf K}_2+2\mathbb{\mathbf K}_3+\mathbb{\mathbf K}_4)+ O(h^5)\\ \end{array} \qquad \mbox{(7)}$
The method is initialised with the initial conditions $\mathbb{\mathbf Y}(t_0)$ and $\mathbb{\mathbf Y}'(t_0)$. For the numerical integration of GLONASS satellite orbits, the function $\mathbb{\mathbf F}(t,\mathbb{\mathbf Y})$ is given by (5).
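As an illustration only — a hedged Python sketch of integrating system (5) with the Runge–Kutta scheme (7); this is not the official ICD implementation, and the function and variable names are my own:

```python
import numpy as np

MU = 398600.44        # km^3/s^2, PZ-90 gravitational constant
A_E = 6378.136        # km, PZ-90 equatorial radius of the Earth
C20 = -1082.63e-6     # second zonal coefficient

def deriv(state, lunisolar):
    """Right-hand side of system (5); lunisolar = (Jx, Jy, Jz) from equation (3)."""
    x, y, z, vx, vy, vz = state
    jx, jy, jz = lunisolar
    r = np.sqrt(x*x + y*y + z*z)
    mu_bar = MU / r**2
    rho2 = (A_E / r)**2
    xb, yb, zb = x / r, y / r, z / r
    k = 1.5 * C20 * mu_bar * rho2
    ax = -mu_bar*xb + k*xb*(1 - 5*zb**2) + jx
    ay = -mu_bar*yb + k*yb*(1 - 5*zb**2) + jy
    az = -mu_bar*zb + k*zb*(3 - 5*zb**2) + jz
    return np.array([vx, vy, vz, ax, ay, az])

def rk4_step(state, lunisolar, h):
    """One 4th-order Runge-Kutta step of size h seconds, as in (7)."""
    k1 = deriv(state, lunisolar)
    k2 = deriv(state + h/2*k1, lunisolar)
    k3 = deriv(state + h/2*k2, lunisolar)
    k4 = deriv(state + h*k3, lunisolar)
    return state + h/6*(k1 + 2*k2 + 2*k3 + k4)
```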
• 3. Coordinates transformation back to the PZ-90 reference system:
The coordinates $(x(t), y(t), z(t))$, obtained from the numerical integration of the motion equations, shall be transformed back to the Earth-fixed reference frame PZ-90 with the following equations:
$\begin{array}{l} x(t)= x_a(t) \cos(\theta_G) + y_a(t) \sin (\theta_G)\\ y(t)=- x_a(t) \sin(\theta_G) + y_a(t) \cos(\theta_G)\\ z(t)= z_a(t) \\ \end{array} \qquad \mbox{(8)}$
where $\theta_G$ is the sidereal time at the Greenwich meridian at time $t$, with $t$ in GLONASS time; see equation (4):
$\theta_G= \theta_{G_0} + \omega_E (t - 3~hours) \qquad \mbox{(9)}$
$GLONASS\_time= UTC(SU)+3~hours \qquad \mbox{(10)}$
Note that GLONASS satellite coordinates are computed in the PZ-90 reference system, instead of WGS-84, in which the GPS coordinates are calculated. To bring the PZ-90 coordinate system into coincidence with WGS-84, the transformation given by equation (11) must be applied (see Reference Frames in GNSS):
$\left [ \begin{array}{c} x'\\ y'\\ z'\\ \end{array} \right ] = \left [ \begin{array}{c} x\\ y\\ z\\ \end{array} \right ] + \left [ \begin{array}{ccc} -3\,ppb & -353\,mas & -4\,mas\\ 353\,mas & -3\,ppb & 19\,mas\\ 4\,mas & -19\,mas & -3\,ppb\\ \end{array} \right ] \left [ \begin{array}{c} x\\ y\\ z\\ \end{array} \right ] + \left [ \begin{array}{c} 0.07\,m\\ -0.0\,m\\ -0.77\,m\\ \end{array} \right ] \qquad \mbox{(11)}$
The transformation from PZ-90.02 to WGS-84 (actually ITRF2000) is given by $\Delta x= -0.36\,m$, $\Delta y= +0.08\, m$, $\Delta z= +0.18\, m$, with no rotation, i.e., equation (12)[footnotes 2]:
$\left [ \begin{array}{c} x\\ y\\ z\\ \end{array} \right ]_{ITRF2000} = \left [ \begin{array}{c} x\\ y\\ z\\ \end{array} \right ]_{PZ-90.02} + \left [ \begin{array}{r} -0.36\,m\\ 0.08 \,m\\ 0.18 \,m\\ \end{array} \right ] \qquad \mbox{(12)}$
## References
1. ^ [GLONASS ICD, 1998] GLONASS ICD, 1998. Technical report. v.4.0.
## Notes
1. ^ Note: Over small integration intervals, a simple rotation by the angle $\theta_{G_e}$ around the Z-axis is enough to perform this transformation. Nutation and precession of the earth and polar motion are very slow processes and will not introduce significant deviations over such short integration time intervals (see Transformation between Celestial and Terrestrial Frames).
2. ^ PZ-90.02 was implemented on September 20th, 2007 at 18:00 (refer to Reference Frames in GNSS).
https://www.reconlearn.org/post/simulated-evd-early.html | # Ebola simulation part 1: early outbreak assessment
/ [practicals] / #simulation #response #ebola #epicurve #reproduction number
This practical simulates the early assessment and reconstruction of an Ebola Virus Disease (EVD) outbreak. It introduces various aspects of analysis of the early stage of an outbreak, including contact tracing data, epicurves, growth rate estimation from log-linear models, and more refined estimates of transmissibility. A follow-up practical will provide an introduction to transmission chain reconstruction using outbreaker2.
# A novel EVD outbreak in Ankh, Republic of Morporkia
A new EVD outbreak has been notified in the small city of Ankh, located in the Northern, rural district of the Republic of Morporkia. Public Health Morporkia (PHM) is in charge of coordinating the outbreak response, and have contracted you as a consultant in epidemics analysis to inform the response in real time.
## Required packages
The following packages are needed for this case study:
For some of these packages, we recommend using the github (development) version, either because it is more up-to-date, or because they have not been released on CRAN yet.
We provide installation commands for each package; note that you only need to install these packages if they are not already present on your system.
## instructions for CRAN packages
install.packages("rio")
install.packages("ggplot2")
install.packages("dplyr")
install.packages("magrittr")
install.packages("outbreaks")
install.packages("incidence")
install.packages("distcrete")
install.packages("epitrix")
remotes::install_github("reconhub/epicontacts")
remotes::install_github("reconhub/linelist")
remotes::install_github("reconhub/earlyR")
remotes::install_github("reconhub/projections")
What if you are using a RECON deployer to install packages?
The RECON deployer should contain all the right versions of the different packages; if you are using a deployer, you can install all needed packages by:
1. go to the deployer, double-click on the .Rproj file to open a new Rstudio session in the right folder (no need to close the session you have opened for the practical)
2. type the instructions below:
## activate the deployer
source("activate.R")
## install packages from the deployer
install.packages("rio")
install.packages("ggplot2")
install.packages("dplyr")
install.packages("magrittr")
install.packages("outbreaks")
install.packages("incidence")
install.packages("distcrete")
install.packages("epitrix")
install.packages("epicontacts")
install.packages("linelist")
install.packages("earlyR")
install.packages("projections")
library(rio)
library(ggplot2)
library(dplyr)
library(magrittr)
library(outbreaks)
library(incidence)
library(epicontacts)
library(linelist)
library(distcrete)
library(epitrix)
library(earlyR)
library(projections)
## Importing data
While a new data update is pending, you have been given the following linelist and contact data, from the early stages of the outbreak:
• phm_evd_linelist_2017-10-27.xlsx: a linelist containing case information up to the 27th October 2017
• phm_evd_contacts_2017-10-27.xlsx: a list of contacts reported between cases up to the 27th October 2017, where from indicates a potential source of infection, and to the recipient of the contact.
To read into R, download these files and use the function import() from the rio package to import the data. Each import will create a data table stored as a tibble object. Call the first one linelist, and the second one contacts. For instance, your first command line could look like:
linelist <- rio::import("phm_evd_linelist_2017-10-27.xlsx")
Note that for further analyses, you will need to make sure all dates are stored as Date objects. This could be done manually using as.Date, but will be taken care of here using linelist’s clean_data:
## Data cleaning
Once imported, the raw data should look like:
## linelist: one line per case
linelist_raw
## id Date of Onset Date.Report. SEX. Âge location
## 1 39e9dc 43018 43030 Female 62 Pseudopolis
## 2 664549 43024 43032 Male 28 peudopolis
## 3 B4D8AA 43025 “23/10/2017” male 54 Ankh-Morpork
## 4 51883d “18// 10/2017” 22-10-2017 male 57 PSEUDOPOLIS
## 5 947e40 43028 “2017/10/25” f 23 Ankh Morpork
## 6 9aa197 43028 “2017-10-23” f 66 AM
## 7 e4b0a2 43029 “2017-10-24” female 13 Ankh Morpork
## 8 AF0AC0 “2017-10-21” “26-10-2017” M 10 ankh morpork
## 9 185911 “2017-10-21” “26-10-2017” female 34 AM
## 10 601D2E “2017/10/22” “28-10-2017” <NA> 11 AM
## 11 605322 43030 “28 / 10 / 2017” FEMALE 23 Ankh Morpork
## 12 E399B1 23 / 10 / 2017 “28/10/2017” female 23 Ankh Morpork
## contacts: pairs of cases with reported contacts
contacts_raw
## Source Case #ID case.id
## 1 51883d 185911
## 2 b4d8aa e4b0a2
## 3 39e9dc b4d8aa
## 4 39E9DC 601d2e
## 5 51883d 9AA197
## 6 39e9dc 51883d
## 7 39E9DC e399b1
## 8 b4d8aa AF0AC0
## 9 39E9DC 947e40
## 10 39E9DC 664549
## 11 39e9dc 605322
Examine these data and try to list existing issues.
What is wrong with the raw data?
The raw data, whilst very small in size, have a number of issues, including:
• capitalisation: mixture of upper case and lower cases
• dates: their format is very heterogeneous, so that dates are not readily useable
• special characters: accents, various separators and trailing / heading white spaces
• typos / inconsistent coding: gender is sometimes indicated in full words, sometimes abbreviated; location has some obvious abbreviations and typos
We will use the function clean_data() from the linelist package to clean these data. This function will:
• set all characters to lower case
• use _ as universal separator
• remove all heading and trailing separators
• replace all accentuated characters by their closest ASCII match
• detect date formats and convert entries to Date when relevant (see argument guess_dates)
• replace typos and recode variables (see argument wordlists)
For this last part, we need to load a separate file containing cleaning rules: phm_evd_cleaning_rules.xlsx.
Examine this file (and read the 'explanations' tab), import it as the other files, and save the resulting data.frame as an object called cleaning_rules. The output should look like:
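For instance (a sketch, following the earlier imports):
cleaning_rules <- rio::import("phm_evd_cleaning_rules.xlsx")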
cleaning_rules
## 1 f female sex
## 2 m male sex
## 3 .missing unknown sex
## 4 am ankh_morpork location
## 5 peudopolis pseudopolis location
## 6 .missing unknown location
We can now clean our data using clean_data; execute and interpret the following commands:
## clean linelist
linelist <- linelist_raw %>%
clean_data(wordlists = cleaning_rules) %>%
mutate_at(vars(contains("date")), guess_dates)
linelist
## id date_of_onset date_report sex age location
## 1 39e9dc 2017-10-10 2017-10-22 female 62 pseudopolis
## 2 664549 2017-10-16 2017-10-24 male 28 pseudopolis
## 3 b4d8aa 2017-10-17 2017-10-23 male 54 ankh_morpork
## 4 51883d 2017-10-18 2017-10-22 male 57 pseudopolis
## 5 947e40 2017-10-20 2017-10-25 female 23 ankh_morpork
## 6 9aa197 2017-10-20 2017-10-23 female 66 ankh_morpork
## 7 e4b0a2 2017-10-21 2017-10-24 female 13 ankh_morpork
## 8 af0ac0 2017-10-21 2017-10-26 male 10 ankh_morpork
## 9 185911 2017-10-21 2017-10-26 female 34 ankh_morpork
## 10 601d2e 2017-10-22 2017-10-28 unknown 11 ankh_morpork
## 11 605322 2017-10-22 2017-10-28 female 23 ankh_morpork
## 12 e399b1 2017-10-23 2017-10-28 female 23 ankh_morpork
## clean contacts
contacts <- clean_data(contacts_raw)
contacts
## source_case_id case_id
## 1 51883d 185911
## 2 b4d8aa e4b0a2
## 3 39e9dc b4d8aa
## 4 39e9dc 601d2e
## 5 51883d 9aa197
## 6 39e9dc 51883d
## 7 39e9dc e399b1
## 8 b4d8aa af0ac0
## 9 39e9dc 947e40
## 10 39e9dc 664549
## 11 39e9dc 605322
# Descriptive analyses
## A first look at contacts
Contact tracing is at the centre of an Ebola outbreak response. Using the function make_epicontacts in the epicontacts package, create a new epicontacts object called x. The result should look like:
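A possible call (a sketch; the contacts are directed, from the source of infection to the recipient):
x <- make_epicontacts(linelist = linelist,
                      contacts = contacts,
                      directed = TRUE)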
x
##
## /// Epidemiological Contacts //
##
## // class: epicontacts
## // 12 cases in linelist; 11 contacts; directed
##
## // linelist
##
## # A tibble: 12 x 6
## id date_of_onset date_report sex age location
## <chr> <date> <date> <chr> <dbl> <chr>
## 1 39e9dc 2017-10-10 2017-10-22 female 62 pseudopolis
## 2 664549 2017-10-16 2017-10-24 male 28 pseudopolis
## 3 b4d8aa 2017-10-17 2017-10-23 male 54 ankh_morpork
## 4 51883d 2017-10-18 2017-10-22 male 57 pseudopolis
## 5 947e40 2017-10-20 2017-10-25 female 23 ankh_morpork
## 6 9aa197 2017-10-20 2017-10-23 female 66 ankh_morpork
## 7 e4b0a2 2017-10-21 2017-10-24 female 13 ankh_morpork
## 8 af0ac0 2017-10-21 2017-10-26 male 10 ankh_morpork
## 9 185911 2017-10-21 2017-10-26 female 34 ankh_morpork
## 10 601d2e 2017-10-22 2017-10-28 unknown 11 ankh_morpork
## 11 605322 2017-10-22 2017-10-28 female 23 ankh_morpork
## 12 e399b1 2017-10-23 2017-10-28 female 23 ankh_morpork
##
## // contacts
##
## # A tibble: 11 x 2
## from to
## <chr> <chr>
## 1 51883d 185911
## 2 b4d8aa e4b0a2
## 3 39e9dc b4d8aa
## 4 39e9dc 601d2e
## 5 51883d 9aa197
## 6 39e9dc 51883d
## 7 39e9dc e399b1
## 8 b4d8aa af0ac0
## 9 39e9dc 947e40
## 10 39e9dc 664549
## 11 39e9dc 605322
You can easily plot these contacts, but with a little bit of tweaking (see ?vis_epicontacts) you can customise shapes by gender:
p <- plot(x, node_shape = "sex",
shapes = c(male = "male", female = "female", unknown = "question-circle"),
selector = FALSE)
## p
What can you say about these contacts?
## Looking at incidence curves
The first question PHM asks you is simply: how bad is it? Given that this is a terrible disease, with a mortality rate nearing 70%, there is a lot of concern about this outbreak getting out of control. The first step of the analysis lies in drawing an epicurve, i.e. a plot of incidence over time.
Using the package incidence, compute daily incidence based on the dates of symptom onset. Store the result in an object called i; the result should look like (pending some fine tweaking, which you should feel free to ignore if this is your first time using ggplot2):
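One way to obtain i (a minimal sketch, leaving the plotting tweaks aside):
i <- incidence(linelist$date_of_onset)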
i
## <incidence object>
## [12 cases from days 2017-10-10 to 2017-10-23]
##
## $counts: matrix with 14 rows and 1 columns
## $n: 12 cases in total
## $dates: 14 dates marking the left-side of bins
## $interval: 1 day
## $timespan: 14 days
## $cumulative: FALSE
We provide two items, large_txt and rotate_x_txt, which you can add to your ggplot object to mimic the graph above:
large_txt <- ggplot2::theme(text = ggplot2::element_text(size = 16),
axis.text = ggplot2::element_text(size = 12))
rotate_x_txt <- theme(axis.text.x = element_text(angle = 45,
hjust = 1))
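For instance, the epicurve could be redrawn with these theme tweaks as follows (a small sketch):
plot(i) +
  theme_bw() +
  large_txt +
  rotate_x_txt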
If you pay close attention to the dates on the x-axis, you may notice that something is missing. Indeed, the graph stops right after the last case, while the data should be complete until the 27th October 2017. You can remedy this using the argument last_date in the incidence function:
database_date <- as.Date("2017-10-27")
i <- incidence(linelist$date_of_onset, last_date = database_date)
i
## <incidence object>
## [12 cases from days 2017-10-10 to 2017-10-27]
##
## $counts: matrix with 18 rows and 1 columns
## $n: 12 cases in total
## $dates: 18 dates marking the left-side of bins
## $interval: 1 day
## $timespan: 18 days
## $cumulative: FALSE
plot(i, show_cases = TRUE) +
  theme_bw() +
  large_txt +
  rotate_x_txt +
  labs(title = sprintf("Epidemic curve as of the %s", database_date))
# Statistical analyses
## Log-linear model
The simplest model of incidence is probably the log-linear model, i.e. a linear regression on log-transformed incidences. In the incidence package, the function fit will estimate the parameters of this model from an incidence object (here, i). Apply it to the data and store the result in a new object called f. You can print f to derive estimates of the growth rate r and the doubling time, and add the corresponding model to the incidence plot:
f <- fit(i)
## Warning in fit(i): 10 dates with incidence of 0 ignored for fitting
f
## <incidence_fit object>
##
## $model: regression of log-incidence over time
##
## $info: list containing the following items:
## $r (daily growth rate):
## [1] 0.05352107
##
## $r.conf (confidence interval):
##           2.5 %    97.5 %
## [1,] -0.0390633 0.1461054
##
## $doubling (doubling time in days):
## [1] 12.95092
##
## $doubling.conf (confidence interval):
##         2.5 %    97.5 %
## [1,] 4.744158 -17.74421
##
## $pred: data.frame of incidence predictions (8 rows, 5 columns)
plot(i, show_cases = TRUE, fit = f) +
theme_bw() +
large_txt +
rotate_x_txt +
labs(title = "Epidemic curve and log-linear fit")
## the argument show_cases requires the argument stack = TRUE
How would you interpret this result? What criticism would you make of this model?
## Estimation of transmissibility (R)
### Branching process model
The transmissibility of the disease can be assessed through the estimation of the reproduction number R, defined as the number of expected secondary cases per infected case. In the early stages of an outbreak, and assuming no immunity in the population, this quantity is also the basic reproduction number R0, i.e. R in a fully susceptible population.
The package earlyR implements a simple maximum-likelihood estimation of R, using dates of onset of symptoms and information on the serial interval distribution. It is a simpler but less flexible version of the model by Cori et al (2013, AJE 178: 1505–1512) implemented in EpiEstim.
Briefly, earlyR uses a simple model describing incidence on a given day as a Poisson process determined by a global force of infection on that day:
$$x_t \sim \mathcal{P}(\lambda_t)$$
where $x_t$ is the incidence (based on symptom onset) on day $t$ and $\lambda_t$ is the force of infection. Noting $R$ the reproduction number and $w()$ the discrete serial interval distribution, we have:
$$\lambda_t = R * \sum_{s=1}^t x_s w(t - s)$$
The likelihood (probability of observing the data given the model and parameters) is defined as a function of R:
$$\mathcal{L}(x) = p(x | R) = \prod_{t=1}^T F_{\mathcal{P}}(x_t, \lambda_t)$$
where $F_{\mathcal{P}}$ is the Poisson probability mass function.
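To make the model concrete, here is a hand-rolled sketch of this likelihood in R (an illustration only, not earlyR's actual implementation; x is the vector of daily counts and w a vector with w[k] the probability of a serial interval of k days):
loglik_R <- function(R, x, w) {
  n_days <- length(x)
  ## force of infection on each day, built from all previous days' cases
  lambda <- vapply(2:n_days, function(t) {
    s <- seq_len(t - 1)
    R * sum(x[s] * w[t - s])  ## w must cover delays up to n_days - 1
  }, numeric(1))
  ## Poisson log-likelihood of the observed counts (day 1 has no past, so it is skipped)
  sum(dpois(x[-1], lambda, log = TRUE))
}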
### Looking into the past: estimating the serial interval from older data
As current data are insufficient to estimate the serial interval distribution, a colleague recommends using data from a past outbreak stored in the outbreaks package, as the dataset ebola_sim_clean. Load this dataset, and create a new epicontacts object as before, without plotting it (it is a much larger dataset). Store the new object as old_evd; the output should look like:
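One way to build it (a sketch; ebola_sim_clean is a list containing a linelist and a contacts data frame):
old_evd <- make_epicontacts(linelist = ebola_sim_clean$linelist,
                            contacts = ebola_sim_clean$contacts,
                            directed = TRUE)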
old_evd
##
## /// Epidemiological Contacts //
##
## // class: epicontacts
## // 5,829 cases in linelist; 3,800 contacts; directed
##
## // linelist
##
## # A tibble: 5,829 x 11
## id generation date_of_infecti… date_of_onset date_of_hospita…
## <chr> <int> <date> <date> <date>
## 1 d1fa… 0 NA 2014-04-07 2014-04-17
## 2 5337… 1 2014-04-09 2014-04-15 2014-04-20
## 3 f5c3… 1 2014-04-18 2014-04-21 2014-04-25
## 4 6c28… 2 NA 2014-04-27 2014-04-27
## 5 0f58… 2 2014-04-22 2014-04-26 2014-04-29
## 6 4973… 0 2014-03-19 2014-04-25 2014-05-02
## 7 f914… 3 NA 2014-05-03 2014-05-04
## 8 881b… 3 2014-04-26 2014-05-01 2014-05-05
## 9 e66f… 2 NA 2014-04-21 2014-05-06
## 10 20b6… 3 NA 2014-05-05 2014-05-06
## # … with 5,819 more rows, and 6 more variables: date_of_outcome <date>,
## # outcome <fct>, gender <fct>, hospital <fct>, lon <dbl>, lat <dbl>
##
## // contacts
##
## # A tibble: 3,800 x 3
## from to source
## <chr> <chr> <fct>
## 1 d1fafd 53371b other
## 2 cac51e f5c3d8 funeral
## 3 f5c3d8 0f58c4 other
## 4 0f58c4 881bd4 other
## 5 8508df 40ae5f other
## 6 127d83 f547d6 funeral
## 7 f5c3d8 d58402 other
## 8 20b688 d8a13d other
## 9 2ae019 a3c8b8 other
## 10 20b688 974bc1 other
## # … with 3,790 more rows
The function get_pairwise can be used to extract pairwise features of contacts based on attributes of the linelist. For instance, it could be used to test for assortativity, but also to compute delays between connected cases. Here, we use it to extract the serial interval:
old_si <- get_pairwise(old_evd, "date_of_onset")
summary(old_si)
## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
## 1.00 5.00 9.00 11.06 15.00 99.00 1684
old_si <- na.omit(old_si)
summary(old_si)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.00 5.00 9.00 11.06 15.00 99.00
hist(old_si, xlab = "Days after symptom onset", ylab = "Frequency",
main = "Serial interval (empirical distribution)",
col = "grey", border = "white")
What do you think of this distribution? Make the adjustments you deem necessary, and then use the function fit_disc_gamma from the package epitrix to fit a discretised Gamma distribution to these data. Your results should approximately look like:
si_fit <- fit_disc_gamma(old_si)
si_fit
## $mu
## [1] 11.48373
##
## $cv
## [1] 0.6429561
##
## $sd
## [1] 7.383534
##
## $ll
## [1] -6905.588
##
## $converged
## [1] TRUE
##
## $distribution
## A discrete distribution
## name: gamma
## parameters:
## shape: 2.41900859135364
## scale: 4.74728799288241
si_fit contains various information about the fitted delays, including the estimated distribution in the $distribution slot. You can compare this distribution to the empirical data in the following plot:
si <- si_fit$distribution
si
## A discrete distribution
## name: gamma
## parameters:
## shape: 2.41900859135364
## scale: 4.74728799288241
## compare fitted distribution to data
hist(old_si, xlab = "Days after symptom onset", ylab = "Frequency",
main = "Serial interval: fit to data", col = "salmon", border = "white",
nclass = 50, ylim = c(0, 0.07), prob = TRUE)
points(0:60, si$d(0:60), col = "#9933ff", pch = 20)
points(0:60, si$d(0:60), col = "#9933ff", type = "l", lty = 2)
Would you trust this estimation of the generation time? How would you compare it to actual estimates from the West African EVD outbreak (WHO Ebola Response Team (2014) NEJM 371:1481–1495), with a mean of 15.3 days and a standard deviation of 9.3 days?
### Back to the future: estimation of R0 in the current outbreak
Now that we have estimates of the serial interval based on a previous outbreak, we can use this information to estimate transmissibility of the disease (as measured by R0) in the current outbreak.
Using the estimates of the mean and standard deviation of the serial interval you just obtained, use the function get_R to estimate the reproduction number, specifying a maximum R of 10 (see ?get_R) and store the result in a new object R:
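A possible call, using the fitted values stored in si_fit (si_mean, si_sd and max_R are documented arguments of get_R):
R <- get_R(i, si_mean = si_fit$mu, si_sd = si_fit$sd, max_R = 10)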
You can visualise the results as follows:
R
##
## /// Early estimate of reproduction number (R) //
## // class: earlyR, list
##
## // Maximum-Likelihood estimate of R ($R_ml):
## [1] 2.042042
##
##
## // $lambda:
## NA 0.04088711 0.05343783 0.06188455 0.06668495 0.06850099...
##
## // $dates:
## [1] "2017-10-10" "2017-10-11" "2017-10-12" "2017-10-13" "2017-10-14"
## [6] "2017-10-15"
## ...
##
## // $si (serial interval):
## A discrete distribution
## name: gamma
## parameters:
## shape: 2.41900859135364
## scale: 4.74728799288241
plot(R) +
geom_vline(xintercept = 1, color = "#ac3973") +
theme_bw() +
large_txt
plot(R, "lambdas") +
theme_bw() +
large_txt
## Warning: Removed 1 rows containing missing values (position_stack).
The first figure shows the distribution of likely values of R, and the Maximum-Likelihood (ML) estimation. The second figure shows the global force of infection over time, with dashed bars indicating dates of onset of the cases.
Interpret these results: what do you make of the reproduction number? What does it reflect? Based on the last part of the epicurve, some colleagues suggest that incidence is going down and the outbreak may be under control. What is your opinion on this?
## Short-term forecasting
The function project from the package projections can be used to simulate plausible epidemic trajectories by simulating daily incidence using the same branching process as the one used to estimate R0 in earlyR. All that is needed is one or several values of R0 and a serial interval distribution, stored as a distcrete object.
Here, we illustrate how we can simulate 5 random trajectories using a fixed value of R0 = 2.04, the ML estimate of R0:
library(projections)
project(i, R = R$R_ml, si = si, n_sim = 5, n_days = 10, R_fix_within = TRUE)
##
## /// Incidence projections //
##
## // class: projections, matrix
## // 10 dates (rows); 5 simulations (columns)
##
## // first rows/columns:
## [,1] [,2] [,3] [,4] [,5]
## 2017-10-28 1 1 0 1 0
## 2017-10-29 1 3 1 5 0
## 2017-10-30 1 0 6 0 0
## 2017-10-31 2 1 1 0 0
## .
## .
## .
##
## // dates:
## [1] "2017-10-28" "2017-10-29" "2017-10-30" "2017-10-31" "2017-11-01"
## [6] "2017-11-02" "2017-11-03" "2017-11-04" "2017-11-05" "2017-11-06"
Using the same principle, generate 1,000 trajectories for the next 2 weeks, using a range of plausible values of R0. Note that you can use sample_R to obtain these values from your earlyR object. Store your results in an object called proj. Plotting the results should give something akin to:
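One way the trajectories could be generated (a sketch; sample_R draws plausible R values from the earlyR likelihood, and 14 days covers the requested two weeks; check ?project for the exact arguments of your version):
R_vals <- sample_R(R, 1000)
proj <- project(i, R = R_vals, si = si, n_sim = 1000, n_days = 14)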
## add_projections() from the projections package overlays proj on the epicurve
## (assumed here; check ?add_projections)
plot(i) %>%
  add_projections(proj) +
  theme_bw() +
  scale_x_date() +
  large_txt +
  rotate_x_txt +
  labs(title = "Short term forecasting of new cases")
## Scale for 'x' is already present. Adding another scale for 'x', which
## will replace the existing scale.
Interpret the following summary:
apply(proj, 1, summary)
## 2017-10-28 2017-10-29 2017-10-30 2017-10-31 2017-11-01 2017-11-02
## Min. 0.000 0.000 0.000 0.000 0.000 0.000
## 1st Qu. 1.000 0.750 1.000 1.000 1.000 1.000
## Median 1.000 1.000 1.000 1.000 2.000 2.000
## Mean 1.584 1.541 1.693 1.796 1.884 2.073
## 3rd Qu. 2.000 2.000 2.000 3.000 3.000 3.000
## Max. 8.000 9.000 8.000 8.000 10.000 9.000
## 2017-11-03 2017-11-04 2017-11-05 2017-11-06 2017-11-07 2017-11-08
## Min. 0.000 0.000 0.000 0.000 0.000 0.000
## 1st Qu. 1.000 1.000 1.000 1.000 1.000 1.000
## Median 2.000 2.000 2.000 2.000 3.000 3.000
## Mean 2.356 2.469 2.715 3.104 3.402 3.869
## 3rd Qu. 3.000 3.000 4.000 4.000 5.000 5.000
## Max. 10.000 16.000 19.000 20.000 24.000 34.000
## 2017-11-09 2017-11-10
## Min. 0.000 0.000
## 1st Qu. 1.000 2.000
## Median 3.000 3.000
## Mean 4.265 4.834
## 3rd Qu. 6.000 6.000
## Max. 33.000 40.000
apply(proj, 1, function(x) mean(x>0))
## 2017-10-28 2017-10-29 2017-10-30 2017-10-31 2017-11-01 2017-11-02
## 0.787 0.750 0.802 0.790 0.795 0.810
## 2017-11-03 2017-11-04 2017-11-05 2017-11-06 2017-11-07 2017-11-08
## 0.853 0.840 0.847 0.881 0.885 0.896
## 2017-11-09 2017-11-10
## 0.892 0.926
According to these results, what are the chances that more cases will appear in the near future? Is this outbreak being brought under control? Would you recommend scaling up / down the response?
## Follow-up…
For a follow-up on this outbreak, have a look at the second part of this simulated response, which includes a data update, genetic sequences, and the use of outbreak reconstruction tools. | 2021-12-09 07:37:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2916508913040161, "perplexity": 13702.763985225234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363689.56/warc/CC-MAIN-20211209061259-20211209091259-00431.warc.gz"} |
http://openstudy.com/updates/4de4d0154c0e8b0be513afd8 | ## anonymous 5 years ago simplify sq root of 40 and 200
1. anonymous
Square root of 40 is 2 root 10 and square root of 200 is 10 root 2
2. anonymous
does "and" mean add, multiply or are they two different problems?
3. anonymous
6.324 and 14.14
4. anonymous
i assume when it says "simplify" it means to write in simplest radical form. in which case lagrange has it
5. anonymous
$\sqrt{40}=\sqrt{4\times 10}=\sqrt{4}\times \sqrt{10}=2\sqrt{10}$
6. anonymous
it probably does not mean give a decimal approximation
7. anonymous
my mistake..sorry
8. anonymous
likewise $\sqrt{200}=\sqrt{100\times 2}=\sqrt{100}\times \sqrt{2}=10\sqrt{2}$
9. anonymous
$\sqrt{40}= \sqrt{2 x 2 x 2 x 5} = 2 \sqrt{10}$ $\sqrt{200}= \sqrt{2x2x2x5x5}$ | 2016-10-26 11:37:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7815436720848083, "perplexity": 4511.458456218428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720941.32/warc/CC-MAIN-20161020183840-00044-ip-10-171-6-4.ec2.internal.warc.gz"} |
http://zbmath.org/?q=an:1230.26003 | # zbMATH — the first resource for mathematics
On nonlocal fractional boundary value problems. (English) Zbl 1230.26003
Summary: We study a new class of non-local boundary value problems of nonlinear differential equations of fractional order. We extend the idea of a three-point non-local boundary condition $(x(1)=\alpha x(\eta),\ \alpha \in \mathbb{R},\ 0<\eta<1)$ to a non-local strip condition of the form $x(1)=\eta \int_{\nu}^{\tau} x(s)\,ds,\ 0<\nu<\tau<1$. In fact, this strip condition corresponds to a continuous distribution of the values of the unknown function on an arbitrary finite segment of the interval. In the limit $\nu \to 0,\ \tau \to 1$, this strip condition takes the form of a typical integral boundary condition. Some new existence and uniqueness results are obtained for this class of non-local problems by using standard fixed point theorems and Leray-Schauder degree theory. Some illustrative examples are also discussed.
##### MSC:
26A33 Fractional derivatives and integrals (real functions) 34A12 Initial value problems for ODE, existence, uniqueness, etc. of solutions 34A40 Differential inequalities (ODE) | 2014-03-09 15:05:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8371149301528931, "perplexity": 4984.170886561379}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999679238/warc/CC-MAIN-20140305060759-00041-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://rcd.ics.org.ru/authors/detail/134-galliano%20_valent | 0
2013
Impact Factor
# Galliano Valent
2 Place Jussieu, 75251 Paris Cedex 05, France
Laboratoire de Physique Mathématique de Provence
## Publications:
Valent G. Superintegrable Models on Riemannian Surfaces of Revolution with Integrals of any Integer Degree (I)
2017, vol. 22, no. 4, pp. 319-352
Abstract: We present a family of superintegrable (SI) systems which live on a Riemannian surface of revolution and which exhibit one linear integral and two integrals of any integer degree larger or equal to 2 in the momenta. When this degree is 2, one recovers a metric due to Koenigs. The local structure of these systems is under control of a linear ordinary differential equation of order $n$ which is homogeneous for even integrals and weakly inhomogeneous for odd integrals. The form of the integrals is explicitly given in the so-called "simple" case (see Definition 2). Some globally defined examples are worked out which live either in $\mathbb{H}^2$ or in $\mathbb{R}^2$.
Keywords: superintegrable two-dimensional systems, differential systems, ordinary differential equations, analysis on manifolds
Citation: Valent G., Superintegrable Models on Riemannian Surfaces of Revolution with Integrals of any Integer Degree (I), Regular and Chaotic Dynamics, 2017, vol. 22, no. 4, pp. 319-352
DOI: 10.1134/S1560354717040013
Valent G. Global Structure and Geodesics for Koenigs Superintegrable Systems
2016, vol. 21, no. 5, pp. 477-509
Abstract: We present a new derivation of the local structure of Koenigs metrics using a framework laid down by Matveev and Shevchishin. All of these dynamical systems allow for a potential preserving their superintegrability (SI) and most of them are shown to be globally defined on either ${\mathbb R}^2$ or ${\mathbb H}^2$. Their geodesic flows are easily determined thanks to their quadratic integrals. Using Carter (or minimal) quantization, we show that the formal SI is preserved at the quantum level and for two metrics, for which all of the geodesics are closed, it is even possible to compute the classical action variables and the point spectrum of the quantum Hamiltonian.
Keywords: superintegrable two-dimensional systems, analysis on manifolds, quantization
Citation: Valent G., Global Structure and Geodesics for Koenigs Superintegrable Systems, Regular and Chaotic Dynamics, 2016, vol. 21, no. 5, pp. 477-509
DOI: 10.1134/S1560354716050014
Valent G. On a Class of Integrable Systems with a Quartic First Integral
2013, vol. 18, no. 4, pp. 394-424
Abstract: We generalize, to some extent, the results on integrable geodesic flows on two dimensional manifolds with a quartic first integral in the framework laid down by Selivanova and Hadeler. The local structure is first determined by a direct integration of the differential system which expresses the conservation of the quartic observable and is seen to involve a finite number of parameters. The global structure is studied in some detail and leads to a class of models on the manifolds $\mathbb{S}^2$, $\mathbb{H}^2$ or $\mathbb{R}^2$. As special cases we recover Kovalevskaya's integrable system and a generalization of it due to Goryachev.
Keywords: integrable Hamiltonian systems, quartic polynomial integral, manifolds for Riemannian metrics
Citation: Valent G., On a Class of Integrable Systems with a Quartic First Integral, Regular and Chaotic Dynamics, 2013, vol. 18, no. 4, pp. 394-424
DOI: 10.1134/S1560354713040060
Valent G. Superintegrable models on riemannian surfaces of revolution with integrals of any integer degree (II) , , pp. | 2020-03-29 15:03:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7477772235870361, "perplexity": 583.8449814660838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370494349.3/warc/CC-MAIN-20200329140021-20200329170021-00106.warc.gz"} |
http://hackage.haskell.org/package/phonetic-languages-simplified-common-0.2.1.0/docs/Phonetic-Languages-Simplified-StrictVG.html | phonetic-languages-simplified-common-0.2.1.0: A simplified version of the phonetic-languages-functionality
Phonetic.Languages.Simplified.StrictVG
Description
Simplified version of the phonetic-languages-common package.
Synopsis
# Working with vectors
Arguments
:: (Eq a, Foldable t, InsertLeft t a, Monoid (t a), Monoid (t (t a)))
=> a -- ^ The first most common element in the "whitespace symbols" structure
-> (t a -> Vector a) -- ^ The function that is used internally to convert to the boxed Vector of a so that the function can process further the permutations
-> (t (t a) -> Vector (Vector a)) -- ^ The function that is used internally to convert to the boxed Vector of Vector of a so that the function can process further
-> (Vector a -> t a) -- ^ The function that is used internally to convert from the boxed Vector of a so that the function can process further
-> Vector (Vector Int) -- ^ The permutations of Int indices starting from 0 and up to n (n is probably less than 8).
-> t (t a) -- ^ Must be obtained as subG whspss xs
-> Vector (t a)
Arguments
:: (Eq a, Foldable t, InsertLeft t a, Monoid (t a), Monoid (t (t a)))
=> t a
-> t a
-> a -- ^ The first most common element in the whitespace symbols structure
-> (t a -> Vector a) -- ^ The function that is used internally to convert to the boxed Vector of a so that the function can process further the permutations
-> (t (t a) -> Vector (Vector a)) -- ^ The function that is used internally to convert to the boxed Vector of Vector of a so that the function can process further
-> (Vector a -> t a) -- ^ The function that is used internally to convert from the boxed Vector of a so that the function can process further
-> Vector (Vector Int) -- ^ The permutations of Int indices starting from 0 and up to n (n is probably less than 7).
-> t (t a) -- ^ Must be obtained as subG whspss xs
-> Vector (t a)
# Working with lists
Arguments
:: (Eq a, Foldable t, InsertLeft t a, Monoid (t a), Monoid (t (t a)))
=> a -- ^ The first most common element in the "whitespace symbols" structure
-> (t a -> [a]) -- ^ The function that is used internally to convert to the [a] so that the function can process further the permutations
-> (t (t a) -> [[a]]) -- ^ The function that is used internally to convert to the [[a]] so that the function can process further
-> ([a] -> t a) -- ^ The function that is used internally to convert to the needed representation so that the function can process further
-> [Vector Int] -- ^ The permutations of Int indices starting from 0 and up to n (n is probably less than 8).
-> t (t a) -- ^ Must be obtained as subG whspss xs
-> [t a]
Arguments
:: (Eq a, Foldable t, InsertLeft t a, Monoid (t a), Monoid (t (t a))) => t a -> t a -> a The first most common element in the whitespace symbols structure -> (t a -> [a]) The function that is used internally to convert to the [a] so that the function can process further the permutations -> (t (t a) -> [[a]]) The function that is used internally to convert to the [[a]] so that the function can process further -> ([a] -> t a) The function that is used internally to convert to the needed representation that the function can process further -> [Vector Int] The permutations of Int indices starting from 0 and up to n (n is probably less than 7). -> t (t a) Must be obtained as subG whspss xs -> [t a] | 2021-01-16 09:50:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5562817454338074, "perplexity": 2169.410275512613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703505861.1/warc/CC-MAIN-20210116074510-20210116104510-00008.warc.gz"} |
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Heyting_algebra | # All Science Fair Projects
## Science Fair Project Encyclopedia for Schools!
Search Browse Forum Coach Links Editor Help Tell-a-Friend Encyclopedia Dictionary
# Science Fair Project Encyclopedia
For information on any area of science that interests you,
enter a keyword (eg. scientific method, molecule, cloud, carbohydrate etc.).
Or else, you can start by choosing any of the categories below.
# Heyting algebra
In mathematics, Heyting algebras are special partially ordered sets that constitute a generalization of Boolean algebras. Heyting algebras arise as models of intuitionistic logic, a logic in which the law of excluded middle does not in general hold. Complete Heyting algebras are a central object of study in pointless topology.
## Formal definitions
A Heyting algebra H is a bounded lattice such that for all a and b in H there is a greatest element x of H such that $a \wedge x \le b$. This element is called the relative pseudo-complement of a with respect to b, and is denoted $a \Rightarrow b$ (or $a \rightarrow b$).
An equivalent definition can be given by considering the mappings $f_a: H \to H$ defined by $f_a(x)=a\wedge x$, for some fixed a in H. A bounded lattice H is a Heyting algebra iff all mappings fa are the lower adjoint of a monotone Galois connection. In this case the respective upper adjoints ga are given by $g_a(x)= a \Rightarrow x$, where $\Rightarrow$ is defined as above.
A complete Heyting algebra is a Heyting algebra that is a complete lattice.
In any Heyting algebra, one can define the pseudo-complement $\lnot x$ of some element x by setting $\lnot x = x \Rightarrow 0$, where 0 is the least element of the Heyting algebra.
An element x of a Heyting algebra is called regular if $x=\lnot\lnot x$. An element x is regular if and only if $x=\lnot y$ for some element y of the Heyting algebra.
## Properties
Heyting algebras are always distributive. This is sometimes stated as an axiom, but in fact it follows from the existence of relative pseudo-complements. The reason is that, being the lower adjoint of a Galois connection, $\wedge$ preserves all existing suprema. Distributivity in turn is just the preservation of binary suprema by $\wedge$.
Furthermore, by a similar argument, the following infinite distributive law holds in any complete Heyting algebra:
$x\wedge\bigvee Y = \bigvee \{x\wedge y : y \in Y\}$
for any element x in H and any subset Y of H.
Not every Heyting algebra satisfies the two De Morgan laws. However, the following statements are equivalent for all Heyting algebras H:
1. H satisfies both De Morgan laws.
2. $\lnot(x \wedge y)=\lnot x \vee \lnot y$, for all x, y in H.
3. $\lnot x \vee \lnot\lnot x = 1$, for all x in H.
4. $\lnot\lnot (x \vee y) = \lnot\lnot x \vee \lnot\lnot y$, for all x, y in H.
The pseudo-complement of an element x of H is the supremum of the set $\{ y : y \wedge x = 0\}$ and it belongs to this set (i.e. $x \wedge \lnot x = 0$ holds).
Boolean algebras are exactly those Heyting algebras in which $x = \lnot\lnot x$ for all x, or, equivalently, in which $x\vee\lnot x=1$ for all x. In this case, the element $a \Rightarrow b$ is equal to $\lnot a \vee b$.
In any Heyting algebra, the least and greatest elements 0 and 1 are regular.
The regular elements of any Heyting algebra constitute a Boolean algebra. Unless all elements of the Heyting algebra are regular, this Boolean algebra will not be a sublattice of the Heyting algebra, because its join operation will be different.
## Examples
• Every totally ordered set that is a bounded lattice is also a Heyting algebra, where $\lnot 0 = 1$ and $\lnot a = 0$ for all a other than 0.
• Every topology provides a complete Heyting algebra in the form of its open set lattice. In this case, the element $A \Rightarrow B$ is the interior of the union of Ac and B, where Ac denotes the complement of the open set A. Not all complete Heyting algebras are of this form. These issues are studied in pointless topology, where complete Heyting algebras are also called frames or locales.
• The Lindenbaum algebra of propositional intuitionistic logic is a Heyting algebra. It is defined from the set of all propositional logic formulae, ordered via logical entailment: for any two formulae F and G we have $F \le G$ iff $F \models G$. At this stage $\le$ is merely a preorder; identifying mutually entailing formulae turns it into the partial order of the desired Heyting algebra.
## Heyting algebras as applied to intuitionistic logic
Arend Heyting (1898-1980) was himself interested in clarifying the foundational status of intuitionistic logic when he introduced this type of structure. The case of Peirce's law illustrates the semantic role of Heyting algebras. No simple proof is known that Peirce's law cannot be deduced from the basic laws of intuitionistic logic.
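A concrete counter-model (an illustration added here; it is not part of the original article): take the three-element chain $H = \{0, \tfrac{1}{2}, 1\}$, in which $a \Rightarrow b = 1$ when $a \le b$ and $a \Rightarrow b = b$ otherwise. Evaluating Peirce's law at $P = \tfrac{1}{2}$ and $Q = 0$ gives $P \Rightarrow Q = 0$, hence
$((P \Rightarrow Q) \Rightarrow P) \Rightarrow P = (0 \Rightarrow \tfrac{1}{2}) \Rightarrow \tfrac{1}{2} = 1 \Rightarrow \tfrac{1}{2} = \tfrac{1}{2} \ne 1$
so Peirce's law does not take the top value in this Heyting algebra, and is therefore not intuitionistically valid.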
A Heyting algebra, from the logical standpoint, is essentially a generalization of the usual system of truth values. Amongst other properties, the largest element, called in logic $\top$, is analogous to 'true'. The usual two-valued logic system is the simplest example of a Heyting algebra, one in which the elements of the algebra are $\top$ (true) and $\bot$ (false). That is, in abstract terms, the two-element Boolean algebra is also a Heyting algebra.
Classically valid formulas are those formulas that have a value of $\top$ in this Boolean algebra under any possible assignment of true and false to the formula's variables — that is, they are formulas which are tautologies in the usual truth-table sense. Intuitionistically valid formulas are those formulas that have a value of $\top$ in any Heyting algebra under any assignment of values to the formula's variables.
One can construct a Heyting algebra in which the value of Pierce's law is not always $\top$. From what has just been said, this does show that it cannot be derived. See Curry-Howard isomorphism for the general context, of what this implies in type theory. | 2013-05-25 05:58:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 35, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8853592872619629, "perplexity": 305.443147132189}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00067-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://eprint.iacr.org/2014/727 | ## Cryptology ePrint Archive: Report 2014/727
The Q-curve Construction for Endomorphism-Accelerated Elliptic Curves
Benjamin Smith
Abstract: We give a detailed account of the use of $\mathbb{Q}$-curve reductions to construct elliptic curves over $\mathbb{F}_{p^2}$ with efficiently computable endomorphisms, which can be used to accelerate elliptic curve-based cryptosystems in the same way as Gallant--Lambert--Vanstone (GLV) and Galbraith--Lin--Scott (GLS) endomorphisms. Like GLS (which is a degenerate case of our construction), we offer the advantage over GLV of selecting from a much wider range of curves, and thus finding secure group orders when $p$ is fixed for efficient implementation. Unlike GLS, we also offer the possibility of constructing twist-secure curves. We construct several one-parameter families of elliptic curves over $\mathbb{F}_{p^2}$ equipped with efficient endomorphisms for every $p > 3$, and exhibit examples of twist-secure curves over $\mathbb{F}_{p^2}$ for the efficient Mersenne prime $p = 2^{127}-1$.
Category / Keywords: implementation / elliptic curve cryptosystem, implementation, number theory | 2018-10-16 14:00:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45812806487083435, "perplexity": 1869.008685704748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510754.1/warc/CC-MAIN-20181016134654-20181016160154-00154.warc.gz"} |
https://www.bayer-moebel.de/index.php/2022/07/elden-ring-patch-full-version-dlc-x64-updated/ | 089 / 641 1370 info@bayer-moebel.de
Seite wählen
## Elden Ring Features Key:
• THE NEW FANTASY ACTION RPG with world exploration and combat using the increased graphics and real-time processing that Black Chocobo provides.
• An Epic Drama Born from a Myth There are 16,000 possible stories that occur just in the Lands Between. Join the adventure!
• An Eve Online Type Online Game You can direct your party as you wish by controlling their location. You can create and participate in duels.
• Free Update system The game can be continually added to without expensiveness as the game data is separate from the components such as sound and music.
• A “Play Anywhere” Experience for Your PC, Android, and Browser The game is designed to work both on PC, Android, and a browser environment.
• ## STORY:
A new generation of Elden Ring participants arrives, but the next generation is still at its infancy and is not prepared. However, it is only when the time of the vessels to the Sun comes that the power of the Elden Ring will be fortified, and the fourth generation will be able to enter. The next generation is an era of a new age, and it is now more than ever that the Elden Ring is a symbol of perfection. However, young Elden Ring participants are very different from the past. They talk differently and their minds are different. It is a time of confusion. As the battle has begun, there was a great change… The next chapter begins in the Lands Between, and a recurring rumor has entered the lands, “The heart of a certain dragon lies before us”. Starmax, a young Elden Ring leader, accompanied by his bodyguard and his trusted companions goes into the Lands Between and begins his quest. He sees a strange dragon and leaves to go to the Brothers’-in-arms side in the southeast. He is given the old tale which says “just now, an attack on the Children of the Sun begins…” However, Starmax’s party is critically wounded. Because of the shadow of the attack on the Children of the Sun that day, Starmax sets out on a journey to find the Lords’-in-arms and demand their help. With the help of the Blood Moon’s personified spirit, the second planet’s lord�
5/5 – GameRevolution Like an age-old legend in its own right, the Elden Ring’s epic fantasy RPG will have you marking time in line since December 2018 and some of us are still not there yet. This game is everything an RPG should be: large, awe inspiring, gorgeous, and expansive. It’s a game that’ll have you strolling from town to town and entering massive, gargantuan dungeons. The only real downside is the massive grind that it takes to level up, but if you can get into that grind, you’ll be rewarded in spades. Replay Value Evolving Skills The main issue with this game isn’t the grind, it’s the game’s age. Trying to play this today is like trying to play a game from 1998 and all the types of rewards that await you are the epic in-game rewards that only the game is capable of giving you. Gameplay in this game is designed to make leveling go by a lot more quickly, but it’s hidden in plain sight. These power-ups exist entirely under the hood and new, more powerful skills and armors can be learned through the quest system and relationships in the game. When the game first came out, players didn’t really have a good handle on what they were actually doing other than the basic fact that new skills and armors were lying in wait for them. This has evolved substantially over the years and instead of having to hunt for these new rewards, they can be crafted in the game’s randomly generated worlds. Instead of being compared to a game from 1998, this game can be compared to the original PS2 generation of RPGs. Evolving Skills, Evolving Skills, Evolving Skills, Community One of the things that makes an RPG so special is the community. During my time with the game, I’ve seen many peoples‘ creations and conversations after the fact. Most of the game’s community centers on Reddit, though Discord has been picking up steam. There are also many games being made in the ARPG genre that borrow from this game. Most notable of those is Fate of the Norns and its continued success after almost two years of being in alpha. Out of all of the ARPG’s I’ve played, this game has one of the most active communities. As for the best ARPG for 2018, this game won’t be it, but it’s definitely the best one of 2018. bff6bb2d33
## What’s new:
Eaten Earth is the English localization of Arc the Lad (2006) for the Nintendo DS, available in Japan. At E3 2016, it was announced that the game would be available outside Japan for the Nintendo Switch.
Visit the Official Website.
Nintendo Labo – The Official Site.
Trailer.
0 CommentsSun, 16 Aug 2017 21:17:50 +0000>Q: Determine a variable from string in python say I have a file called „hello.py“ and I would like to determine a variable called a_name with from the string a_text in the same file. How would I do that? So it is something like: from hello_world import a_name a_name = „hello“ #variable # This question is quite cross-posted at mathoverflow.net, as I couldn’t find a suitable answer there. A:
Characterization of a dodecapeptide from the rat prostate gland that binds to the stromelysin-3 (pro-MMP-11) and inhibits its enzymatic activity. A dodecapeptide (YQNGLYGDN) derived from the N-terminus of the major metalloproteinase bone morphogenetic protein-1 (BMP-1) has been synthesized. The inhibition of rat stromelysin-3 (rMMP-11) enzymatic activity by this dodecapeptide is competitive with that of the peptide CGS27023 [cyclic (GGGGS)6], used as a reference in the inhibition assays. The minimal requirement for binding and inhibition of enzymatic activity is Y, suggesting the critical role of the homocysteine in the active site of the enzyme. Peptide YQNGLYGDN protected the peptide CGS27023 and collagen fibrils against degradation by proMMP-11, since no MMP-mediated degradations of CGS27023 and collagen were observed. Dodecapeptide YQNGLYGDN, but not its linear analogue YQGDN, competes with different peptides from BMP-1 for binding to purified recombinant rMMP-11. This result suggests that the N-terminus of BMP-1 adopted an MMP-11 binding site that is different from that of proMMP-13 and rMMP-1. Since both peptides YQNGLYGDN and YQGDN bind to rMMP-11 and inhibit proMMP-11 enzymatic activity, our data support a role of the N-terminus of BMP-1 in the regulation of MMP-11 proteolytic activity.Q: Mapping from a flat connection to a non-flat connection? I have a previous experience of working with flat connections in math. Given a flat connection $c$ I know how to get an associated non-flat connection $\tilde{c}$ (and vice versa). Now, I’m thinking what if I have an arbitrary curved connection $a$. How to get an associated non-curved connection $\tilde{a}$ (and vice versa)? Do any progress has been made in mapping from $c$ to $a$? I tried to google, but didn’t find
## How To Crack:
• Pixelsoft
• MOD DB
• Desura
• dota-game.forumactif.com
• THE NEW FANTASY ACTION RPG. Rise, Tarnished, and be guided by grace to brandish the power of the Elden Ring and become an Elden Lord in the Lands Between. • A Vast World Full of Excitement A vast world where open fields with a variety of situations and huge dungeons with complex and three-dimensional designs are seamlessly connected. As you explore, the joy of discovering unknown and overwhelming threats await you, leading to a high sense of accomplishment. • Create your Own Character In addition to customizing the appearance of your character, you can freely combine the weapons, armor, and magic that you equip. You can develop your character according to your play style, such as increasing your muscle strength to become a strong warrior, or mastering magic. • An Epic Drama Born from a Myth A mult
https://wakelet.com/wake/P2XM9scLZT39zReQLvtGH
https://wakelet.com/wake/XZdvRIuHwU6XnSjroaj4b
https://wakelet.com/wake/ftmeFvWIr2PQ7SPgrDLOM
https://wakelet.com/wake/MeA7aQ9rxv3hfREnw0HqH
https://wakelet.com/wake/61H_247PzLRv3lzZE2AIC
## System Requirements:
– Windows XP SP2 or newer – 1 GB RAM – Video card with 4MB or higher texture mode and 1GB RAM (example: ATI X800 XT 512 MB, NVIDIA 7600 GT 512 MB) – Running Crystal Space, Windows XP SP2 or newer – DVD Drive: Region 1 NTSC Only – DVD Drive: Region 2 PAL Only – DVD Drive: Region 3 PAL Only – DVD Drive: Region 4 PAL Only – DVD Drive: Region 5 PAL Only – DVD Drive:
http://www.publicpoetry.net/2022/07/elden-ring-deluxe-edition-mem-patch-dlc-for-pc-march-2022/ | 2022-08-09 14:28:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17112527787685394, "perplexity": 3432.2086801679093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570977.50/warc/CC-MAIN-20220809124724-20220809154724-00530.warc.gz"} |
https://dsp.stackexchange.com/questions/55328/potential-issues-arising-from-too-stable-discretization | # Potential issues arising from too stable discretization
When numerically simulating a system, usually some kind of discretization is necessary, obtained by some kind of z-transform, such as, for instance, the bilinear transform $$s\mapsto \frac{2}{\triangle t}\frac{z-1}{z+1}$$, which is kind of the same as using the Trapezoidal rule to approximate integration. It also has the very nice property that it preserves BIBO-(in)stability from the s into the z-domain. However, when it is implicitly used as a differentiator, there is a very real possibility of unwanted numerical oscillations.
Now take, for example, the backward Euler method, corresponding to the substitution $$s\mapsto \frac{z-1}{\triangle t \, z}$$ (note: $$\frac{z-1}{\triangle t}$$ alone would be the forward Euler substitution). This method reduces numerical oscillations quite effectively, which would (ignoring for now the slower convergence) seem to be quite a big advantage. However, if we look at the BIBO-stability regions before and after the z-transformation, this method leads to an effectively increased stability region, i.e. there are some transfer functions that are unstable in the s-domain but stable in the z-domain.
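To make the enlarged stability region explicit (a short derivation added for illustration; it is not in the original question): solving the backward Euler substitution for $$z$$ gives
$$z = \frac{1}{1 - s \triangle t},$$
so the discrete pole satisfies $$|z| < 1$$ exactly when $$|1 - s\triangle t| > 1$$. Every $$s$$ with $$\mathrm{Re}(s) < 0$$ fulfils this for any $$\triangle t > 0$$ (backward Euler is A-stable), but so do some unstable continuous poles, e.g. any real $$s > 2/\triangle t$$. Such poles are even strongly damped in the discrete model although the underlying continuous system diverges.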
1) That the latter method is able to dampen numerical oscillations seems intuitively connected to a greater "numerical stability". However, as I understand it, not all systems that show numerical oscillations after using the bilinear transform are inherently unstable. A simple RL-circuit with transfer function $$\frac{U}{I}=\frac{1}{R+Ls}$$ with a current source attached will, for instance, show numerical oscillations when the source is suddenly switched on or off. So are these phenomena - increased stability domain after the transformation, and damping of numerical oscillations - connected?
2) In literature, I do find a lot about choosing a suitable z-transformation such that the resulting discrete system is stable. However, there was no regard given to possible cases where the original system might have been unstable. To me, this seems like something of an oversight - when simulating an unstable system, I would expect the discrete system to also be unstable. Because, well, the output signals for stable and unstable systems would possibly differ a lot. Have I misunderstood something there? Or is there a reason why one can safely assume the systems in question (namely, electrical power systems) to be stable?
I do know that many algorithms (e.g. EMTP and all related) use a mix of the above, but while this decreases the difference in stability regions, it is still there.
Edit: By numerical oscillations using the Trapezoidal rule I mean phenomena such as those outlined in http://www.ece.uidaho.edu/ee/power/ECE524/spring14/Lectures/L39/numerosc.pdf , or similar (found using google, there seem to be different texts about the same).
• I have honestly no idea what you are talking about and what do you mean by "numerical oscillations". When you discretize you need to pick a sample rate, which also means that you need to pick a maximum bandwidth and some low pass filter, otherwise you get aliasing. The exact nature of the low pass may add some ringing that you wouldn't necessarily see in a time continuous system, but that's not numerical and a principle limitation of sampling analog systems. – Hilmar Feb 6 at 14:35
• @Hilmar I have added a link to a document outlining one example for such oscillations before. These oscillations occur in the discrete, but obviously not in the continuous model. – Some Math Student Feb 6 at 15:03 | 2019-04-25 03:00:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7556565403938293, "perplexity": 520.167116368659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578678807.71/warc/CC-MAIN-20190425014322-20190425040322-00434.warc.gz"} |
https://forums.bohemia.net/forums/topic/105749-battleye-installers-a2oa-a2-rft-a2/?page=8 | # BattlEye Installers (A2:OA, A2: RFT, A2)
It's ok now, and thanks for your help :yay:
Ok, I'm getting a bit angry. And I never do. Here's my rant.
I have A2 CO+PMC+BAF
Tonight (after patching the game to 1.59) I had some time and tried to catch a public MP game (I never do). But right after loading the mission, the GameSpy list comes up again, without any message. I thought: might be the server or the mission. I tried maybe 10 other servers, with the same result: booted to the GameSpy list without any message. I had no clue what could be happening. Even the RPT didn't state anything fatal. Then, when one server was setting up a new game, I asked the players to watch for a message when the game starts. When re-joining, they tell me it says "BattlEye initialisation failed". Ok, so I know a bit more. I fire up Google and end up at the BattlEye site. I download the dlls and overwrite the one in my Arma2/Battleye folder as instructed. On joining the game I get a message that I'm kicked for corrupt memory #1. Sigh.
I find the battleye folder in Arma2/Expansion, and overwrite that file too. No luck.
Then I end up in this thread. Ran the installer for OA. No luck.
Found the dll's in the appdata folder. removed them. No luck.
Then I decided to uninstall and remove anything regarding battleye: Uninstalling from the remove programs menu gave some errors. On Inspection, nothing changed in the Arma2 folder. I hit the uninstaller exes manually, and removed the folder in appdata.
Then I ran the installer from this thread again (now it's not silent?), and guess what: it's fixed!
So I fixed the problem, you think, so what's the problem?
Well, the problem is the way I had to struggle through cryptic messages and apparently wrong error messages (my memory isn't corrupted after all).
Why do I have to ask people on the server to watch MY error message? I'm the one having the problem, not them?
Why does it throw a message that my system is broken, while the installer just didn't install it in the right way?
All in all, resolving this took my complete evening and left me with 15 minutes of playing until bedtime. On top of that, I consider myself an advanced user, and I can't imagine how an average Joe would have figured this out. He might end up giving up and never bother with the game again...
Now I know BIS, and I know GUIs and user-friendliness are not your strongest point, but please, think about the user experience for a change. For example: a silent installer? How hard is it to make a pop-up that says "Update finished, press OK"? You'd save yourself a lot of questions here. And I could go on like this for a while...
Greetings,
rundll.exe
the issue with the uninstall not working from the OS software panel is known and will be resolved in a future update of the BE installers ...
you need to use the uninstaller.exe directly
sorry for this trouble
Yep the ingame error handling needs to be improved.
It is a no go that you don't even get to know why you are booted.
the issue with the uninstall not working from the OS software panel is known and will be resolved in a future update of the BE installers ...
you need to use the uninstaller.exe directly
sorry for this trouble
Ok, that's good to know. And while you're at it, maybe let the user know what has actually changed after running the installer (updated, installed, or nothing).
And maybe make the installer replace the files, even if they are the same version. It seems wrong installs are not fixed by a re-install, but they are when you uninstall first.
By the way, the 1.59 patch seemed to update the BE files too (looking at the file date), but didn't fix the problem. How is that possible?
Yep the ingame error handling needs to be improved.
It is a no go that you don't even get to know why you are booted.
This is my main point indeed.
rundll.exe, are you using the -profiles= command line?
got games installed in same directory ?
what distribution?
which BE were you inspecting?
A2 one in \BattlEye
or
OA one in \Expansion\BattlEye
?
I'm running the DVD (505) version of A2 and the Sprocket version of OA, installed in the same directories. I don't use the -profiles command line.
First I was inspecting the normal Arma2/Battleye folder because I didn't see there was a separate dll for OA on the BattlEye website. I copied the A2 BattlEye dll to there. Later I copied the dll from Arma2/battleye to Arma2/expansion/battleye, which obviously didn't work either. (my error)
After that I found this thread, and tried the installer, which only worked after uninstalling everything manually.
In my view the installer should overwrite the files in any case, except for when a valid (OA and A2 in correct folder) newer version is found.
Anyway, thanks for taking the time to look into this. I hope a more "aggressive" installer can help other people fix the BattlEye mess more quickly.
Edited by rundll.exe
In my view the installer should overwrite the files in any case, except for when a valid (OA and A2 in correct folder) newer version is found.
In fact, it does that already. There is no check on the files at all, they are just overwritten.
I think the problem in your case was having the wrong BEClient.dll version (the one for ArmA 2) in the OA appdata folder, which isn't fixed by simply uninstalling and reinstalling. As you said, you also removed the appdata BE folder and that eventually fixed your problem.
In fact, it does that already. There is no check on the files at all, they are just overwritten.
I think the problem in your case was having the wrong BEClient.dll version (the one for ArmA 2) in the OA appdata folder, which isn't fixed by simply uninstalling and reinstalling. As you said, you also removed the appdata BE folder and that eventually fixed your problem.
You might be right on that one. But then I wonder how it ended up there. I don't remember fooling around with the BEclient.dll before.
So why doesn't the game overwrite that dll from the install folder each time you launch it?
So why doesn't the game overwrite that dll from the install folder each time you launch it?
Ah so the updates only get applied to that appdata one. Something with user rights I guess?
Anyway, thx for clearing that up.
Maybe the installer should even overwrite that one? Next MP game it gets updated anyway.
I had the system user that runs the A2 client belonging to the Administrators group. Now this seems no longer required, but would it be when the server requests my client to upgrade its BEClient.dll?
Known issue:
- uninstall from the OS panel doesn't work; run "uninstall.exe" in the \BattlEye\ directory directly
Known issue:
- uninstall from the OS panel doesn't work; run "uninstall.exe" in the \BattlEye\ directory directly
and for Linux???
and for Linux???
seriously, since when do you need a BE installer for Linux :confused:
seriously, since when do you need a BE installer for Linux :confused:
Never, but I was asking if a new version is also available for Linux... All the communication is always Windows, but at least 50% of the servers are Linux based...
The BE dlls have always been available directly on the BE page.
The Windows installers are only a convenience for users.
Never, but I was asking if a new version is also available for Linux... All the communication is always Windows, but at least 50% of the servers are Linux based...
363 Linux servers, 6751 windows. Hardly seems 50%, more like 5%.
Only taking deddies: 363 Linux and 3341 windows. About 10%
Those numbers are high because it caches servers for 1 week.
Edited by Sickboy
363 Linux servers, 6751 windows. Hardly seems 50%, more like 5%.
Only taking deddies: 363 Linux and 3341 windows. About 10%
Those numbers are high because it caches servers for 1 week.
And it caches each Windows server raised by a kid only for playing around... concentrate on the top 100 always-available servers and your figures will look different.
Calm down lads - we have already got an SQF vs SQS fight going on in the editing thread. We don't need a Linux/Windows fight as well! If you don't stop I'll tell you why Coldfusion is better than PHP ;-)
This post has been updated with the outcome of all the suggestions from those trying to help me. This should save anyone who may want to help me from trawling through the posts that follow.
Updated date 29/8/2011 @ 13:28 UK time
I am getting kicked from OA game servers with the message "Battle eye client not responding"
There aren't a lot of vanilla OA game servers to test on; however, I have tried 6 so far (only those displaying a green icon in GameSpy)
Failed on 4 and was okay on 2, which did have Battleye initialised according to the Gamespy filter
I am able to connect to the servers okay, download the missions, enter the briefing and play the game for approximately 30 seconds before getting kicked
BattlEye client, when initialised, reports running version 1.146
My Ping to my server is 27ms
The bandwidth reported ingame for my client is 330 to 350
My desync is reported as 0
I have no connection problems for web browsing, download speeds and upload speeds are good for the UK
My router is a Netgear N150 DGN1000. which I don't run using the wireless connection
*******************************************************
Some history
I am returning to play Arrowhead having taken 6 months out from the game
My ISP and connection have remained the same
I'm not sure if I was running with the same router as I was 6 months ago
However I am running a new system
• Windows 7 64 bit
Asus P8P67standard Revision B3 m/board (running latest BIOS Beta 1850)
I7 processor
NVidia GTX580 GPU
8GB Ram
Vertex 3 ssd for my O/S drive running the latest firmware (v2.11)
Western Digi WD7502AAEX SATA drive for my game
The system is running at stock speeds, is stable with no overclocking etc
*******************************************************
Steps I have taken to try and solve the issue, which have all failed unless otherwise stated
*******************************************************
Clientside
• Reinstalled the entire system twice (O/S, applications etc)
All drivers are fully up to date
Windows is fully updated
reinstalled ArmA2 and Op Arrowhead 3 times (DVD versions)
Uninstalled and reinstalled Battleye using the latest installers
Run with UAC turned off
Run with Windows Firewall OFF
Run with Router firewall OFF
Run with no Anti Virus agent installed
Run ArmA2 v1.10 on a clean ArmA2 installed server
Run OA as a clean vanilla game
Run OA using the latest BETA (83780)
Run OA with and without my BAF DLC installed
Preloaded the island before connecting to the server
Connected to an empty server
Replaced the network cable between the client machine and the router
Attempted to play on standard BIS created missions
Manually copied over the BattlEye client .dlls from the BattlEye downloads page to their respective folders
Uninstalled and reinstalled Battleye using the latest Battleye Installers listed on the BI forums
Ran on a different client with different specs from my home connection
Ran a LAN server at home on one machine then connected to it from another client (this tested successful)
• I did notice 2 additional BattlEye text messages once the mission started on the LAN client, which I didn't notice when trying to play online.
• 1) Server client version check message
2) GUID information
Manually patched the BE .dll files
• ArmA2 BE Client for Windows (32-bit) : to the root ArmA2 battleye folder
Operation Arrowhead BE Client for Windows (32-bit) : to the root ArmA2 \Expansions \ battleye folder
Operation Arrowhead BE Client for Windows (32-bit) : to my custom profile\ battleye folder
*******************************************************
Serverside
I have my own dedicated server, so have full access to the backend
• 1) Overwrote the Server battleye .dll files with the latest from the Battleye download page
2) added the following line to the BEserver.cfg
MaxPing 1000
Edited the OA server.cfg with the following attempts
a) regularCheck = "{}";
b) //regularCheck = "{}";
NB>> No other members have Battleye issues connecting to the server
I have 2.5 minute serverside wireshark log available of my connection and then battleye kick from the zeus server
*******************************************************
*******************************************************
Some logs to view
*******************************************************
Here is my client .RPT log
Shows my startup shortcuts etc
=====================================================================
== D:\GAMES\ArmA 2\Expansion\beta\arma2oa.exe
== "D:\GAMES\ArmA 2\Expansion\beta\arma2oa.exe" -mod=Expansion\beta;Expansion\beta\Expansion -nosplash -profiles=d:\games\armaprofiles
=====================================================================
Exe timestamp: 2011/08/27 15:11:55
Current time: 2011/08/27 16:16:32
Version 1.59.83780
Item str_disp_server_control listed twice
Warning: looped for animation: ca\wheeled\data\anim\uaz_cargo01_v0.rtm differs (looped now 0)! MoveName: kia_uaz_cargo02
Warning: looped for animation: ca\wheeled\data\anim\uaz_cargo01_v0.rtm differs (looped now 1)! MoveName: uaz_cargo02
Here is the server log
Displays the battleye server version and my "kick" log
This server was specially set up for V2 vanilla signature checking
(******** is omitted text for security reasons)
16:16:24 BattlEye Server: Initialized (v1.122)
16:16:24 Host identity created.
16:18:36 Terox uses modified data file - Arma 2;Arma 2: Operation Arrowhead;Arma 2: British Armed Forces;Arma 2: Private
16:18:36 Player Terox connecting.
16:18:37 Player Terox connected (id=**********).
16:19:00 Roles assigned.
16:19:50 BattlEye Server: Player #0 Terox (**********) - GUID: **********************
16:19:50 Game started.
16:20:13 Player Terox kicked off by BattlEye: Client not responding
16:20:14 Player Terox disconnected.
16:22:18 Game finished.
class Session
{
mission="COOP 04 Scud Busters";
island="Desert_E";
gameType="COOP";
duration=148.028;
class Player1
{
name="__SERVER__";
killsInfantry=0;
killsSoft=0;
killsArmor=0;
killsAir=0;
killsPlayers=0;
customScore=0;
killsTotal=0;
killed=0;
};
class Player2
{
name="Terox";
killsInfantry=0;
killsSoft=0;
killsArmor=0;
killsAir=0;
killsPlayers=0;
customScore=0;
killsTotal=0;
killed=0;
};
};
16:22:18 All users disconnected, waiting for users.
Here is my Tracert report
Microsoft Windows [Version 6.1.7601]
C:\Users\Terox>tracert ********** obscured for security
Tracing route to no.rdns-yet.ukservers.com ********** obscured for security
over a maximum of 30 hops:
1 1 ms 1 ms 1 ms ********** obscured for security
2 24 ms 24 ms 24 ms lo98.sc-acc-sip-2.as9105.net [212.74.102.15]
3 23 ms 22 ms 22 ms 10.72.4.49
4 24 ms 24 ms 22 ms 10.72.9.223
5 22 ms 22 ms 22 ms xe-8-3-0.bragg002.loh.as13285.net [80.40.155.19]
6 26 ms 25 ms 24 ms xe-7-3-0.scr001.loh.as13285.net [80.40.155.66]
7 27 ms 25 ms 24 ms xe-10-2-0-scr010.sov.as13285.net [78.144.0.216]
8 27 ms 24 ms 24 ms ae1.rt0.sov.uk.goscomb.net [195.66.226.226]
9 25 ms 24 ms 24 ms xe-0-0-1.rt0.the.uk.goscomb.net [46.17.60.33]
10 26 ms 24 ms 24 ms uk-dedicated-servers-85.gw.goscomb.net [77.75.10
4.85]
11 26 ms 26 ms 27 ms no.rdns-yet.ukservers.com [77.74.193.230]
12 28 ms 28 ms 27 ms bsd1.ukservers.com [78.110.166.122]
13 28 ms 27 ms 27 ms ********** obscured for security
Trace complete.
C:\Users\Terox>
*******************************************************
I am completely at a loss. I have a dedicated server worth in excess of £2000 and I can't play on the damned thing unless I disable BattlEye, which isn't an ideal solution.
So any help here would be greatly appreciated
UPDATE
I submitted the server wireshark log and the client wireshark log to Donut (aka Mangoo)
His findings were
SERVER LOG
No issues found
CLIENT LOG
Issues found between the client and the router.
Following errors many times a second
Header checksum: 0x0000 [incorrect, should be 0xef38 (maybe caused by "IP checksum offload"?)]
Terox, could it be that your DGN1000 router has the same problem that several others have had with the same model? See this thread
The suggested solution there was to return it and get a different router.
I shall be replacing the router and then will report back on whether that fixed the issue or not
Edited by Terox
forgot to state "Windows is fully updated"
Does it work for Arma2 only? (no OA)
Try by loading an A2 mission on the server and run the A2 exe.
Try to disable regularCheck on the ZEUS server:
regularCheck = "{}";
It might help/be related.
Did you try connecting directly to the internet, without router etc? Perhaps some types of the traffic doesn't pass (properly)?
Thanks for the feedback so far.
Does it work for Arma2 only? (no OA)
Try by loading an A2 mission on the server and run the A2 exe.
Ran an ArmA2server.exe on zeus at v1.10
Client running 1.10, same problem
Try to disable regularCheck on the ZEUS server:
It might help/be related.
Server ran with following configs
a) regularCheck = "{}";
b) //regularCheck = "{}";
Did you try connecting directly to the internet, without router etc? Perhaps some types of the traffic doesn't pass (properly)?
How can I connect to the internet without the use of my router/modem?
I have run the router with all security disabled and allowing any/all inbound and outbound traffic
This includes disabling "SIP ALG" whatever that is and Port Scan and DOS Protection
I've also tried using an older backup of the ArmA server bandwidth settings that I know was okay 6 months ago.
I'm still well and truly stuck.
I have a 2.5 minute wireshark log of my connection to the server available. I don't know what I am looking at, but perhaps $able may be able to decipher it and tell me what's going on
Edited by Terox | 2022-11-28 15:49:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18211722373962402, "perplexity": 8760.616276958588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710533.96/warc/CC-MAIN-20221128135348-20221128165348-00078.warc.gz"} |
https://www.physicsforums.com/threads/confusion-about-how-kcl-and-kvl-are-used-for-diode-circuits.901746/ | # Confusion about how KCL and KVL are used for diode circuits
## Homework Statement
This is just one of the example problems in my book which is already solved for me, but I don't really understand their solution which I'll post here:
They start out by assuming Vin is very negative, which turns D1 on and makes Vout = VD,on + Vin. Then they solve for the currents.
KCL/KVL
## The Attempt at a Solution
What I am confused on is that if the diode is turned on, then the branch with the diode is a short circuit, so why is there current going in the R2 branch?
The book solves all the problems like this one the same way, but I don't really get it. Are we assuming the diode is actually on the brink of turning on/off, so it allows current through it, but it's actually not a short circuit yet?
gneill
Mentor
What I am confused on is that if the diode is turned on, then the branch with the diode is a short circuit, so why is there current going in the R2 branch?
What does VD.on represent? Is the diode treated as ideal or non-ideal in these problems?
cnh1995
Homework Helper
Gold Member
if the diode is turned on, then the branch with the diode is a short circuit,
No. You are asked to use the constant voltage model for the diode.
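To make that concrete, here is a hypothetical single-loop example (the values and topology are illustrative, not the book's circuit): under the constant voltage model, a conducting diode keeps a fixed drop rather than becoming a short, so the resistor still carries a finite current.

```python
V_s, R, VD_on = 5.0, 1000.0, 0.7   # assumed source, resistor, diode drop
I = (V_s - VD_on) / R              # KVL around the loop with the diode "on"
print(I)                           # 0.0043 A: finite, since 0.7 V remains across the diode
```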
Oh wow, not sure why I didn't realize it until you guys said it, but now it makes sense, thanks! | 2020-11-23 16:49:36 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8328641057014465, "perplexity": 519.5701382264434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141163411.0/warc/CC-MAIN-20201123153826-20201123183826-00442.warc.gz"} |
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition/chapter-4-inverse-exponential-and-logarithmic-functions-4-3-logarithmic-functions-4-3-exercises-page-446/76 | ## Precalculus (6th Edition)
$\log _{6}\left( 7m+3q\right)$ cannot be simplified; there is no logarithm property for the logarithm of a sum. | 2019-01-21 05:09:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9103595614433289, "perplexity": 13543.00090727759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583763149.45/warc/CC-MAIN-20190121050026-20190121072026-00182.warc.gz"}
https://pyro.ai/examples/csis.html | # Compiled Sequential Importance Sampling¶
Compiled sequential importance sampling [1], or inference compilation, is a technique to amortize the computational cost of inference by learning a proposal distribution for importance sampling.
The proposal distribution is learned to minimise the KL divergence between the model and the guide, $$\rm{KL}\!\left( p({\bf z} | {\bf x}) \lVert q_{\phi, x}({\bf z}) \right)$$. This differs from variational inference, which would minimise $$\rm{KL}\!\left( q_{\phi, x}({\bf z}) \lVert p({\bf z} | {\bf x}) \right)$$. Using this loss encourages the approximate proposal distribution to be broader than the true posterior (mass covering), whereas variational inference typically learns a narrower approximation (mode seeking). Guides for importance sampling are usually desired to have heavier tails than the model (see this stackexchange question). Therefore, the inference compilation loss is usually more suited to compiling a guide for importance sampling.
Another benefit of CSIS is that, unlike many types of variational inference, it has no requirement that the model is differentiable. This allows it to be used for inference on arbitrarily complex programs (e.g. a Captcha renderer [1]).
This example shows CSIS being used to speed up inference on a simple problem with a known analytic solution.
[1]:
import torch
import torch.nn as nn
import torch.nn.functional as F
import pyro
import pyro.distributions as dist
import pyro.infer
import pyro.optim
import os
smoke_test = ('CI' in os.environ)
n_steps = 2 if smoke_test else 2000
## Specify the model:¶
The model is specified in the same way as any Pyro model, except that a keyword argument, observations, must be used to input a dictionary with each observation as a key. Since inference compilation involves learning to perform inference for any observed values, it is not important what the values in the dictionary are. 0 is used here.
[2]:
def model(prior_mean, observations={"x1": 0, "x2": 0}):
x = pyro.sample("z", dist.Normal(prior_mean, torch.tensor(5**0.5)))
y1 = pyro.sample("x1", dist.Normal(x, torch.tensor(2**0.5)), obs=observations["x1"])
y2 = pyro.sample("x2", dist.Normal(x, torch.tensor(2**0.5)), obs=observations["x2"])
return x
## And the guide:¶
The guide will be trained (a.k.a. compiled) to use the observed values to make proposal distributions for each unconditioned sample statement. In the paper [1], a neural network architecture is automatically generated for any model. However, for the implementation in Pyro the user must specify a task-specific guide program structure. As with any Pyro guide function, this should have the same call signature as the model. It must also encounter the same unobserved sample statements as the model. So that the guide program can be trained to make good proposal distributions, the distributions at sample statements should depend on the values in observations. In this example, a feed-forward neural network is used to map the observations to a proposal distribution for the latent variable.
pyro.module is called when the guide function is run so that the guide parameters can be found by the optimiser during training.
[3]:
class Guide(nn.Module):
def __init__(self):
super().__init__()
self.neural_net = nn.Sequential(
nn.Linear(2, 10),
nn.ReLU(),
nn.Linear(10, 20),
nn.ReLU(),
nn.Linear(20, 10),
nn.ReLU(),
nn.Linear(10, 5),
nn.ReLU(),
nn.Linear(5, 2))
def forward(self, prior_mean, observations={"x1": 0, "x2": 0}):
pyro.module("guide", self)
x1 = observations["x1"]
x2 = observations["x2"]
v = torch.cat((x1.view(1, 1), x2.view(1, 1)), 1)
v = self.neural_net(v)
mean = v[0, 0]
std = v[0, 1].exp()
pyro.sample("z", dist.Normal(mean, std))
guide = Guide()
## Now create a CSIS instance:¶
The object is initialised with the model; the guide; a PyTorch optimiser for training the guide; and the number of importance-weighted samples to draw when performing inference. The guide will be optimised for a particular value of the model/guide argument, prior_mean, so we use the value set here throughout training and inference.
[4]:
optimiser = pyro.optim.Adam({'lr': 1e-3})
csis = pyro.infer.CSIS(model, guide, optimiser, num_inference_samples=50)
prior_mean = torch.tensor(1.)
## Now we ‘compile’ the instance to perform inference on this model:¶
The arguments given to csis.step are passed to the model and guide when they are run to evaluate the loss.
[5]:
for step in range(n_steps):
csis.step(prior_mean)
## And now perform inference by importance sampling:¶
The compiled guide program should now be able to propose a distribution for z that approximates the posterior, $$p(z | x_1, x_2)$$, for any $$x_1, x_2$$. The same prior_mean is entered again, as well as the observed values inside observations.
[6]:
posterior = csis.run(prior_mean,
observations={"x1": torch.tensor(8.),
"x2": torch.tensor(9.)})
marginal = pyro.infer.EmpiricalMarginal(posterior, "z")
## We now plot the results and compare with importance sampling:¶
We observe $$x_1 = 8$$ and $$x_2 = 9$$. Inference is performed by taking 50 samples using CSIS, and 50 using importance sampling from the prior. We then plot the resulting approximations to the posterior distributions, along with the analytic posterior.
[7]:
import numpy as np
import scipy.stats
import matplotlib.pyplot as plt
# Draw samples from empirical marginal for plotting
csis_samples = [marginal().detach() for _ in range(1000)]
# Calculate empirical marginal with importance sampling
is_posterior = pyro.infer.Importance(model, num_samples=50).run(prior_mean,
observations={"x1": torch.tensor(8.),
"x2": torch.tensor(9.)})
is_marginal = pyro.infer.EmpiricalMarginal(is_posterior, "z")
is_samples = [is_marginal().detach() for _ in range(1000)]
# Calculate true prior and posterior over z
true_posterior_z = np.arange(-10, 10, 0.05)
true_posterior_p = np.array([np.exp(scipy.stats.norm.logpdf(p, loc=7.25, scale=(5/6)**0.5)) for p in true_posterior_z])
prior_z = true_posterior_z
prior_p = np.array([np.exp(scipy.stats.norm.logpdf(z, loc=1, scale=5**0.5)) for z in true_posterior_z])
plt.rcParams['figure.figsize'] = [30, 15]
plt.rcParams.update({'font.size': 30})
fig, ax = plt.subplots()
plt.plot(prior_z, prior_p, 'k--', label='Prior')
plt.plot(true_posterior_z, true_posterior_p, color='k', label='Analytic Posterior')
plt.hist(csis_samples, range=(-10, 10), bins=100, color='r', density=True, label="Inference Compilation")
plt.hist(is_samples, range=(-10, 10), bins=100, color='b', density=True, label="Importance Sampling")
plt.xlim(-8, 10)
plt.ylim(0, 5)
plt.xlabel("z")
plt.ylabel("Estimated Posterior Probability Density")
plt.legend()
plt.show()
Using $$x_1 = 8$$ and $$x_2 = 9$$ gives a posterior far from the prior, and so using the prior as a guide for importance sampling is inefficient, giving a very small effective sample size. By first learning a suitable guide function, CSIS has a proposal distribution much more closely matched to the true posterior. This allows samples to be drawn with far better coverage of the true posterior, and greater effective sample size, as shown in the graph above.
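As an illustrative add-on (not part of the original notebook), both posterior objects inherit from pyro.infer.Importance, which exposes an effective-sample-size diagnostic over the stored importance weights, so the gap described above can be quantified directly:

```python
# Assumes `posterior` and `is_posterior` from the cells above.
print("CSIS ESS:", posterior.get_ESS())
print("Prior-proposal ESS:", is_posterior.get_ESS())
```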
For other examples of inference compilation, see [1] or https://github.com/probprog/anglican-infcomp-examples.
### References¶
[1] Inference compilation and universal probabilistic programming, Tuan Anh Le, Atilim Gunes Baydin, and Frank Wood | 2020-04-05 23:11:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43610680103302, "perplexity": 4547.2733901807605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371611051.77/warc/CC-MAIN-20200405213008-20200406003508-00424.warc.gz"} |
https://collegemathteaching.wordpress.com/2013/10/ | # College Math Teaching
## October 25, 2013
### A Laplace Transform of a function of non-exponential order
Many differential equations textbooks (“First course” books) limit themselves to taking Laplace transforms of functions of exponential order. That is a reasonable thing to do. However I’ll present an example of a function NOT of exponential order that has a valid (if not very useful) Laplace transform.
Consider the following function (for $n \in \{1, 2, 3,...\}$):
$g(t)= \begin{cases} 1,& \text{if } 0 \leq t \leq 1\\ 10^n, & \text{if } n \leq t \leq n+\frac{1}{100^n} \\ 0, & \text{otherwise} \end{cases}$
Now note the following: $g$ is unbounded on $[0, \infty)$, $lim_{t \rightarrow \infty} g(t)$ does not exist and
$\int^{\infty}_0 g(t)dt = 1 + \frac{1}{10} + \frac{1}{10^2} + \cdots = \frac{1}{1 - \frac{1}{10}} = \frac{10}{9}$
One can think of the graph of $g$ as a series of disjoint "rectangles", each of width $\frac{1}{100^n}$ and height $10^n$. The rectangles get skinnier and taller as $n$ goes to infinity, and there is a LOT of zero height in between the rectangles.
Needless to say, the “boxes” would be taller and skinnier.
Note: this example can be easily modified to provide an example of a function which is $L^2$ (square integrable) yet unbounded on $[0, \infty)$. Hat tip to Ariel, who caught the error.
It is easy to compute the Laplace transform of $g$:
$G(s) = \int^{\infty}_0 g(t)e^{-st} dt$. The transform exists if, say, $s \geq 0$ by routine comparison test as $|e^{-st}| \leq 1$ for that range of $s$ and the calculation is easy:
$G(s) = \int^{\infty}_0 g(t)e^{-st} dt = \frac{1}{s} (1-e^{-s}) + \frac{1}{s} \sum^{\infty}_{n=1} (\frac{10}{e^s})^n(1-e^{\frac{-s}{100^n}})$
Note: if one wants to, one can see that the given series representation converges for $s \geq 0$ by using the ratio test and L'Hôpital's rule.
### Why the sequence cos(n) diverges
We are in the sequences section of our Freshman calculus class. One of the homework problems was to find whether the sequence $a_n = cos(\frac{n}{2})$ converged or diverged. This sequence diverges, but it isn’t easy for a freshman to see.
I’ll discuss this problem and how one might go about explaining it to a motivated student. To make things a bit simpler, I’ll discuss the sequence $a_n = cos(n)$ instead.
Of course $cos(x)$ is periodic with a fundamental region $[0, 2\pi]$ so we will work with that region. Now we notice the following:
$n (mod 2 \pi)$ is a group with the usual operation of addition.
By $n (mod 2 \pi)$, I mean the set $n + k\cdot 2\pi$ where $k \in \{\dots, -2, -1, 0, 1, 2, 3, \dots\}$; one can think of the analogue of modular arithmetic, or one might take the elements of the group to be $\{ r \mid r \in [0, 2 \pi),\ r = n - k\cdot 2\pi \text{ for some integer } k \}$.
Of course, to get additive inverses, we need to include the negative integers, but ultimately that won’t matter. Example: $1, 2, 3, 4, 5, 6$ are just equal to themselves $mod 2 \pi.$ $7 = 7 - 2\pi (mod 2\pi), 13 = 13 - 4 \pi (mod 2\pi)$, etc. So, I’ll denote the representative of $n (mod 2\pi)$ by $[n]$.
Now if $n \ne m$ then $[n] \ne [m]$; for if $[n]=[m]$ then there would be integers $j, k$ so that $n + j2\pi = m +k2\pi$, which would imply that the integer $|m-n|$ is a multiple of $2\pi$, impossible since $\pi$ is irrational. Therefore there are an infinite number of $[n]$ in $[0, 2\pi]$, which means that the set $\{[n]\}$ has a limit point in the compact set $[0, 2\pi]$, which means that given any positive integer $m$ there is some interval of width $\frac{2\pi}{m}$ that contains two distinct $[i], [j]$ (say, $j$ greater than $i$.)
This means that $[j-i] \in (0, \frac{2\pi}{m})$, so there are integers $k_2, k_3,$ so that $k_2[j-i] \in (\frac{2\pi}{m}, \frac{2\cdot 2\pi}{m}), k_3[j-i] \in (\frac{2\cdot 2\pi}{m}, \frac{3\cdot 2\pi}{m})$, etc. Therefore there is some multiple of $[j-i]$ in every interval of width $\frac{2\pi}{m}$. But $m$ was an arbitrary positive integer; this means that the $[n]$ are dense in $[0,2\pi]$. It follows that $cos([n]) = cos(n)$ is dense in $[-1,1]$ and hence $a_n = cos(n)$ cannot converge as a sequence.
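A quick empirical check (not a proof, and the sample size here is arbitrary) that the values $cos(n)$ really do fill out $[-1,1]$:

```python
import numpy as np

c = np.cos(np.arange(1, 200001))
print(c.min(), c.max())                          # creeps toward -1 and 1
counts, _ = np.histogram(c, bins=np.linspace(-1, 1, 21))
print((counts > 0).all())                        # True: every subinterval gets hit
```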
Frankly, I think that this is a bit tough for most Freshman calculus classes (outside of, say those at MIT, Harvard, Cal Tech, etc.).
## October 22, 2013
### Training mathematics teachers
Filed under: academia, pedagogy — Tags: — collegemathteaching @ 10:05 pm
“What I hadn’t realized was that setting a high bar at the beginning of the profession sends a signal to everyone else that you are serious about education and teaching is hard,” Ripley told me. “When you do that, it makes it easier to make the case for paying teachers more, for giving them more autonomy in the classroom. And for kids to buy into the premise of education, it helps if they can tell that the teachers themselves are extremely well educated.”
Once they are admitted, critics say, prospective teachers need more rigorous study, not just of the science and philosophy of education but of the contents, especially in math and the sciences, where America trails the best systems in Asia and Europe. A new study by the Education Policy Center at Michigan State, drawing on data from 17 countries, concluded that while American middle school math teachers may know a lot about teaching, they often don’t know very much about math. Most of them are not required to take the courses in calculus and probability that are mandatory in the best-taught programs.
Now, of course, there is the debate about “content” and “taking a course” (e. g. some states allow for a course to qualify as a “content check off” without actually being a course in the subject).
There will be pushback; I’ve been frequently distressed to hear students complain “why do I have to learn X…all I want to do is to teach math.”
### The worst kind of paper to referee
Filed under: academia, advanced mathematics, research — Tags: — collegemathteaching @ 9:59 pm
Of course, refereeing journal articles is an expected duty; I’ve published a few and therefore benefited from the service of referees.
And it is very important that referees do their jobs responsibly.
I’ve refereed a few articles and some were very easy to reject: they either contained gross errors OR contained proofs of items that were already well known…and the existing “known” proofs were simpler (e. g. appeared in widely read textbooks).
But the most difficult articles to referee are those that are both
1. Poorly written and
2. contain some content that might have mathematical value.
These sorts of articles are time-sinks; one has to read them carefully because those ideas might well be worth seeing in print…but my goodness they are painful to read.
## October 16, 2013
### Convincing calculus students that the symbols MEAN SOMETHING
Filed under: applications of calculus, calculus, integrals — collegemathteaching @ 12:04 am
On a recent exam, the first 5 questions were as follows:
Given the region bounded by $x = \frac{1}{2}, x = 1, y = 0, y = \frac{1}{x}$
1. Find the AREA enclosed by the region.
2. Find the volume obtained by revolving this area about the $y$ axis (the line $x = 0$).
3. Find the volume obtained by revolving this area about the $x$ axis (the line $y = 0$).
4. Find $\bar{x}$. (constant density lamina).
5. Find $\bar{y}$.
Many students did fine, though there were a couple who literally blanked out on how to integrate $\frac{1}{x}$.
But some… well, one expects a few errors. Still, there were some who put a factor of $\pi$ in their answers to 4, 5, and… yes, even 1.
Evidently, I’ll have to give my “these symbols actually have MEANING” speech again.
Note: yes, there were some interesting symmetries here; perhaps some students didn’t believe their answers.
## October 15, 2013
### What do you mean “you don’t know”???
Filed under: calculus, integrals, student learning — Tags: — collegemathteaching @ 6:38 pm
I am grading calculus II exams and the above photo looks a bit like me. I’ll calm down before I hand the exams back to the students.
But I swear: I had one student DURING THE EXAM say "hey, you can't do $\int^1_{\frac{1}{2}} \frac{1}{x} dx$ because $\frac{1}{x} = x^{-1}$ and $x^{-1+1} \cdot \frac{1}{-1+1}$ is undefined." Yes, this person passed calculus I and yes, we did that integral in calculus I (some schools hold off and develop $ln(x) = \int^x_1 \frac{1}{t} dt$ using the Fundamental Theorem of Calculus). And yes, previously THIS SEMESTER we did integrals like $\int \frac{1}{(x-1)(x+1)} dx$.
(facepalm).
### Hydrostatic force and work problems and …topology?
I just gave my second “calculus two” exam; the final two problems involved the following:
Suppose a trough with semi-circular ends is filled with water (say, length = 40 feet, radius = 5 feet).
1. How much work does one do in pumping all of the water to the top of the tank and out? (work against gravity only)
2. How much hydrostatic force is there against one of the semi-circular end plates?
Assuming water weighs 62.5 pounds per cubic foot (I gave that to them):
1. Work = $62.5 \int^5_0 80x \sqrt{25 - x^2}\, dx$; of course, $x$ is the distance a slab of water is lifted and $80 \sqrt{25 - x^2}\, dx$ is the volume of the cross-sectional slab (the semicircular cross section has width $2\sqrt{25-x^2}$ at depth $x$, and the trough is 40 feet long).
2. Force = $62.5 \int^5_0 2x \sqrt{25 - x^2}\, dx$; of course, $62.5x$ is the pressure at depth $x$ and $2\sqrt{25 - x^2}\, dx$ is the area of the strip at depth $x$ that the pressure is applied to.
The student can easily notice that the two answers differ by a factor of 40, which is the length of the trough.
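A quick numerical check of that observation (using SciPy; the integrands are the ones above):

```python
import numpy as np
from scipy.integrate import quad

rho = 62.5  # lb/ft^3
work, _ = quad(lambda x: rho * 80 * x * np.sqrt(25 - x**2), 0, 5)
force, _ = quad(lambda x: rho * 2 * x * np.sqrt(25 - x**2), 0, 5)
print(work / force)  # 40.0, the length of the trough
```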
So, what is the lesson here? Well, for one, I always envisioned a wall of a tank holding back a long mass of water. That isn’t correct; ONLY THE DEPTH matters. If the tank were a mile long or, say, an inch long, the pressure on the semi-circular ends would be the same. That runs counter to my intuition (which is clearly bad).
This led me to think about the following: what if one were to put in a baffle between the two ends and the baffle was the same shape as the semi-circular ends. What would be the force on that plate?
Of course, the net force would be zero; there are two sides of the plate.
Now drop an open cylinder into the tank (think: a can with the top and bottom cut away). Clearly: zero force on the sides, right? Two sides, right?
Now, drop a Mobius band into the tank. A Mobius band has but one side. What is the force on it?
The key here: the Mobius band is one sided, but it is LOCALLY two sided; one can break the surface into tiny rectangles and note that the net force on each rectangle is zero as it is locally two sided. Hence zero total force on the one sided object.
## October 9, 2013
### Fun with complex numbers and particular solutions
I was fooling around with $y''+by'+cy = e^{rt}cos(st)$ and thought about how to use complex numbers in the case when $e^{rt}cos(st)$ is not a solution to the related homogeneous equation. It then hit me: it is really quite simple.
First note the following: $e^{rt}cos(st) = \frac{1}{2}(e^{(r + si)t} + e^{(r - si)t})$ and $e^{rt}sin(st) = \frac{1}{2i}(e^{(r + si)t} - e^{(r - si)t})$.
Then it is a routine exercise to see the following: given that $z = r+si, \bar{z} = r-si$ are NOT solutions to $p(m)= m^2 + bm + c = 0$, where $p(m)$ is the characteristic polynomial of the differential equation, attempt $y_p = Ae^{zt} + Be^{\bar{z}t}$. Putting this into the differential equation gives $y''_p + by'_p + cy_p = A(z^2+bz+c)e^{zt} + B(\bar{z}^2 + b\bar{z} + c)e^{\bar{z}t}$.
Then: if the forcing function is $e^{rt}cos(st)$, a particular solution is $y_p = Ae^{zt} + Be^{\bar{z}t}$ where $A = \frac{1}{2(p(z))}, B = \frac{1}{2(p(\bar{z}))}$. If the forcing function is $e^{rt}sin(st)$, a particular solution is $y_p = Ae^{zt} - Be^{\bar{z}t}$ where $A = \frac{1}{2i(p(z))}, B = \frac{1}{2i(p(\bar{z}))}$.
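A quick symbolic check of the cosine case for sample coefficients (the numbers here are assumed, chosen only so that $z$ is not a characteristic root):

```python
import sympy as sp

t = sp.symbols('t', real=True)
b, c, r, s = 1, 5, 2, 3                       # assumed coefficients
z, zbar = r + s*sp.I, r - s*sp.I
p = lambda m: m**2 + b*m + c
yp = sp.exp(z*t)/(2*p(z)) + sp.exp(zbar*t)/(2*p(zbar))
residual = sp.diff(yp, t, 2) + b*sp.diff(yp, t) + c*yp - sp.exp(r*t)*sp.cos(s*t)
print(sp.simplify(sp.expand_complex(residual)))   # 0
```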
That isn’t profound, but it does lead to the charming exercise: if $z, \bar{z}$ are NOT roots to the quadratic with real coefficients $p(x)$, then $\frac{1}{p(z)} + \frac{1}{p(\bar{z})}$ is real as is $\frac{i}{p(z)} - \frac{i}{p(\bar{z})}$.
Let’s check this out: $\frac{1}{p(z)} + \frac{1}{p(\bar{z})} = \frac{p(\bar{z})+p(z)}{p(z)p(\bar{z})}$. Now look at the numerator and the denominator separately. The denominator: $p(z)p(\bar{z})= (z^2 + bz +c)(\bar{z}^2 + b\bar{z} + c) = (z^2 \bar(z)^2) + b(b (z \bar{z}) + (z(z \bar{z}) +\bar{z}(z \bar{z})) + c (\bar{z} + z) + (z^2 + \bar{z}^2)) + (c^2)$ Now note that every term inside a parenthesis is real.
The numerator: $z^2 + \bar{z}^2 + b (z + \bar{z}) + 2c$ is clearly real.
What about $\frac{i}{p(z)} - \frac{i}{p(\bar{z})}$? We need only check the numerator: $i (z^2 - \bar{z}^2 + b(z - \bar{z}))$ is indeed real.
Yeah, this is elementary but this might appear as an exercise for my next complex variables class.
## October 1, 2013
### I’ve lost any reason to live…
Filed under: student learning — Tags: , — collegemathteaching @ 7:45 pm
Today, I was grading differential equation papers. Some were really good, others were not in that category.
A calculus 2 student came in (we teach techniques of integration, applications of integration, and infinite series).
She, in a very polite manner, complained that the course was. too. easy.
Too. Easy.
I wish that I drank.
But there is no denying it: we have strong freshmen and some very weak, “take the class multiple times” students in the same class, and heaven help you if your flunk out rate is too high.
I might encourage her to take her complaint to the dean and put it in writing.
#$#@!!! | 2017-06-24 00:10:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 106, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7918409705162048, "perplexity": 594.3795874804476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320206.44/warc/CC-MAIN-20170623235306-20170624015306-00534.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-11-rational-expressions-and-functions-11-5-solving-rational-equations-practice-and-problem-solving-exercises-page-683/31 | ## Algebra 1
The entire equation would be multiplied by $(x-2)(x+6)$ to get rid of the denominators. | 2020-02-18 12:27:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47507309913635254, "perplexity": 311.4471501389999}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143695.67/warc/CC-MAIN-20200218120100-20200218150100-00340.warc.gz"} |
http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:1120.03312 | Zbl 1120.03312
Şahin, Mehmet; Yapar, Ziya
Measure in generalized fuzzy sets.
(English)
[J] Far East J. Math. Sci. (FJMS) 24, No. 2, 217-226 (2007). ISSN 0972-0871
Summary: In this article, using a construction of fuzzy sets that does not depend on a membership function and the algebraic properties of a family of fuzzy sets, three notions are treated: rings of generalized fuzzy sets $GF(X)$ of $X$; complete Heyting algebras (cHa) which contain the power set $P(X)$ of $X$; and extension lattices $\overline{B(L)}$, where $B=P(X)$, together with sets of $L$-fuzzy sets, where $L=\{L_x\mid x\in X\}$. The definitions of fuzzy $\sigma$-algebra and fuzzy measure are generalized. We obtain some results using these definitions, which include a generalization of Proposition 2 in [{\it E. P. Klement} and {\it W. Schwyhla}, Fuzzy Sets Syst. 7, 57--70 (1982; Zbl 0478.28006)].
MSC 2000:
*03E72 Fuzzy sets (logic)
28E10 Fuzzy measures
06D20 Heyting algebras
Citations: Zbl 0478.28006 | 2013-05-22 05:20:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9533028602600098, "perplexity": 4925.4654937113955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701370254/warc/CC-MAIN-20130516104930-00098-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/star-core-temperatures.558593/ | # Star core temperatures
1. Dec 9, 2011
### goldsax
Does it follow that the larger the star, the higher the core temperature?
If so, what would be the highest temperature attained?
thanks
2. Dec 9, 2011
### QuArK21343
Yes, the larger the star, the higher the core temperature must be. This is because in main-sequence stars a state of hydrostatic equilibrium is attained: on each infinitesimal volume, the gravitational forces that tend to make the star collapse on itself and the forces due to radiation pressure that tend to make the star expand balance each other. The greater the mass, the greater the pressure gradient has to be to make the star stable. Temperatures inside a star are of the order of 10-15 million kelvin at least. The pressure gradient is due to the processes of nuclear fusion that occur in the stellar plasma (protons need huge kinetic energies to overcome the potential barrier due to electrostatic repulsion and to actually fuse into helium).
I hope it is clear.
3. Dec 9, 2011
### goldsax
Thanks for the clear explanation ..
I should have realised that the only way to counteract the mass would require an increase in heat (kinetic energy).
Just as a matter of interest .. What would be the core temp in the largest stars? Thanks again
4. Dec 9, 2011
### Nabeshin
Given that we know massive stars fuse up to iron, which requires core temperatures of ~billion K, this is at least a pretty decent lower limit on the hottest core temperature achieved in the most massive stars.
5. Dec 9, 2011
### QuArK21343
There are at least three known mechanisms for nuclear fusion.
Stars with mass comparable to that of the sun rely almost completely on the proton-proton chain. This requires temperatures of about 15 million kelvin. From a classical point of view, even such temperatures would not be sufficient to overcome the Coulomb potential between protons in the stellar plasma. Only quantum mechanics, and in particular the so-called tunnel effect, fully accounts for the entire process. Electroweak forces also intervene: they are responsible for the process that turns a proton into a neutron, emitting a positron and an electron neutrino. This stage produces deuterium; deuterium can then fuse to produce helium-3 particles and finally, alpha particles.
For stars with about 10 solar masses, carbon cycle is the dominant mechanism.
For even more massive stars (red giants and supergiants) with temperature above 100 million kelvins, helium can fuse directly to form beryllium and then carbon (triple alpha process).
With still more massive and older stars, hydrogen fuel begins to run out. Neon and oxygen burning take place. Then the star approaches the peak of the binding-energy curve, the so-called iron peak: when iron is produced the star is doomed, since fusing iron is endothermic and so takes energy from its surroundings: the star will collapse and finally explode in a type II supernova. A very brief phase before this moment is the so-called silicon burning process: here massive stars can reach temperatures of even 3.5 billion K!
6. Dec 9, 2011
### goldsax
3+billion k ..... Wow ..
Thanks again for the explanation
7. Dec 9, 2011
### Drakkith
Staff Emeritus
Does the core temperature vary with the type of fuel being burnt at the time? Or does it simply depend on the mass? Are the most massive stars burning Hydrogen at a temperature of 3 billion k or more?
8. Dec 10, 2011
### QuArK21343
A very rough approximation of the temperature of a stellar core is given by
$$T=\frac{GMm_p}{RK_{B}}$$
where $M$ is the mass of the star, $R$ its radius and $m_p$ the mass of the proton. This is an order-of-magnitude relation and can be derived from a condition of hydrostatic equilibrium (just equate the force due to the pressure gradient to the gravitational force and use the equation of state for a perfect gas to get $\partial p/\partial r=-GM_r\rho_r/r^2$). So you can see that the greater the mass the higher the temperature, and the more compact the star is the higher the temperature. Of course, the type of fuel being burnt determines the temperature. No star burns hydrogen at temperatures of billions of degrees. Only stars with masses greater than 10 solar masses and approaching the iron peak can reach such temperatures (if hydrogen runs out, gravitational collapse sets in and the temperature increases till fusion of silicon can take place).
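For a quick order-of-magnitude check of that relation with the Sun's numbers (SI constants; this is only the rough estimate, not a stellar model):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23    # Boltzmann constant, J/K
m_p = 1.673e-27    # proton mass, kg
M_sun, R_sun = 1.989e30, 6.957e8   # kg, m

T = G * M_sun * m_p / (R_sun * k_B)
print(f"T ~ {T:.2e} K")  # ~2e7 K, the same order as the Sun's ~1.5e7 K core
```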
9. Dec 11, 2011
### Drakkith
Staff Emeritus
Alright, so when hydrogen burning starts, a more massive star simply burns a lot more of it in a bigger core than a star like the Sun? | 2018-06-19 07:19:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6450291275978088, "perplexity": 921.7518331463389}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861980.33/warc/CC-MAIN-20180619060647-20180619080647-00556.warc.gz"}
http://www.ssmathematics.com/2018/12/straight-line-equations-cie.html | ## Sunday, December 23, 2018
### Straight Line Equations (CIE)
The table shows values of variables $\D x$ and $\D y.$
$\D \vicol{x& 1& 3& 6& 10& 14}{y& 2.5& 4.5& 0& -20& -56}$
(i) By plotting a suitable straight line graph, show that $\D y$ and $\D x$ are related by the equation $\D y = Ax + Bx^2,$ where $\D A$ and $\D B$ are constants. [4]
(ii) Use your graph to find the value of $\D A$ and of $\D B.$ [4]
2 (CIE 2012, s, paper 22, question 7)
The table shows experimental values of variables $\D x$ and $\D y.$
$\D \vcol{x& 5& 30& 150& 400}{y& 8.9& 21.9& 48.9& 80.6}$
(i) By plotting a suitable straight line graph, show that $\D y$ and $\D x$ are related by the equation $\D y = ax^b,$ where $\D a$ and $\D b$ are constants. [4]
(ii) Use your graph to estimate the value of $\D a$ and of $\D b.$ [4]
3 (CIE 2012, w, paper 11, question 10)
The table shows values of the variables $\D x$ and $\D y.$
$\D\vicol{ x^{\circ}& 10& 30 &45 &60 &80}{ y &11.2& 16 &19.5& 22.4& 24.7}$
(i) Using the graph paper below, plot a suitable straight line graph to show that, for 10° $\D \le x\le$ 80°, $\D \sqrt{y} = A \sin x + B,$ where $\D A$ and $\D B$ are positive constants. [4]
(ii) Use your graph to find the value of $\D A$ and of $\D B.$ [3]
(iii) Estimate the value of $\D y$ when $\D x = 50.$ [2]
(iv) Estimate the value of $\D x$ when $\D y = 12.$ [2]
4 (CIE 2012, w, paper 22, question 8)
The variables $\D x$ and $\D y$ are related in such a way that when $\D \lg y$ is plotted against $\D \lg x$ a straight line graph is obtained as shown in the diagram. The line passes through the points (2, 4) and (8, 7).
(i) Express $\D y$ in terms of $\D x,$ giving your answer in the form $\D y = ax^b,$ where $\D a$ and $\D b$ are constants. [5]
Another method of drawing a straight line graph for the relationship $\D y = ax^b,$ found in part (i), involves plotting $\D \lg x$ on the horizontal axis and $\D \lg(y^2)$ on the vertical axis. For this straight line graph what is
(ii) the gradient of the line? [1]
(iii) the intercept on the vertical axis? [1]
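As a worked check of part (i) above (a derivation added here, not part of the original paper): the line through $(2, 4)$ and $(8, 7)$ in the $(\lg x, \lg y)$ plane has gradient $\frac{7-4}{8-2} = \frac{1}{2}$ and intercept $3$, so
$$\lg y = \tfrac{1}{2}\lg x + 3 \quad\Rightarrow\quad y = 10^3 x^{1/2} = 1000\sqrt{x},$$
and then $\lg(y^2) = 2\lg y = \lg x + 6$, which is why the second plot has gradient $1$ and vertical intercept $6$ (compare the answers to question 4 below).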
5 (CIE 2012, w, paper 23, question 9)
The table shows experimental values of two variables $\D x$ and $\D y.$
$\D \vcol{ x& 1& 2& 3& 4}{y& 9.41& 1.29& -0.69& -1.77}$
It is known that $\D x$ and $\D y$ are related by the equation $\D y = \frac{a}{x^2}+bx,$ where $\D a$ and $\D b$ are constants.
(i) A straight line graph is to be drawn to represent this information. Given that $\D x^2y$ is plotted on the vertical axis, state the variable to be plotted on the horizontal axis. [1]
(ii) On the grid opposite, draw this straight line graph. [3]
(iii) Use your graph to estimate the value of $\D a$ and of $\D b.$ [3]
(iv) Estimate the value of $\D y$ when $\D x$ is 3.7. [2]
6 (CIE 2013, s, paper 11, question 2)
Variables $\D x$ and $\D y$ are such that $\D y= Ab^x,$ where $\D A$ and $\D b$ are constants. The diagram shows the graph of $\D \ln y$ against $\D x,$ passing through the points (2, 4) and (8, 10). Find the value of $\D A$ and of $\D b.$ [5]
7 (CIE 2013, s, paper 22, question 1)
Variables $\D x$ and $\D y$ are such that when $\D \sqrt{y}$ is plotted against $\D x^2$ a straight line graph passing through the points (1, 3) and (4, 18) is obtained. Express $\D y$ in terms of $\D x.$ [4]
8 (CIE 2013, w, paper 13, question 10)
The variables $\D s$ and $\D t$ are related by the equation $\D t= ks^n,$ where $\D k$ and $\D n$ are constants. The table below shows values of variables $\D s$ and $\D t.$
$\D \vcol{s& 2& 4& 6& 8}{t& 25.00& 6.25& 2.78& 1.56}$
(i) A straight line graph is to be drawn for this information with $\D \lg t$ plotted on the vertical axis. State the variable which must be plotted on the horizontal axis. [1]
(ii) Draw this straight line graph on the grid below. [3]
(iii) Use your graph to find the value of $\D k$ and of $\D n.$ [4]
(iv) Estimate the value of $\D s$ when $\D t = 4.$ [2]
9 (CIE 2013, w, paper 21, question 8)
The table shows experimental values of two variables $\D x$ and $\D y.$
$\D \vcol{x& 2 &4& 6& 8}{y& 9.6& 38.4& 105& 232}$
It is known that $\D x$ and $\D y$ are related by the equation $\D y= ax^3+ bx,$ where $\D a$ and $\D b$ are constants.
(i) A straight line graph is to be drawn for this information with $\D \frac{y}{x}$ on the vertical axis. State the variable which must be plotted on the horizontal axis. [1]
(ii) Draw this straight line graph on the grid below. [2]
(iii) Use your graph to estimate the value of $\D a$ and of $\D b.$ [3]
(iv) Estimate the value of $\D x$ for which $\D 2y = 25x.$ [2]
10 (CIE 2014, s, paper 11, question 8)
The table shows values of variables $\D V$ and $\D p.$
$\D \vcol{ V &10& 50& 100& 200}{p& 95.0& 8.5& 3.0& 1.1}$
(i) By plotting a suitable straight line graph, show that $\D V$ and $\D p$ are related by the equation $\D p = kV^n ,$
where $\D k$ and $\D n$ are constants. [4]
Use your graph to find
(ii) the value of $\D n,$ [2]
(iii) the value of $\D p$ when $\D V = 35.$ [2]
11 (CIE 2014, s, paper 13, question 10)
The table shows experimental values of $\D x$ and $\D y.$
$\D \vcol{x& 1.50 &1.75& 2.00& 2.25}{y& 3.9& 8.3 &19.5& 51.7}$
(i) Complete the following table.
$\D \vcol{x^2&\qquad &\qquad &\qquad &\qquad}{\lg y&&&&}$
[1]
(ii) By plotting a suitable straight line graph on the graph paper, show that $\D x$ and $\D y$ are related by the equation $\D y= Ab^{x^2},$ where $\D A$ and $\D b$ are constants. [2]
(iii) Use your graph to find the value of $\D A$ and of $\D b.$ [4]
(iv) Estimate the value of $\D y$ when $\D x = 1.25.$ [2]
12 (CIE 2014, s, paper 22, question 10)
Two variables $\D x$ and $\D y$ are connected by the relationship $\D y = Ab^x ,$ where $\D A$ and $\D b$ are constants.
(i) Transform the relationship $\D y = Ab^x$ into a straight line form. [2]
An experiment was carried out measuring values of $\D y$ for certain values of $\D x.$ The values of $\D \ln y$ and $\D x$ were plotted and a line of best fit was drawn. The graph is shown on the grid below.
(ii) Use the graph to determine the value of $\D A$ and the value of $\D b,$ giving each to 1 significant figure. [4]
(iii) Find $\D x$ when $\D y = 220.$ [2]
13 (CIE 2014, w, paper 11, question 9)
The table shows experimental values of variables $\D x$ and $\D y.$
$\D \vicol{x& 2& 2.5& 3 &3.5& 4}{y& 18.8& 29.6& 46.9& 74.1 &117.2}$
(i) By plotting a suitable straight line graph on the grid below, show that $\D x$ and $\D y$ are related by the equation $\D y = ab^x ,$ where $\D a$ and $\D b$ are constants. [4]
(ii) Use your graph to find the value of $\D a$ and of $\D b.$ [4]
14 (CIE 2014, w, paper 23, question 6)
Variables $\D x$ and $\D y$ are such that, when $\D \ln y$ is plotted against $\D 3^x ,$ a straight line graph passing through (4, 19) and (9, 39) is obtained.
(i) Find the equation of this line in the form $\D \ln y= m3^x+ c,$ where $\D m$ and $\D c$ are constants to be found. [3]
(ii) Find $\D y$ when $\D x = 0.5.$ [2]
(iii) Find $\D x$ when $\D y = 2000.$ [3]
### Answers
1. (i) $\D y/x = A + Bx$
$\D \vicol{x& 1& 3& 6& 10& 14}{y/x& 2.5& 1.5& 0& -2& -4}$
(ii) $\D B = -0.5;A = 3$
2. (i) $\D \ln y = \ln a + b \ln x$
(ii) $\D b = 0.5; a = 4$
(iii) 32 to 49
3. (i) $\D \vicol{\sin x& 0.17& 0.5& 0.71& 0.87& 0.98}{\sqrt{y}& 3.35& 4 &4.42& 4.73& 4.97}$
(ii) $\D A = 2;B = 3$
(iii) $\D y = 20.5$
(iv) $\D x = 14.5$
4. (i) $\D y = 1000\sqrt{x}$
(ii) $\D m = 1$
(iii) $\D c = 6$
5. (i) $\D x^3$
(ii) $\D \vcol{x^3& 1& 8& 27& 64}{x^2y& 9.41 &5.16& -6.21& -28.32}$
(iii) $\D a = 10; b = -0.6$
(iv) $\D -1.48$
6. $\D b = e;A = e^2$
7. $\D y = (5x^2 - 2)^2$
8. (i) $\D \lg s$
(ii) $\D \vcol{\lg s& 0.3& 0.6& 0.78& 0.9}{\lg t& 1.4& 0.8& 0.44& 0.19}$
(iii) $\D n = -2; k = 100$
(iv) $\D s = 4.9$
9. (i) $\D x^2$
(ii) $\D \vcol{x^2& 4& 16& 36& 64}{\frac{y}{x}& 4.8& 9.6& 17.5& 29}$
(iv) $\D 4.8$
10. (i)
(ii) $\D n = -1.5$
(iii) $\D 15$
11. (i) $\D \vcol{x^2& 2.25& 3.06& 4& 5.06}{\lg y& 0.59& 0.92 &1.29& 1.71}$
(ii)
(iii) $\D b = 2.5;A = 0.5$
(iv) $\D 2.1$
12. (i) $\D \log y = \log A + x \log b$
(ii) $\D 0.5$ (iii) $\D 4.4$
13. (i)
(ii) $\D b = 2.5; a = 3$
14. (i) $\D \ln y = 4(3^x) + 3$
(ii) $\D y = 20500$
(iii) $\D x = 0.127$
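As a quick numerical cross-check of question 2 (a sketch added here; the exam expects a hand-drawn graph, but a least-squares fit of $\lg y$ against $\lg x$ recovers the same constants):

import numpy as np

x = np.array([5, 30, 150, 400])
y = np.array([8.9, 21.9, 48.9, 80.6])

# linearize y = a*x^b as lg y = lg a + b*lg x, then fit a straight line
b, lg_a = np.polyfit(np.log10(x), np.log10(y), 1)
print(round(b, 2), round(10**lg_a, 2))  # ~0.5 and ~4.0, matching answer 2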
http://sci.physics.strings.narkive.com/lr8LoRp2/weakened-nonabelian-bundle-gerbes-and-2-bundles | Discussion:
weakened nonabelian bundle gerbes and 2-bundles
Urs Schreiber
2005-04-14 10:42:49 UTC
Please note, the kernel of the idea here comes from a comment in an
email from B Jurco to my supervisor, Michael Murray.
In Baez and Schreiber's paper '2-connections on 2-bundles', they talk
about the automorphism 2-group AUT(H) corresponding to the crossed
module t: H \to Aut(H). I've been looking at non-abelian bundle gerbes
(NABG) and one way to define them is to look at an H-bitorsor (or
principal H-bibundle) P defined on the fibre product Y^[2] of a
submersion Y \to M. In that case there is a local automorphism (comes
from a section, and here i indexes a cover of Y^[2] = \coprod U_i)
u_i : U_i \to Aut(H)
(called in Aschieri-Cantini-Jurco (hep-th/0312154) \varphi_e - I've
'locally trivialised' my bitorsor) which obeys
h.s_i = s_i.u_i(h).
However, an automorphism is just a self-diffeomorphism respecting the
group structure of the Lie group H. When we look at the contracted
product of two H-bitorsors
However, for a general diffeomorphism f:H \to H (here's the point,
finally), this is not so.
And this is what we would _like_, considering that for general
bitorsors (thinking more alg. geom here),
As far as I am aware this idea arose in a discussion when I was visiting
Branislav Jurco and Paolo Aschieri in Torino/Italy last year. I was
mentioning how it seemed to me that 2-bundles with weak coherent structure
2-groups (as opposed to strict 2-groups), whose product is not associative
on the nose, would capture the idea of a base-space-dependent group product
in some sense and hence account for the 'algebra bundle'-freedom that Andrew
Neitzke identified as a plausible candidate for the n^3-scaling behaviour on
5-branes:
http://golem.ph.utexas.edu/string/archives/000461.html
Branislav Jurco and Paolo Aschieri noted that this idea might correspond to
the "weak automorphisms" in a NABG that you are discussing in your post.
the product (where defined,
if we want H-G-bitorsors) is _not_ associative. Also, dealing with a
\in Aut(H) which satisfies
a(h_1).a(h_2) = a(h_1.h_2)
^
|
|
bad!
seems very uncategorylike.
Right, and the way to do it is to go to coherent 2-groups instead. But
coherent 2-groups are much less well understood than strict ones. Since we
know that a strict one is just a crossed module, we would want to know which
weak form of a crossed module describes a coherent 2-group. I have once
started working that out
(http://golem.ph.utexas.edu/string/archives/000471.html), but it's not
really finished yet.
One 'snag': H \to Diff(H) won't give us a Lie crossed module, but
something weaker. This is probably where the coherent
Lie-2-group/?something like a crossed module? correspondence comes
in.
Yes, that's what I am talking about above.
Can this concept be made a bit less hand-wavy?
There is a precise way to define a coherent 2-group, a 2-bundle with a
coherent 2-group as structure 2-group as well as what connection and curving
on such a 2-bundle would be. There is also a known way how to get a
nonabelian gerbe from a strict 2-bundle.
What is hard is to fill the definition of a 2-connection for a coherent
2-group with life by giving concrete realizations in terms of local data. One
possible approach I am discussing here:
http://golem.ph.utexas.edu/string/archives/000542.html. There is also a
complementary approach using connections on path space which is more
directly related with Hofman's ideas
I thought I'd have lots of time to work this out more completely. But now
that Adelaide is also working on this... :-)
If these 2-bundle ideas help you to work out the construction of a weakened
nonabelian bundle gerbe I don't know. But since all this is really just
different ways to look at the same thing it should really be related.
DM Roberts
2005-04-20 14:36:57 UTC
Further to the above:
All the work I've seen so far on 2-bundles with connection has been in
terms of local data - Lie(G) 1-forms etc. Could we instead define a
connection as in the 1-bundle case as a sort of "bundle of subspaces"?
(heuristic definition only) That is, a splitting into horizontal and
vertical parts T = H \oplus V. We have a concept of 2-vector space from
HDA VI (math.QA/0307263) and there is the concept of a sub-2-vector
space (I think one could work backward from the direct sum of two
2-vector spaces to get appropriate definitions, or else in terms of
"images" and "kernels" of "linear transformations").
Then we have the more geometric image as we do for principal bundles,
the only trouble would be to show equivalence of the two definitions.
Ah, I see where the nonabelian surface parallel transport rears its
ugly head - how can we generalise the proof as per bundles without a
decent definition of this?
We could work from a position of physical insight perhaps. But as the
"physics" of this (H-flux in string theory, say) is not yet complete
(the fault of the mathematicians, physicists or mathematical
physicists? Which came first, the chicken or the egg?) I don't know if
that will help.
David
Urs Schreiber
2005-04-20 15:45:13 UTC
Post by DM Roberts
All the work I've seen so far on 2-bundles with connection
Is there any other work on that than hep-th/0412325 ?
Post by DM Roberts
has been in terms of local data - Lie(G) 1-forms etc.
More precisely, in hep-th/0412325 this is given in terms of a local holonomy
2-functor which is then decoded to yield local p-form data.
Post by DM Roberts
Could we instead define a
connection as in the 1-bundle case as a sort of "bundle of subspaces"?
(heuristic definition only) That is, a splitting into horizontal and
vertical parts T = H \oplus V. We have a concept of 2-vector space from
HDA VI (math.QA/0307263) and there is the concept of a sub-2-vector
space (I think one could work backward from the direct sum of two
2-vector spaces to get apropriate definitions, or else in terms of
"images" and "kernels" of "linear transformations").
I expect that this should work and should be equivalent to the existing
definition. But as far as I know so far nobody has tried to spell that out
in detail.
Post by DM Roberts
Ah, I see where the nonabelian surface parallel transport rears its
ugly head - how can we generalise the proof as per bundles without a
decent definition of this?
There is in fact a decent definition of nonabelian surface parallel
transport in strict G-2-bundles. This is unfortunately only hinted at in
hep-th/0412325, but I have reported on more details here:
http://golem.ph.utexas.edu/string/archives/000503.html
and, upon request, have clarified the context here:
http://golem.ph.utexas.edu/string/archives/000547.html#c002194 .
A more detailed exposition is underway:
http://www-stud.uni-essen.de/~sb0264/2NCG.pdf .
What I haven't shown yet, though, is independence of this construction on the
choice of cover. I expect the proof to be completely analogous to the well
known abelian case.
Post by DM Roberts
We could work from a position of physical insight perhaps. But as the
"physics" of this (H-flux in string theory, say)
H-flux gives rise to _abelian_ gerbes coupled to F-strings. Holonomy for
abelian 2-gerbes is well understood, parallel transport has recently been
studied by Picken. This is a special case of the nonabelian surface
transport mentioned above.
The challenge is to identify the physics that gives rise to _non_abelian
gerbes/2-bundles. The ordinary F-string in 10D does not couple to any
nonabelian 2-form, so it must be something else.
Several people expect this to be related to theories on stacks of N
M5-branes, where we have end-strings of open membranes on the 5-branes. For
N>1 these should sort of carry Chan-Paton-like degrees of freedom and couple
to nonabelian 2-forms which are known to be part of the spectrum on these
branes.
Edward Witten called the effective field theories for these branes once
tentatively "nonabelian gerbe theories":
http://www.maths.ox.ac.uk/notices/events/special/tgqfts/photos/witten/71.bmp
.
But I was being told that he has given up on making this precise. (?)
Hisham Sati is still arguing for this, e.g. in
I. Kriz and H. Sati
M-Theory, Type IIA Superstrings and Elliptic Cohomology
hep-th/0404013
H. Sati
M-theory and characteristic classes
hep-th/0501245
The most direct argument that this must be true that I know of is that in
section 5 of
P. Aschieri & B. Jurco,
Gerbes, M5-Brane Anomalies and E_8 Gauge Theory
hep-th/0409200 .
Recall that they argue as follows:
The M2 brane couples to the SUGRA 3-form. There seems to be no choice but
that this coupling is globally described by an abelian 2-gerbe/3-bundle,
just like in 1-dimension lower the coupling of the string to the KR 2-form
is globally described by an abelian 1-gerbe/2-bundle.
For the string we can derive from the fact alone that its bulk couples to an
abelian 1-gerbe the fact that its boundary couples to a nonabelian
0-gerbe/1-bundle, namely that living on the D-brane that the string ends on.
Schematically this works by noting that every abelian 1-gerbe G can be
written as a trivial gerbe G0 plus a lifting gerbe D(B) of a twisted
nonabelian 0-gerbe/1-bundle:
G = D(B) + G0.
B is the nonabelian 0-gerbe/bundle on the D-brane.
A similar relation holds for abelian 2-gerbes. They can be realized as a
lifting 2-gerbe of a twisted nonabelian 1-gerbe plus something else.
By analogy it is to be expected that this possibly twisted nonabelian
1-gerbe is that living on the 5-branes that the membrane ends on.
But what is interesting is that one can say more: The abelian 2-gerbe
coupled to the M2 brane is in fact a Chern-Simons 2-gerbe classified by the
Pontryagin class. These 2-gerbes are known to be the lifting 2-gerbes for
lifting an (Omega G)-gerbe to a \hat(Omega G)-gerbe, where Omega G is the
loop group of G and \hat(Omega G) its Kac-Moody central extension.
Incidentally, the \PG-2-bundles that we find in
Baez, Crans, Schreiber & Stevenson,
From Loop Groups to 2-Groups,
math.QA/0504123
to be related to the group String(n) are known (not rigorously proven yet,
though) to be the same as these \hat(Omega G)-1-gerbes.
Combined with the argument by Aschieri&Jurco this would say that what lives
on a stack of M5-branes are these \PG-2-bundles. Since they also seem to be
related to elliptic cohomology (due to the appearance of String(n), for
one), this gives precisely the picture that Hisham Sati is arguing for in
the above papers.
But the details here still need to be written down.
Post by DM Roberts
(the fault of the mathematicians, physicists or mathematical
physicists? Which came first, the chicken or the egg?) I don't know if
that will help.
Understanding the physical setups that give rise to nonabelian
gerbes/2-bundles would certainly help the general understanding.
https://number.subwiki.org/w/index.php?title=Poulet_number&oldid=617 | # Poulet number
## Definition
A Poulet number or Sarrus number is an odd composite number $n$ such that
$$2^{n-1} \equiv 1 \pmod{n}.$$
In other words, $n$ divides $2^{n-1} - 1$. Equivalently, $n$ is a Fermat pseudoprime to base 2.
## Occurrence
### Initial examples
The first few Poulet numbers are 341, 561, 645, 1105, 1387, 1729, 1905, ....
These include, for instance, 341 (the smallest Poulet number) and the Carmichael numbers 561 and 1729.
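A short computational sketch (added here, not from the wiki) that recovers this list directly from the definition:

def is_poulet(n):
    # odd composite n with 2^(n-1) congruent to 1 mod n
    if n % 2 == 0 or pow(2, n - 1, n) != 1:
        return False
    # reject primes by trial division (n is odd, so odd divisors suffice)
    return any(n % d == 0 for d in range(3, int(n**0.5) + 1, 2))

print([n for n in range(3, 2000, 2) if is_poulet(n)])
# [341, 561, 645, 1105, 1387, 1729, 1905]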
### Infinitude
Further information: Infinitude of Poulet numbers
There are infinitely many Poulet numbers. This can be proved in many ways. For instance, if $n$ is prime or a Poulet number, then the Mersenne number $2^n - 1$ is prime or a Poulet number. This shows that if we find one Poulet number, we can iterate the operation of taking the Mersenne number and obtain infinitely many Poulet numbers.
## Facts
| Statement | Kind of numbers it says are Poulet numbers | Proof idea |
| --- | --- | --- |
| Mersenne number for prime or Poulet implies prime or Poulet | $2^n - 1$ where $n$ itself is a Poulet number, or where $n$ is prime and $2^n - 1$ isn't | Use that $2^a - 1$ divides $2^b - 1$ whenever $a$ divides $b$. |
| Composite Fermat number implies Poulet number | $F_k = 2^{2^k} + 1$ where $F_k$ is not prime | Use that $F_k$ divides $2^{2^{k+1}} - 1$, and that $2^{k+1}$ divides $F_k - 1 = 2^{2^k}$ because $k + 1 \le 2^k$. |
| Infinitude of Poulet numbers | (infinitely many) | Use the Mersenne number iteratively, after having found at least one Poulet number. |
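Spelling out the first row's argument (a standard sketch, added here for convenience): if $n$ is prime, Fermat's little theorem gives $n \mid 2^n - 2$; if $n$ is a Poulet number, then $n \mid 2^{n-1} - 1$, which divides $2^n - 2$. Either way $n$ divides $M_n - 1$, where $M_n = 2^n - 1$, so
$$M_n = 2^n - 1 \;\big|\; 2^{M_n - 1} - 1,$$
using that $2^a - 1 \mid 2^b - 1$ whenever $a \mid b$. Since $M_n$ is odd, and composite whenever $n$ is, starting from a Poulet number $n$ the number $M_n$ is again a Poulet number.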
http://www.gamedev.net/page/resources/_/technical/math-and-physics/point-in-a-poly-r421 | Point in a poly
By Ken McElvain | Published Jul 16 1999 11:58 AM in Math and Physics
Robert's suggestion is a good one. The sum of the angles about the test point is known as the winding number. For non-intersecting polygons, if the winding number is non-zero then the test point is inside the polygon. It works just fine for convex and concave polygons. Self-intersecting polygons give reasonable results too. A figure 8 will give a negative winding number for a test point in one of the loops and a positive winding number for the other loop, with all points outside having a winding number of 0. Other advantages of the winding number calculation are that it does not suffer from the confusion of the infinite-ray algorithm when the ray crosses a vertex or is colinear with an edge.
Here is my version of a point-in-poly routine using a quadrant-granularity winding number. The basic idea is to total the angle changes for a wiper arm with its origin at the test point and whose end follows the polygon points. If the total angle change is 0 then you are outside, otherwise you are in some sense inside. It is not necessary to compute exact angles; resolution to a quadrant is sufficient, greatly simplifying the calculations.
I pulled this out of some other code and hopefully didn't break it in doing so. There is some ambiguity in this version as to whether a point lying on the polygon is inside or out. This can be fairly easily detected though, so you can do what you want in that case.
-----------------------------------------------------------------
/*
 * Quadrant numbering around the test point:
 *
 *      1 | 0
 *      -----
 *      2 | 3
 */
typedef struct {
    int x, y;
} point;

static int whichquad(point pt, point orig);

int
pointinpoly(point pt,       /* point to check */
            int cnt,        /* number of points in poly */
            point *polypts) /* array of points; last edge runs from
                             * polypts[cnt-1] to polypts[0] */
{
    point thispt, lastpt;
    int a, b, i;
    int wind;               /* current winding number */

    wind = 0;
    lastpt = polypts[cnt - 1];
    for (i = 0; i < cnt; i++) {
        thispt = polypts[i];
        /*
         * use mod 4 comparisons to see if we have
         * moved to an adjacent quadrant or jumped
         * diagonally across
         */
        a = whichquad(thispt, pt);
        b = whichquad(lastpt, pt);
        switch ((a - b + 4) % 4) {
        case 0:             /* same quadrant: no winding change */
            break;
        case 1:             /* one quadrant counterclockwise */
            wind++;
            break;
        case 3:             /* one quadrant clockwise */
            wind--;
            break;
        case 2:
            /*
             * upper left to lower right, or
             * upper right to lower left. Determine
             * direction of winding by which side of
             * the edge the test point falls on.
             */
            a = lastpt.y - thispt.y;
            a *= (pt.x - lastpt.x);
            b = lastpt.x - thispt.x;
            a += lastpt.y * b;
            b *= pt.y;
            if (a > b) wind += 2;   /* edge passes counterclockwise of pt */
            else wind -= 2;
            break;
        }
        lastpt = thispt;
    }
    return wind;    /* non zero means point in poly */
}

/*
 * Figure out which quadrant pt is in with respect to orig
 */
static int
whichquad(point pt, point orig)
{
    int quad;

    if (pt.x < orig.x) {
        if (pt.y < orig.y) quad = 2;
        else quad = 1;
    } else {
        if (pt.y < orig.y) quad = 3;
        else quad = 0;
    }
    return quad;
}
Ken McElvain
decwrl!sci!kenm
https://jdhao.github.io/2020/11/22/python_list_of_empty_list_pitfall/ | TL;DR: do not use list multiplication to initialize a list of empty lists, or you may end up wasting hours debugging your program.
I wrote some Python code for my project and found that the result wasn't correct. So I spent quite some time debugging the whole working process of the program, and sadly found that the culprit was a list of empty lists that had been wrongly initialized.
What is the resulting list, if we do the following in Python?
x = [[]] * 3
x[0].append(1.0)
I expect x now becomes [[1.0], [], []]. Instead, it becomes [[1.0], [1.0], [1.0]]. This is because when we use multiplication to create list x, we actually create 3 references to one and the same empty list. Lists are mutable objects in Python; when we append values to a list, we haven't changed its identity. As a result, changing any one of the 3 sub-lists changes the others, since they all refer to the same list.
Note that things are different when we create a list of identical immutable objects using multiplication. For example, if we create a list of the same int and then change one of the elements, the other elements will not change: we cannot change the value of an immutable type, so assigning a new int to a list element simply makes that element point to another address.
In [18]: a = [1] * 2
In [19]: a
Out[19]: [1, 1]
In [20]: print(id(a[0]), id(a[1]))
4438635584 4438635584
In [21]: a[0] = 2
In [22]: print(id(a[0]), id(a[1]))
4438635616 4438635584
To create a list 3 empty lists, we may use list comprehension instead:
x = [[] for _ in range(3)]
In each iteration, a new empty list is created, so the 3 sub-lists are independent of each other. Changing one won't affect the others.
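A quick way to convince yourself of the difference (a small check added here):

shared = [[]] * 3
distinct = [[] for _ in range(3)]

print(len({id(sub) for sub in shared}))    # 1 -- three references to one list
print(len({id(sub) for sub in distinct}))  # 3 -- three independent lists

distinct[0].append(1.0)
print(distinct)  # [[1.0], [], []]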
http://tex.stackexchange.com/questions/96276/vspace-or-line-space-in-text-of-wrap-fig/96610 | # vspace or line space in text of wrap fig
I am using wrapfig and want to insert a line break within the text. However, \vspace, \linebreak, and \break do not lead to any change.
Here is a sample of my LaTeX code
\begin{figwindow}[0,r,%
{\includegraphics[height=2in]{PascoSynthesizer.eps}},%
{\label{fig:label} Caption
}]
\textbf{Electronic Output}:\\
A sum of nine harmonics of sine waves with a\\ fundamental of 440 Hz plus a second output of the fundamental. \\
\vspace{0.3in}
\textbf{Controls:}\\
\end{figwindow}
Please always post a complete (small) document showing the packages used; the wrapfig package does not define a figwindow environment, so your title and tag do not match your example. – David Carlisle Feb 1 at 9:54
picinpar is quite an old package with several restrictions; a newer package for this kind of insert is wrapfig. However, you can force a space by adding a blank line with a strut (or a rule of a specified height if you need finer control).
\documentclass[a4paper]{article}
\usepackage{picinpar}
\usepackage[demo]{graphicx}
\begin{document}
\begin{figwindow}[0,r,%
{\includegraphics[height=2in]{PascoSynthesizer.eps}},%
{\label{fig:label} Caption
}]
\noindent
\textbf{Electronic Output}:\\
A sum of nine harmonics of sine waves with a\\
fundamental of 440 Hz plus a second output of the fundamental.\\
\strut\\
\textbf{Controls:}
\end{figwindow}
\end{document}
Success: the line \strut\\ worked to produce a line space. The inclusion of [demo] caused problems and was omitted. GREAT GRATITUDE!! – Leon Gunther Feb 4 at 2:25
demo makes includegraphics just use a black square instead of the picture; you should use it when posting examples because we have not got the file PascoSynthesizer.eps so can not run the code otherwise. Also in questions please post complete documents as in this answer, not fragments that can not be run. – David Carlisle Feb 4 at 8:20
Thanks for the explanation of demo! – Leon Gunther Feb 7 at 2:23
I guess you have to check your preamble. If I use:
\documentclass[a4paper]{article}
\usepackage{picinpar}
\usepackage{graphicx}
and then your code, everything runs well, as you can see from the screenshot below (or isn't this the desired output?):
Thanks for your response. I do have what you show above. What I need is to add a line space between '... fundamental.' and 'Controls' – Leon Gunther Feb 1 at 13:59
https://textrecipes.tidymodels.org/reference/step_clean_levels.html | step_clean_levels creates a specification of a recipe step that will clean nominal data (character or factor) so the levels consist only of letters, numbers, and the underscore.
## Usage
step_clean_levels(
recipe,
...,
role = NA,
trained = FALSE,
clean = NULL,
skip = FALSE,
id = rand_id("clean_levels")
)
## Arguments
recipe
A recipe object. The step will be added to the sequence of operations for this recipe.
...
One or more selector functions to choose which variables are affected by the step. See recipes::selections() for more details.
role
Not used by this step since no new variables are created.
trained
A logical to indicate if the quantities for preprocessing have been estimated.
clean
A named character vector to clean and recode categorical levels. This is NULL until computed by recipes::prep.recipe(). Note that if the original variable is a character vector, it will be converted to a factor.
skip
A logical. Should the step be skipped when the recipe is baked by recipes::bake.recipe()? While all operations are baked when recipes::prep.recipe() is run, some operations may not be able to be conducted on new data (e.g. processing the outcome variable(s)). Care should be taken when using skip = FALSE.
id
A character string that is unique to this step to identify it.
## Value
An updated version of recipe with the new step added to the sequence of existing steps (if any).
## Details
The new levels are cleaned and then reset with dplyr::recode_factor(). When data to be processed contains novel levels (i.e., not contained in the training set), they are converted to missing.
## Tidying
When you tidy() this step, a tibble with columns terms (the selectors or variables selected), original (the original levels) and value (the cleaned levels) is returned.
## See also
step_clean_names(), recipes::step_factor2string(), recipes::step_string2factor(), recipes::step_regex(), recipes::step_unknown(), recipes::step_novel(), recipes::step_other()
Other Steps for Text Cleaning: step_clean_names()
## Examples
library(recipes)
library(modeldata)
data(Smithsonian)
smith_tr <- Smithsonian[1:15, ]
smith_te <- Smithsonian[16:20, ]
rec <- recipe(~., data = smith_tr)
if (requireNamespace("janitor", quietly = TRUE)) {
rec <- rec %>%
step_clean_levels(name)
rec <- prep(rec, training = smith_tr)
cleaned <- bake(rec, smith_tr)
tidy(rec, number = 1)
# novel levels are replaced with missing
bake(rec, smith_te)
}
#> # A tibble: 5 × 3
#> name latitude longitude
#> <fct> <dbl> <dbl>
#> 1 NA 38.9 -77.0
#> 2 NA 38.9 -77.0
#> 3 NA 38.9 -77.0
#> 4 NA 38.9 -77.0
#> 5 NA 38.9 -77.1
https://codereview.stackexchange.com/questions/155306/comparing-image-resolutions-to-ratios | Comparing image resolutions to ratios
I made this Python 3.6 program that generates a list of images that fail to match a given aspect ratio. Each image has a resolution, and with a bit of math the program calculates the expected width from the ratio and compares it with the actual width. If the calculated width falls outside the range (width-1, width+1), the image is output and considered "not respecting the ratio".
Important: The program uses the PIL library to get the resolutions. I used pip --user and it seems to work perfectly.
For example, an image with the resolution of 1920x1080 is in a directory. The program, imageres.py, takes the resolution, and uses this formula:
$$\text{cal_width} = \frac{\text{width_ratio} * \text{image_height}}{\text{height_ratio}}$$
If the ratio (default, actually) is 16:9, the formula turns out to be this:
$$\text{cal_width} = \frac{16 * 1080}{9}$$
$$\text{cal_width} = 1920$$
The program does the above and compares the final result to the image's actual resolution. Because the two widths are equal, the image is not output, and the program moves on to the next one.
Do you have any suggestions or methods I can use to make my code more robust? I noticed PyLint kept telling me on structure, so if anyone has any suggestion to make the code more readable would be awesome.
My biggest concern, though, is the mathematics and floating-point numbers behind the program. If you understand what I'm trying to do, I would like to hear your opinions! I have some reservations about this, as it seems more of a hack than an actual solution.
imageres.py
(I hope I was able to keep the PyLint stuff on.)
"""
imgres.py
this is a simple python image resolution ratio checker. quite loaded
sentence, but basically it gets an image's width and height and calculates
the width using the hardcoded default ratio of 16:9, which is what some
monitors have (like 1920x1080 is a equal ratio to 16:9)
so far it only spits out the images that are not equal to the width,
however it is quite possible that calculations are a pixel above/below the
image width, due to precison error.
^- not exactly solved, but it checks by an offset of 1
requires the PIL (Python Image Library) locally via pip or else where
"""
from os import listdir
from os.path import isfile, join
from argparse import ArgumentParser
from math import trunc
import sys
from PIL import Image
# the parser so we can present a user-friendly interface
PARSER = ArgumentParser(description='Compare image resolutions to ratio.')
PARSER.add_argument('DIRECTORY', metavar='D', type=str, help="The folder where the images reside")
PARSER.add_argument('--wr', default=16, metavar='W', type=int, help="The width ratio. DEFAULT: 16")
PARSER.add_argument('--hr', default=9, metavar='H', type=int, help="The height ratio. DEFAULT: 9")
ARGS = PARSER.parse_args()
# debug; check that the args work
# print("%s %d %d" % (ARGS.DIRECTORY, ARGS.wr, ARGS.hr))
# validate directory is real and accessible
# found the error code list from here:
# http://www-numi.fnal.gov/offline_software/srt_public_context/WebDocs/Errors/unix_system_errors.html
try:
    ONLYFILES = [
        f for f in listdir(ARGS.DIRECTORY)
        if isfile(join(ARGS.DIRECTORY, f)) and f.endswith(('jpg', 'png'))
    ]
    # get only the files from the directory. does not traverse into
    # sub-directories. uses a tuple to get only jpg and png extensioned files
    for img_filepath in ONLYFILES:
        img = Image.open(join(ARGS.DIRECTORY, img_filepath))
        # debug; check it's possible to get the image resolution
        # print(img.size)
        img_width, img_height = img.size
        # calculate the width of the image with the provided ratio
        calculated_width = trunc((ARGS.wr * img_height) / float(ARGS.hr))
        # check if the calculated_width falls between the range with +- offset
        # if yes, output the filename with resolution and calculated_width
        if not calculated_width in range(img_width-1, img_width+1):
            print("%30s -- (%d,%d) ~ C_W %f" %
                  (img_filepath, img_width, img_height, calculated_width)
                  )
except FileNotFoundError as inst:
    print("error: the specified argument does not exist.\n", inst)
    sys.exit(2)
except NotADirectoryError as inst:
    print("error: the specified argument is not a directory.\n", inst)
    sys.exit(20)
Here are some notes about the code you've posted:
• I think pillow project is a much more active player than PIL, consider switching. There is also a highly-optimized Pillow-SIMD that can bring dramatically better performance (some benchmarks).
• variable naming - PARSER, ARGS, ONLYFILES are not constants (well, there are no constants in Python, it's just that it is recommended to name things that don't change in an upper case, PEP8 about constants) and should be defined in a lower case
• you don't need to define the ONLYFILES list and then iterate over it; you can iterate over the matching files directly with glob.iglob() (note that Python's glob does not do brace expansion, so match each extension separately):

import glob

for ext in ("jpg", "png"):
    for filename in glob.iglob(args.directory + "/*." + ext):
        ...
• when opening the image files, use the with context manager:
with Image.open(filename) as img:
• when you check the calculated_width to be in a specified range, use comparison operators instead of creating an extra "range" (please check the following for off-by-one errors):
if not(img_width - 1 <= calculated_width <= img_width):
• you can have a custom argparse directory type
• I don't see much sense in keeping a filename inside a "docstring" - the filename can change and you would easily forget to update it in the docstring
• the "requires the PIL (Python Image Library) locally via pip or else where" should be better handled by requirements.txt file accompanied by a "README" file (typically README.md or README.rst) with installation, usage and license instructions
• it might probably be a good idea to configure the --help of your CLI program, argparse would by default generate one, but adding more to the usage instructions and potential problems may be helpful
• the commented "debug" section of the code, should be replaced by a proper logging
• organize imports as per PEP8 recommendations
I am focusing on your question about the mathematics. In Python 3.x, it is useless to apply float() to ARGS.hr, since / no longer performs floor division (// has been introduced for that) but true division.
I suppose your main concern is about trunc(). I don't know what is better. I would have used round() instead, because I expect image-conversion software to handle the problem that way when performing a resize. However, I did not check, and you can bet different programs apply different policies... Did you try to shrink a 1920x1080 pic by, say, 30% to see what rule was used?
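To see where the two choices diverge (a small sketch added here, not from either answer):

from math import trunc

# a height where the exact 16:9 width is not an integer
h = 1081
exact = 16 * h / 9
print(exact)         # 1921.777...
print(trunc(exact))  # 1921
print(round(exact))  # 1922 -- trunc and round disagree by one pixel

So whichever rule the resizing software used determines whether the +/-1 window still catches the image.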
http://randomizr.declaredesign.org/reference/block_and_cluster_ra.html | A random assignment procedure in which units are assigned as clusters and clusters are nested within blocks.
block_and_cluster_ra(block_var, clust_var, prob = NULL, prob_each = NULL,
m = NULL, block_m = NULL, block_m_each = NULL, block_prob = NULL,
block_prob_each = NULL, num_arms = NULL, condition_names = NULL,
check_inputs = TRUE)
Value
A vector of length N that indicates the treatment condition of each unit.
Examples
clust_var <- rep(letters, times=1:26)
block_var <- rep(NA, length(clust_var))
block_var[clust_var %in% letters[1:5]] <- "block_1"
block_var[clust_var %in% letters[6:10]] <- "block_2"
block_var[clust_var %in% letters[11:15]] <- "block_3"
block_var[clust_var %in% letters[16:20]] <- "block_4"
block_var[clust_var %in% letters[21:26]] <- "block_5"
table(block_var, clust_var)
#> clust_var
#> block_var a b c d e f g h i j k l m n o p q r s t u v w
#> block_1 1 2 3 4 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
#> block_2 0 0 0 0 0 6 7 8 9 10 0 0 0 0 0 0 0 0 0 0 0 0 0
#> block_3 0 0 0 0 0 0 0 0 0 0 11 12 13 14 15 0 0 0 0 0 0 0 0
#> block_4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 16 17 18 19 20 0 0 0
#> block_5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 21 22 23
#> clust_var
#> block_var x y z
#> block_1 0 0 0
#> block_2 0 0 0
#> block_3 0 0 0
#> block_4 0 0 0
#> block_5 24 25 26
Z <- block_and_cluster_ra(block_var = block_var,
clust_var = clust_var)
table(Z, block_var)
#> block_var
#> Z block_1 block_2 block_3 block_4 block_5
#> 0 8 16 24 37 72
#> 1 7 24 41 53 69
table(Z, clust_var)
#> clust_var
#> Z a b c d e f g h i j k l m n o p q r s t u v w x y
#> 0 0 0 3 0 5 6 0 0 0 10 11 0 13 0 0 0 17 0 0 20 0 0 23 24 25
#> 1 1 2 0 4 0 0 7 8 9 0 0 12 0 14 15 16 0 18 19 0 21 22 0 0 0
#> clust_var
#> Z z
#> 0 0
#> 1 26
Z <- block_and_cluster_ra(block_var = block_var,
clust_var = clust_var,
num_arms = 3)
table(Z, block_var)
#> block_var
#> Z block_1 block_2 block_3 block_4 block_5
#> T1 4 19 26 19 43
#> T2 9 14 26 55 49
#> T3 2 7 13 16 49
table(Z, clust_var)
#> clust_var
#> Z a b c d e f g h i j k l m n o p q r s t u v w x y
#> T1 0 0 0 4 0 0 0 0 9 10 0 12 0 14 0 0 0 0 19 0 21 22 0 0 0
#> T2 1 0 3 0 5 6 0 8 0 0 11 0 0 0 15 0 17 18 0 20 0 0 23 0 0
#> T3 0 2 0 0 0 0 7 0 0 0 0 0 13 0 0 16 0 0 0 0 0 0 0 24 25
#> clust_var
#> Z z
#> T1 0
#> T2 26
#> T3 0
Z <- block_and_cluster_ra(block_var = block_var,
clust_var = clust_var,
prob_each = c(.2, .5, .3))
block_m_each <- rbind(c(2, 3),
c(1, 4),
c(3, 2),
c(2, 3),
c(5, 1))
Z <- block_and_cluster_ra(block_var = block_var,
clust_var = clust_var,
block_m_each = block_m_each)
table(Z, block_var)
#> block_var
#> Z block_1 block_2 block_3 block_4 block_5
#> 0 5 9 40 37 118
#> 1 10 31 25 53 23
table(Z, clust_var)
#> clust_var
#> Z a b c d e f g h i j k l m n o p q r s t u v w x y
#> 0 1 0 0 4 0 0 0 0 9 0 0 12 13 0 15 0 0 18 19 0 21 22 0 24 25
#> 1 0 2 3 0 5 6 7 8 0 10 11 0 0 14 0 16 17 0 0 20 0 0 23 0 0
#> clust_var
#> Z z
#> 0 26
#> 1 0
https://www.cs.colostate.edu/AlphaZ/wiki/doku.php?id=check_program&image=latex%3Ae8ddb617bf4292a0c819ae3efe5f2b40.png&ns=latex&tab_details=view&do=media&tab_files=files | # AlphaZ
## Check Program
This tutorial explains the 'Check Program' utility, which checks the correctness of an Alphabets program through static analysis. The rules 'Check Program' uses are described in "Hierarchical Static Analysis of Structured Systems of Affine Recurrence Equations" by Dupont de Dinechin and Robert [ASAP 1996].
#### Usage
prog = ReadAlphabets("../../alphabets//matrix_product.ab");
CheckProgram(prog);
The main analyses performed by Check Program are described below. They exploit the algorithms for constructing domains and context domains of Alphabets expressions.
### Completeness Analysis
First, Check Program ensures that each local or output variable has an equation, and that the variable is defined for all points in its declared domain. In other words, each equation must completely define the variable on its left-hand side.
affine OneDTwoD {N,TMAX | N>0 && TMAX>0}
given
float A {i| 0<=i<=N};
returns
float B {t, i| 0<=i<=N && t==TMAX-1};
using
float Local {t,i | 0<=i<=N && 0<=t<TMAX};
through
Local[t,i] =
case
{|t>0 && i==0}: Local[t-1,i];
{|t>0 && i==N}: Local[t-1,i];
{|t>0 && 0<i<N}: 0.5 * (Local[t, i-1] + Local[t-1,i+1]);
esac;
B[t,i] = Local[t,i];
.
In the above example, variable 'Local' is not defined over {t==0 && i >= 0}. Check Program reports this error as:
ERROR:: Variable Local is not defined over the domain : {t,i|t== 0 && N-i>= 0 && TMAX-1>= 0 && N-1>= 0 && i>= 0}
### Uniqueness Analysis
The second major property that Check Program tests is that at any point in the domain of a variable, there is a unique definition. This test manifests itself in many different ways, as described below.
#### Multiple Definitions for a Variable and Unused Variables
AlphaZ ensures that there is exactly one equation for each local and output variable. It also checks if any of the input or local variable is not used in the system.
affine OneDTwoD5 {N,TMAX | N>0 && TMAX>0}
given
float A {i| 0<=i<=N};
returns
float B {t, i| 0<=i<=N && t==TMAX-1};
using
float Local {t,i | 0<=i<=N && 0<=t<TMAX};
through
Local[t,i] =
case
{|t==0}: 0;
{|t>0 && i==0}: Local[t-1,i];
{|t>0 && i==N}: Local[t-1,i];
{|t>0 && 0<i<N}: 0.5 * (Local[t, i-1] + Local[t-1,i+1]);
esac;
B[t,i] = Local[t,i];
B[t,i] = Local[t,i];
.
In the above example, B is defined in multiple equations and variable 'A' is not used at all. Check program prints the following messages.
ERROR:: Variable B has multiple definitions (equations)
WARNING:: Variable A not used in any equation
#### Validity of the Case Statement
This check ensures that multiple case subexpressions do not define the same point in the domain of a variable.
affine OneDTwoD2 {N,TMAX | N>0 && TMAX>0}
given
float A {i| 0<=i<=N};
returns
float B {t, i| 0<=i<=N && t==TMAX-1};
using
float Local {t,i | 0<=i<=N && 0<=t<TMAX};
through
Local[t,i] =
case
{|t==0}: A[i];
{|t>0 && i>=0}: Local[t-1,i];
{|t>0 && i==N}: Local[t-1,i];
{|t>0 && 0<i<N}: 0.5 * (Local[t, i-1] + Local[t-1,i+1]);
esac;
B[t,i] = Local[t,i];
.
In the above example, variable 'Local' is defined for {i > 0} in multiple case subexpressions. Check Program reports these errors as:
ERROR:: in the case statement : ({t,i|t-1>= 0} , {t,i|-N+i== 0 && t-1>= 0}) domains of subexpressions overlap on : {t,i|-N+i== 0 && t-1>= 0}
ERROR:: in the case statement : ({t,i|t-1>= 0} , {t,i|i-1>= 0 && t-1>= 0 && N-i-1>= 0}) domains of subexpressions overlap on : {t,i|i-1>= 0 && t-1>= 0 && N-i-1>= 0}
#### Empty Subexpressions
Expressions with empty domains can be the source of errors in the program.
affine OneDTwoD3 {N,TMAX | N>0 && TMAX>0}
given
float A {i| 0<=i<=N};
returns
float B {t, i| 0<=i<=N && t==TMAX-1};
using
float Local {t,i | 0<=i<=N && 0<=t<TMAX};
through
Local =
case
{t,i|t==0}: (t,i->N+1)@ A;
{t,i|t>0 && i==0}: (t,i -> t-1,i)@Local;
{t,i|t>0 && i==N}: (t,i -> t-1,i)@Local;
{t,i|t>0 && 0<i<N}: 0.5 * ((t,i -> t, i-1)@Local + (t,i -> t-1,i+1)@Local) + (t,i->N+1)@ A;
esac;
B[t,i] = Local[t,i];
.
In the above example, the domain of variable 'A' is {i| 0<=i<=N}, but in the equation for variable 'Local', the cases {t,i|t==0} and {t,i|t>0 && 0<i<N} access A[N+1], which is not in the domain of 'A', resulting in expressions with empty domains in both cases.
WARNING:: This expression has empty domain : (t,i->N+1)@A
If a local or output variable is not defined for some values of the parameters, it is flagged with a warning.
affine OneDTwoD4 {N,TMAX | TMAX>0}
given
float A {i| 0<=i<=N};
returns
float B {t, i| 0<=i<=N && t==TMAX-1};
using
float Local {t,i | 0<=i<=N && 0<=t<TMAX};
through
Local[t,i] =
case
{|t==0}: A[i];
{|t>0 && i==0}: Local[t-1,i];
{|t>0 && i==N}: Local[t-1,i];
{|t>0 && 0<i<N}: 0.5 * (Local[t, i-1] + Local[t-1,i+1]);
esac;
B[t,i] = Local[t,i];
.
In the above example, parameter 'N' has no restrictions, but none of the variables are declared for negative values of 'N'. Check Program reports these warnings as:
WARNING:: Variable Local is not defined over the domain : {|TMAX-1>= 0 && -N-1>= 0}
WARNING:: Variable B is not defined over the domain : {|TMAX-1>= 0 && -N-1>= 0}
Not only declarations, but equations are also checked to ensure that they are defined for all values of parameters. | 2019-10-17 20:33:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3368457555770874, "perplexity": 13463.025120636095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986676227.57/warc/CC-MAIN-20191017200101-20191017223601-00475.warc.gz"} |
https://www.physicsforums.com/threads/powerseries-of-a-natural-logarithm.539069/ | # Powerseries of a natural logarithm
1. Oct 11, 2011
### center o bass
1. The problem statement, all variables and given/known data
Show that
$$\frac{1}{x} \ln \frac{1+x}{1-x} = \sum_{n=0}^\infty \frac{2x^{2n}}{2n+1}, \qquad |x| < 1.$$
2. The attempt at a solution
I tried to use the relation
$$\frac{1}{1+x} = \frac{d}{dx}\ln (1+x)$$
and expand as a geometric series, but this did not lead anywhere since I then ended up with an alternating series.
Anyone got any ideas of where to start?
2. Oct 11, 2011
### Hootenanny
Staff Emeritus
HINT:
$$\log\left(\frac{a}{b}\right) = \log a - \log b$$
3. Oct 11, 2011
### center o bass
Yes, so I could subtract the two resulting series... and I see now that it will actually lead through. Of course I knew about the property, but I did not see how it would get rid of the 'alternation'. However, I do see that now because of the cancellation of terms.
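Explicitly, the cancellation works out as follows (a sketch appended for completeness): $$\ln(1+x) = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}x^n, \qquad \ln(1-x) = -\sum_{n=1}^\infty \frac{x^n}{n},$$ so subtracting, $$\ln\frac{1+x}{1-x} = \sum_{n=1}^\infty \frac{(-1)^{n+1}+1}{n}\,x^n = 2\sum_{n=0}^\infty \frac{x^{2n+1}}{2n+1},$$ and dividing by $x$ gives the stated series.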
Thank you! | 2017-11-17 23:59:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.668941855430603, "perplexity": 741.2064860615446}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804019.50/warc/CC-MAIN-20171117223659-20171118003659-00489.warc.gz"} |
https://collegephysicsanswers.com/openstax-solutions/calculate-displacement-and-velocity-times-0500-b-100-c-150-and-d-200-s-ball | Question
Calculate the displacement and velocity at times of (a) 0.500, (b) 1.00, (c) 1.50, and (d) 2.00 s for a ball thrown straight up with an initial velocity of 15.0 m/s. Take the point of release to be $y_o = 0$.
a) $x_a = 6.28 \textrm{ m}$, $v_a = 10.1 \textrm{ m/s}$
b) $x_b = 10.1 \textrm{ m}$, $v_b = 5.20 \textrm{ m/s}$
c) $x_c = 11.5 \textrm{ m}$, $v_c = 0.300 \textrm{ m/s}$
d) $x_d = 10.4 \textrm{ m}$, $v_d = -4.60 \textrm{ m/s}$
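For reference, these follow from the constant-acceleration relations $y = v_0 t - \frac{1}{2} g t^2$ and $v = v_0 - g t$ with $g = 9.80 \textrm{ m/s}^2$ (a worked check for part (a), added here): $y_a = (15.0)(0.500) - \frac{1}{2}(9.80)(0.500)^2 = 6.28 \textrm{ m}$ and $v_a = 15.0 - (9.80)(0.500) = 10.1 \textrm{ m/s}$.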
Solution Video | 2019-02-21 03:14:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8498968482017517, "perplexity": 597.437068278982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247499009.48/warc/CC-MAIN-20190221031117-20190221053117-00194.warc.gz"} |
https://www.intel.com/content/www/us/en/docs/programmable/683325/18-1/syntax-and-comments.html | ## Intel® Quartus® Prime Standard Edition User Guide: Scripting
ID 683325
Date 9/24/2018
Public
Arguments to Tcl commands are separated by white space, and Tcl commands are terminated by a newline character or a semicolon. You must use backslashes when a Tcl command extends more than one line. The backslash (\) must be the last character in the line to designate line extension. If the backslash is followed by any other character including a space, that character is treated as a literal.
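For example, a minimal illustration of line continuation (the variable name and value here are arbitrary):
set msg \
"a long value"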
Tcl uses the hash or pound character (#) to begin comments. The # character begins a comment only where a command is expected, so a comment must start at the beginning of a command. If you prefer to include comments on the same line as a command, be sure to terminate the command with a semicolon before the # character. The following example is a valid line of code that includes a set command and a comment.
set a 1;# Initializes a
Without the semicolon, the command is invalid because the set command does not terminate until the new line after the comment.
The Tcl interpreter counts curly braces inside comments, which can lead to errors that are difficult to track down. The following example causes an error because of unbalanced curly braces.
# if { $x > 0 } {
if { $y > 0 } {
# code here
} | 2022-08-16 13:51:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5145271420478821, "perplexity": 2179.0990809948153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572304.13/warc/CC-MAIN-20220816120802-20220816150802-00775.warc.gz"} |
https://www.plob.org/tag/%E6%8B%89%E9%A9%AC%E5%85%8B | ## Evolutionary theory and its development
1. Lamarck and the theory of evolution: "Do we not therefore perceive that by the action of the laws of organization . . . nature ... | 2020-02-29 12:24:33 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8641698956489563, "perplexity": 2156.253947922543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875149238.97/warc/CC-MAIN-20200229114448-20200229144448-00484.warc.gz"}
https://math.stackexchange.com/questions/3827839/spherical-coordinates-of-a-regular-dodecahedron | # Spherical coordinates of a regular dodecahedron
It's well-known/documented that you can construct the vertices of a regular dodecahedron from the following Cartesian coordinates (with appropriate rotation/scaling):
$$(±1, ±1, ±1)\\ (0, ±\varphi, ±\frac{1}{\varphi})\\ (±\frac{1}{\varphi}, 0, ±\varphi)\\ (±\varphi, ±\frac{1}{\varphi}, 0)$$
It's also well-known that you can construct the regular icosahedron in spherical coordinates with one point at each of the latitudes $$±\pi/2$$, and 5 points at each of the latitudes $$±\arctan(1/2)$$, rotationally symmetric except with the top and bottom hemispheres offset by $$\pi/5$$ radians.
Since the icosahedron and dodecahedron are duals of each other, this approach clearly should extend to the dodecahedron. For two different latitudes $$±x$$ and $$±y$$, each will have 5 vertices evenly spaced around them, for a total of 20 vertices. The upper hemisphere will be offset in longitude $$\pi/5$$ from the lower hemisphere. The question is, what are $$x$$ and $$y$$?
The orientation of the dodecahedron in Cartesian coordinates you cite is of course not the same as the orientation that would be produced by taking the dual of the icosahedron in spherical coordinates.
The desired latitude angles are, $$\pm \tan^{-1} \frac{3-\sqrt{5}}{4}, \quad \pm \tan^{-1} \frac{3+\sqrt{5}}{4}.$$ Equivalently, these are $$\pm \cot^{-1} (3 + \sqrt{5}), \quad \pm \cot^{-1} (3 - \sqrt{5}).$$ The computation is rather tedious to derive from the spherical coordinates of the icosahedron, but the way to do it is to observe that the centroid of an icosahedral face may be computed as the average of its three vertices in Cartesian coordinates, and after converting this to spherical coordinates, ignore the radius and longitudinal angle.
So instead of doing such a calculation, it may be better to use the Cartesian coordinates for the dodecahedron that you cited. Specifically, we want to calculate the angle at the origin between the centroid of a dodecahedral face to any of its vertices. If we call this angle $$\alpha$$, then $$\pi/2 - \alpha$$ will be the more extreme latitude of the two pairs of five vertices. Then if we calculate the angle at the origin between any two adjacent vertices, and call this $$\beta$$, then $$\pi/2 - (\alpha + \beta)$$ will be the less extreme latitude of the two pairs. Since for your given coordinates the dodecahedral circumradius is $$R = \sqrt{3}$$, in my opinion these calculations are much more tractable than the first approach.
• Great! I did in fact take the "tedious" approach of calculating the centroid of an icosahedral face from the spherical coordinates, and got arctan((3+sqrt(5))/4), but then I tried to double-check it by computing the dot-product of the centroid of a dodecahedron face with one of its vertices, and got something totally different. I must have made a mistake somewhere, but it made me doubt my approach. Sep 16 '20 at 0:49
The dodecahedron's vertices are partitioned into 4 groups of 5, each of which forms a regular pentagon. We have two pentagonal faces, whose edges are the edges of the dodecahedron, and two "diagonal pentagons" whose edges are the edges of an inscribed cube.
If we take Euclid's construction, the inscribed cube has edge length 2, the pentagonal faces have side length $$\sqrt5-1$$, and the circumradius is $$\sqrt3$$.
Then, the circumradii of the two pentagons are $$\frac{\sqrt{5}-1}{2\sin(\pi/5)}$$ and $$\frac{2}{2\sin(\pi/5)}$$, corresponding to latitudes: $$x = \arccos(\frac{\sqrt{5}-1}{2\sqrt{3}\sin(\pi/5)})$$ $$y = \arccos(\frac{2}{2\sqrt{3}\sin(\pi/5)})$$
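As a consistency check (an added sketch, using the identity $$\sin^2(\pi/5) = \frac{5-\sqrt{5}}{8}$$), the first of these latitudes matches the arctangent form given above: $$\cos^2 x = \frac{(\sqrt{5}-1)^2}{12\,\sin^2(\pi/5)} = \frac{6-2\sqrt{5}}{\tfrac{3}{2}(5-\sqrt{5})} = \frac{10-2\sqrt{5}}{15}, \qquad \tan^2 x = \frac{1}{\cos^2 x} - 1 = \frac{5+2\sqrt{5}}{10-2\sqrt{5}} = \frac{7+3\sqrt{5}}{8} = \left(\frac{3+\sqrt{5}}{4}\right)^2,$$ so $$x = \arctan\frac{3+\sqrt{5}}{4}$$, in agreement with the other answer.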
• Validated that these do line up with the arctan-based formulas, although simplifying them down is a pain... Sep 16 '20 at 1:04
• If it helps, these are essentially the same 4 planes as for the dual icosahedron (rotated a bit). See the gif on wikipedia: en.wikipedia.org/wiki/… Sep 16 '20 at 1:17
• The existence of the four planes is relatively obvious, it's their exact location that is tricky. :) Sep 16 '20 at 1:30
• Right, what I meant was that another approach is just to take the latitudes you have for the icosahedron and rotate them all to get the latitudes for the dodecahedron. Sep 16 '20 at 1:57
• Ah, I get what you're saying now. You're not rotating the latitudes themselves (because the latitudes define planes or circles, depending on which object you're talking about), but rather rotating a set of model points chosen from those latitudes, which is basically equivalent to saying "take the vertices of the icosahedron, line them up along a line of longitude, and rotate by a certain amount." This works, again, because of the dual relationship between the dodecahedron and the icosahedron, in particular that the face-vertex central angle is the same. Sep 16 '20 at 2:46 | 2021-10-18 16:50:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 22, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8910412192344666, "perplexity": 284.80605078682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585204.68/warc/CC-MAIN-20211018155442-20211018185442-00353.warc.gz"} |
https://www.danielmathews.info/2010/10/?cat=67 | ## Talks at Columbia University, Oct 2010
On 1 October, 2010 I gave two talks at Columbia University. | 2020-07-13 11:27:38 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8663678765296936, "perplexity": 7827.513058922164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657143365.88/warc/CC-MAIN-20200713100145-20200713130145-00091.warc.gz"} |
https://codegolf.stackexchange.com/questions/123251/is-it-a-cyclic-number | # Is it a Cyclic number?
A cyclic number is an n-digit number whose products with 1, 2, 3, ..., n contain the same digits as the number itself, but in a different order.
For example, the number 142,857 is a cyclic number since 142,857 x 2 = 285,714, 142,857 x 3 = 428,571, 142,857 x 4 = 571,428, and so on. Given an integer input, determine if it is a cyclic number by outputting a truthy value if it is, and a falsy value if not.
Also, to be clear, the input can contain leading 0's: e.g. 0344827586206896551724137931
This is because, if leading zeros are not permitted on numerals, then 142857 is the only cyclic number in decimal.
Since it is code-golf, shortest answer in bytes wins!
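For readers who want an ungolfed reference implementation (added for clarity, not a competing answer; the function name is mine):
def is_cyclic(s):
    # s is the digit string, possibly with leading zeros
    n = len(s)
    digits = sorted(s)
    return all(sorted(str(int(s) * k).zfill(n)) == digits
               for k in range(1, n + 1))

print(is_cyclic("142857"))            # True
print(is_cyclic("0588235294117647"))  # True
print(is_cyclic("123456"))            # False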
• Hi and welcome to PPCG. This is not a bad question, but if you take a look at some of the recently posted questions I think that you will see that it could be better. Specifically, it would be very beneficial for the community if you provided more test cases to work with. When posting future challenges, please consider using the sandbox. May 29, 2017 at 0:06
# Actually, 18 bytes
;;ru@≈*♂$♂S♂≈╔@S≈=
Try it online! (expects quoted input)
Explanation:
;;ru@≈*♂$♂S♂≈╔@S≈=
;; duplicate input twice
ru range(1, len(input)+1)
@≈ convert input to an integer
* multiply input by each element in range
♂$♂S♂≈ convert each product to a string, sort the digits, and convert back to int
╔ uniquify: remove duplicate elements
@S≈ sort input and convert to int
= compare equality
• @tfbninja I posted this before the bit about leading zeroes. I have another 15-byte solution that will work with leading zeroes that I will edit in soon. – user45941 May 29, 2017 at 0:44
• What character encoding do you use to achieve 18 bytes? I tried UTF-8 and it weighed in at 32 bytes. EDIT: Oh, I see, it's code page 437. May 29, 2017 at 2:30
# 05AB1E, ~~9~~ 6 bytes
Thanks to Emigna for saving 3 bytes!
ā*€{ïË
Explanation:
ā # Push range(1, len(input) + 1)
* # Multiply by the input
€{ # Sort each element
ï # Convert to int to remove leading zeros
Ë # Check if all elements are equal
Uses the 05AB1E encoding. Try it online!
• What is the reason for ¦‚˜? May 29, 2017 at 9:34
• @kalsowerus If the input has a leading zero, multiplying by 1 would make it disappear, which makes it not work for 0588235294117647. May 29, 2017 at 9:42
• @tfbninja Oh okay, is adding leading zeroes after the multiplication also something to take into account? These are the individual sorted results I get after multiplying, with some missing leading zeroes, which would probably indicate the problem here. May 29, 2017 at 13:20
• Consider the number 0212765957446808510638297872340425531914893617 as mentioned in the comments of another answer. Looking at the sorted numbers I would assume it to return false, but when removing zeroes it becomes true. May 29, 2017 at 14:16
• @tfbninja Is the output for Emigna's test case truthy or falsy? May 29, 2017 at 14:25
# Python, 86 bytes
lambda n:all(sorted(n)==sorted(str(int(n)*i).zfill(len(n)))for i in range(2,len(n)+1))
Try it online!
Input numbers as strings.
• @tfbninja should work on any python (2 and 3) May 29, 2017 at 0:39
• why does it fail with 0212765957446808510638297872340425531914893617 ? May 29, 2017 at 1:50
• @Jenny_mathy now it doesn't. May 29, 2017 at 14:58
# PHP, 64 Bytes
for(;$i++<9;)$r+=($c=count_chars)($argn)==$c($argn*$i);echo$r>1;
Online Version
# Haskell, ~~36~~ ~~33~~ ~~32~~ 45 bytes
c n=let l=length n in(10^l-1)`div`read n==l+1
Example usage:
*Main> c "142857"
True
I don't think this algorithm needs any explanation. TOL
Thanks for suggestions: Generic Display Name, Laikoni. Thanks for correction: Antony Hatchkins.
EDIT Nope, fails on "33".
# dc, ~~24~~ 25 bytes
[1]sa0?dZd10r^1-r1+/rx=ap
Prints "0" if the number is not cyclic, otherwise "1". Requires the number to be entered as a string.
Example usage:
$ echo "[052631578947368421]" | dc -e '[1]sa0?dZd10r^1-r1+/rx=ap'
1
$ echo "[052631578947368422]" | dc -e '[1]sa0?dZd10r^1-r1+/rx=ap'
0
TOL
Explanation: Same algorithm as my Haskell submission.
EDIT Nope, fails on "33".
## Mathematica, 81 bytes
Length@Union@PadLeft[Sort/@IntegerDigits[ToExpression@#*Range@StringLength@#]]<2&
Try it online!
input string
Input
"010309278350515463917525773195876288659793814432989690721649484536082474226804123711340206185567"
Output
True
• FromDigits is shorter than ToExpression May 29, 2017 at 2:37
• because in this challenge you need to work with inputs like 034324... May 29, 2017 at 2:39 | 2022-05-23 12:04:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2199471890926361, "perplexity": 2057.9850507838046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558015.52/warc/CC-MAIN-20220523101705-20220523131705-00449.warc.gz"} |
https://stats.stackexchange.com/questions/9324/check-of-r-command-and-output-of-unbalanced-repeated-measures-anova | # Check of R command and output of unbalanced repeated measures ANOVA
I'd love a check if anyone is willing!
I am trying to see if there is a statistical difference in female size between sites. Over the years, females were repeatedly sampled within sites, and sampling was opportunistic, meaning that females were sampled a different number of times both between and within sites.
My formula is:
> lmerfit1<-lmer(size ~ (1|FEMALE), data=Data)
> lmerfit2<-lmer(size ~ SITE+(1|FEMALE), data=Data)
> anova(lmerfit1, lmerfit2)
Data: Data
Models:
lmerfit1: size ~ (1 | FEMALE)
lmerfit2: size ~ SITE + (1 | FEMALE)
Df AIC BIC logLik Chisq Chi Df Pr(>Chisq)
lmerfit1 3 2167.8 2179.6 -1080.9
lmerfit2 4 2169.8 2185.5 -1080.9 0 1 1
A p value of 1 leaves me concerned. The other female traits I ran thru this same formula made sense.
thanks!
• Just a quick check: FEMALE is supposed to be factor with several levels (actually considered as your random effect), but I would have expected something like lmer(size ~ (1|id), data=Data, subset=gender=="female"), where id is subject number, gender codes for the sex (with two levels, male/female, the analysis being restricted here on females only). Could you explain a little bit more the structure of your data, or just put the results of head(Data)? – chl Apr 7 '11 at 20:58
• Here is the head(data) – MEL Apr 8 '11 at 3:22
• I am only concerned with females because I am doing a nesting study. head(data): FEMALE YEAR SITE FSCLmm FWTg FAGE NOEGGS NumH NumNOTHatch TEMP ONETRI TWOTRI THREETRI HUMIDITY PercHatch perchatch AVHWt 1 WDW1039 2010 SPSWA 155 537 7 4 0 4 18.83456 0.01372995 NaN NaN NaN 0.00000 0.0000000 NaN – MEL Apr 8 '11 at 3:29
Firstly, you need to set REML=F:
lmerfit1<-lmer(size ~ (1|FEMALE), data=Data, REML=F)
lmerfit2<-lmer(size ~ SITE+(1|FEMALE), data=Data, REML=F)
anova(lmerfit1, lmerfit2)
This will use MLE instead of REML, which is necessary because likelihoods from mixed models with different fixed effects are not comparable when REML is used.
Secondly, you could do the following quick checks:
summary(lmerfit2) # To see the size of the SITE coefficient
summary(lm(size ~ SITE, data=Data)) # To check the fixed effects estimates
plot(size ~ SITE, data=Data) # Box plot
dotplot(size ~ SITE, data=Data) # Another visual check (dotplot comes from the lattice package)
But given the non-significance of SITE in your reported test, and the lack of visual difference you reported in your comment, I'm guessing there is no significant main effect of SITE.
How many sites did you have? The models only differ by one df, so either you only have two sites or you treated site as a continuous variable when it should have been categorical. If it should have been a factor, use factor(SITE) instead of SITE.
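A minimal sketch of that change (assuming SITE is currently numeric; the names follow the question):
Data$SITE <- factor(Data$SITE)
lmerfit2 <- lmer(size ~ SITE + (1|FEMALE), data=Data, REML=F)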
Also, try plotting the data (always a good idea!) -- do you see any visual differences?
• It is true I have two sites. I have done the plots it's pretty messy but no real visual differences that I can tell. – MEL Apr 8 '11 at 3:26 | 2019-08-19 06:30:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24892745912075043, "perplexity": 5077.874414162878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314667.60/warc/CC-MAIN-20190819052133-20190819074133-00265.warc.gz"} |
https://chrisphan.com/2017/02/22/adventures-in-tikz-tkz-graph/ | # Adventures in TikZ: tkz-graph
The other day, I was writing some lecture notes for my linear algebra class, and wanted to create the following diagram (to illustrate the concept of a Markov chain):
I had a very limited time in which to finish these notes. Fortunately, I found the tkz-graph package, which made this a snap:
\documentclass{standalone}
\usepackage{tikz}
\usepackage{fouriernc}
\usepackage{tkz-graph}
\begin{document}
\begin{tikzpicture}
\SetGraphUnit{5}
\Vertex[x=0, y=10]{0 points};
\Vertex[x=0, y=5]{1 point};
\Vertex[x=0, y=0]{Win};
\Vertex[x=5, y=5]{Lose};
\Edge[style ={->}, label={$1/3$}]({0 points})({1 point});
\Edge[style ={->}, label={$1/3$}]({1 point})({Win});
\Edge[style ={->}, label={$1/6$}]({0 points})({Lose});
\Edge[style ={->}, label={$1/6$}]({1 point})({Lose});
\Loop[style ={->}, label={$1/2$}, labelstyle={fill=white}]({0 points});
\Loop[style ={->}, label={$1/2$}, labelstyle={fill=white}]({1 point});
\Loop[style ={->}, label={$1$}, dir=EA, labelstyle={fill=white}]({Lose});
\Loop[style ={->}, label={$1$}, labelstyle={fill=white}]({Win});
\end{tikzpicture}
\end{document}
You don’t even have to specify the locations of the vertices; you can throw caution to the wind and have LaTeX decide where to place them! (I am a bit too much of a perfectionist for that.)
One slight issue I had was that the documentation for this package (at least on my computer, as retrieved by texdoc) was in French. Fortunately, I seem to have retained enough knowledge since I took the French language exam as a grad student that I could read most of the documentation. | 2021-05-15 17:33:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8017733097076416, "perplexity": 4099.4371782918815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990551.51/warc/CC-MAIN-20210515161657-20210515191657-00601.warc.gz"} |
https://physicsoverflow.org/user/Kai+Li/history | # Recent history for Kai Li
6 months ago: received upvote on question In quantum mechanics(QM), can we define a high-dimensional "spin" angular momentum other than the ordinary 3D one?
6 months ago: question answered In quantum mechanics(QM), can we define a high-dimensional "spin" angular momentum other than the ordinary 3D one?
4 years ago: question answered How to determine the orientation of the massive Dirac Hamiltonian?
5 years ago: received upvote on question A question on the Chern number and the winding number?
5 years ago: question is edited A question on the Chern number and the...
5 years ago: received upvote on question A question on the Chern number and the winding number?
5 years ago: received upvote on question A question on the Chern number and the winding number?
5 years ago: received upvote on question A question on the Chern number and the winding number?
5 years ago: question answered A question on the Chern number and the winding number?
5 years ago: posted a question A question on the Chern number and the...
5 years ago: received upvote on question How to understand the entanglement in a lattice fermion system?
5 years ago: received upvote on comment A simple conjecture on the Chern number of a 2-level Hamiltonian $H(\mathbf{k})$?
5 years ago: received upvote on answer A simple conjecture on the Chern number of a 2-level Hamiltonian $H(\mathbf{k})$?
5 years ago: received upvote on answer A simple conjecture on the Chern number of a 2-level Hamiltonian $H(\mathbf{k})$?
5 years ago: received upvote on comment A simple conjecture on the Chern number of a 2-level Hamiltonian $H(\mathbf{k})$?
5 years ago: edited a comment A simple conjecture on the Chern number of a 2-level Hamiltonian $H(\mathbf{k})$?
5 years ago: edited a comment A simple conjecture on the Chern number of a 2-level Hamiltonian $H(\mathbf{k})$?
5 years ago: edited a comment A simple conjecture on the Chern number of a 2-level Hamiltonian $H(\mathbf{k})$?
5 years ago: edited a comment A simple conjecture on the Chern number of a 2-level Hamiltonian $H(\mathbf{k})$?
5 years ago:
edited a comment A simple conjecture on the Chern number of a 2-level Hamiltonian $H(\mathbf{k})$? | 2021-09-20 20:38:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9120624661445618, "perplexity": 1244.8455873706057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057091.31/warc/CC-MAIN-20210920191528-20210920221528-00286.warc.gz"} |
http://www.acmerblog.com/hdu-2919-adding-sevens-4611.html | 2014
02-23
A seven segment display, similar to the one shown on the right, is composed of seven light-emitting elements. Individually on or off, they can be combined to produce 127 different combinations, including the ten Arabic numerals. The figure below illustrates how the ten numerals are displayed.
7-seg displays (as they’re often abbreviated) are widely used in digital clocks, electronic meters, and calculators.
A 7-seg has seven connectors, one for each element, (plus few more connectors for other electrical purposes.) Each element can be turned on by sending an electric current through its pin. Each of the seven pins is viewed by programmers as a single bit in a 7-bit number, as they are more comfortable dealing with bits rather than electrical signals. The figure below shows the bit assignment for a typical 7-seg, bit 0 being the right-most bit.
For example, in order to display the digit 1, the programmer knows that only bits 1 and 3 need to be on, i.e. the 7-bit binary number to display digit 1 is “0001010", or 10 in decimal. Let’s call the decimal number for displaying a digit, its display code, or just code for short. Since a 7-seg displays 127 different configurations, display codes are normally written using 3 decimal places with leading zeros if necessary, i.e. the display code for digit 1 is written as 010.
In a 9-digit calculator, 9 7-seg displays are stacked next to each other, and are all controlled by a single controller. The controller is sent a sequence of 3n digits, representing n display codes, where 0 < n < 10 . If n < 9 , the number is right justified and leading zeros are automatically displayed. For example, the display code for 13 is 010079 while for 144 it is 010106106
Write a program that reads the display codes of two numbers, and prints the display code of their sum.
Your program will be tested on one or more test cases. Each test case is specified on a single line in the form of A +B = where both A and B are display codes for decimal numbers a and b respectively where 0 < a , b < a + b < 1, 000, 000, 000 . The last line of the input file is the word “BYE” (without the double quotes.)
010079010+010079=
106010+010=
BYE
010079010+010079=010106106
106010+010=106093
#include<iostream>
#include<cstdio>
#include<cstdlib>
#include<cstring>
#include<string>
#include<queue>
#include<algorithm>
#include<map>
#include<iomanip>
#define INF 99999999
using namespace std;
const int MAX=60+10;  // a line holds up to 9+9 display codes plus '+' and '=' (56 characters)
int hash1[200],hash2[10];  // hash1: display code -> digit, hash2: digit -> display code
char a[MAX],b[MAX];
void Map(){
hash2[0]=63,hash1[63]=0;
hash2[1]=10,hash1[10]=1;
hash2[2]=93,hash1[93]=2;
hash2[3]=79,hash1[79]=3;
hash2[4]=106,hash1[106]=4;
hash2[5]=103,hash1[103]=5;
hash2[6]=119,hash1[119]=6;
hash2[7]=11,hash1[11]=7;
hash2[8]=127,hash1[127]=8;
hash2[9]=107,hash1[107]=9;
}
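// Example: digit 1 lights bits 1 and 3 only, i.e. binary 0001010 = decimal 10,
// so hash2[1]==10 and hash1[10]==1; codes print as three digits, e.g. "010".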
int main(){
Map();
int lena,lenb,sum=0,i,j,p,temp=1;
while(~scanf("%s",a),strcmp(a,"BYE")){
i=j=sum=0,temp=1;
while(a[i++] != '+');
lena=i-1;
while(a[i] != '=')b[j++]=a[i++];
lenb=j;
for(i=lena-3,j=lenb-3;i>=0 && j>=0;i-=3,j-=3){
p=(a[i]-'0')*100+(a[i+1]-'0')*10+a[i+2]-'0';
sum+=hash1[p]*temp;
p=(b[j]-'0')*100+(b[j+1]-'0')*10+b[j+2]-'0';
sum+=hash1[p]*temp;
temp*=10;
}
while(i>=0){
p=(a[i]-'0')*100+(a[i+1]-'0')*10+a[i+2]-'0';
sum+=hash1[p]*temp;
temp*=10;
i-=3;
}
while(j>=0){
p=(b[j]-'0')*100+(b[j+1]-'0')*10+b[j+2]-'0';
sum+=hash1[p]*temp;
temp*=10;
j-=3;
}
printf("%s",a);
while(temp>sum)temp/=10;
while(temp){
printf("%03d",hash2[sum/temp]);
sum=sum%temp;
temp=temp/10;
}
cout<<endl;
}
return 0;
}
1. Question 2: TCP does not support multicast; multicast and broadcast apply only to UDP, so option B is incorrect. | 2017-08-19 20:27:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41124647855758667, "perplexity": 811.0061321363581}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105922.73/warc/CC-MAIN-20170819201404-20170819221404-00124.warc.gz"}
https://projecteuclid.org/euclid.ijde/1485313253 | International Journal of Differential Equations
On Carlson's Type Removability Test for the Degenerate Quasilinear Elliptic Equations
Abstract
Carlson's type theorem on removable sets for $\alpha$-Hölder continuous solutions is investigated for the quasilinear elliptic equations $\text{div}\,A\left(x,u,\nabla u\right)=0$, having degeneration $\omega$ in the Muckenhoupt class. In particular, when $\alpha$ is sufficiently small and the operator is the weighted $p$-Laplacian, we show that the compact set $E$ is removable if and only if the Hausdorff measure ${\Lambda }_{\omega }^{-p+(p-1)\alpha }(E)=0$.
Article information
Source
Int. J. Differ. Equ., Volume 2011 (2011), Article ID 198606, 23 pages.
Dates
Accepted: 13 August 2011
First available in Project Euclid: 25 January 2017
https://projecteuclid.org/euclid.ijde/1485313253
Digital Object Identifier
doi:10.1155/2011/198606
Mathematical Reviews number (MathSciNet)
MR2847602
Zentralblatt MATH identifier
1237.35075
Citation
Mamedov, Farman I.; Quliyev, Aslan D.; Mirheydarli, Mirfaig M. On Carlson's Type Removability Test for the Degenerate Quasilinear Elliptic Equations. Int. J. Differ. Equ. 2011 (2011), Article ID 198606, 23 pages. doi:10.1155/2011/198606. https://projecteuclid.org/euclid.ijde/1485313253
References
• A. Kufner, Weighted Sobolev Spaces, A Wiley-Interscience Publication, John Wiley & Sons, New York, NY, USA, 1985.
• T. Kilpeläinen, “Weighted Sobolev spaces and capacity,” Annales Academiae Scientiarum Fennicae. Series A I. Mathematica, vol. 19, no. 1, pp. 95–113, 1994.
• L. Carleson, Selected Problems on Exceptional Sets, vol. 13 of Van Nostrand Mathematical Studies, D. Van Nostrand, Princeton, NJ, USA, 1967.
• J. Mateu and J. Orobitg, “Lipschitz approximation by harmonic functions and some applications to spectral synthesis,” Indiana University Mathematics Journal, vol. 39, no. 3, pp. 703–736, 1990.
• D. C. Ullrich, “Removable sets for harmonic functions,” The Michigan Mathematical Journal, vol. 38, no. 3, pp. 467–473, 1991.
• R. Harvey and J. Polking, “Removable singularities of solutions of linear partial differential equations,” Acta Mathematica, vol. 125, pp. 39–56, 1970.
• A. V. Pokrovskiĭ, “Removable singularities of solutions of second-order elliptic equations in divergent form,” Rossiĭskaya Akademiya Nauk, vol. 77, no. 3, pp. 424–433, 2005.
• N. N. Tarkanov, Laurent Series for Solutions of Elliptic Systems, Nauka, Novosibirsk, Russia, 1991.
• A. D. Kuliev and F. I. Mamedov, “On the nonlinear weight analogue of the Landis-Gerver's type mean value theorem and its applications to quasi-linear equations,” Proceedings of Institute of Mathematics and Mechanics. Academy of Sciences of Azerbaijan, vol. 12, pp. 74–81, 2000.
• T. Kilpeläinen and X. Zhong, “Removable sets for continuous solutions of quasilinear elliptic equations,” Proceedings of the American Mathematical Society, vol. 130, no. 6, pp. 1681–1688, 2002.
• T. Mäkäläinen, “Removable sets for Hölder continuous p-harmonic functions on metric measure spaces,” Annales Academiæ Scientiarium Fennicæ, vol. 33, no. 2, pp. 605–624, 2008.
• S. Campanato, “Proprietà di hölderianità di alcune classi di funzioni,” Annali della Scuola Normale Superiore di Pisa, vol. 17, pp. 175–188, 1963.
• M. Giaquinta, Multiple Integrals in the Calculus of Variations and Nonlinear Elliptic Systems, vol. 105 of Annals of Mathematics Studies, Princeton University Press, Princeton, NJ, USA, 1983.
• F. I. Mamedov and R. A. Amanov, “On some properties of solutions of quasilinear degenerate equations,” Ukrainian Mathematical Journal, vol. 60, no. 7, pp. 1073–1098, 2008.
• E. M. Landis, Second Order Equations of Elliptic and Parabolic Type, vol. 171 of Translations of Mathematical Monographs, American Mathematical Society, Providence, RI, USA, 1998.
• M. de Guzmán, Differentiation of Integrals in $R^{n}$, vol. 481 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 1975.
• T. Sjödin, “A note on capacity and Hausdorff measure in homogeneous spaces,” Potential Analysis, vol. 6, no. 1, pp. 87–97, 1997. | 2020-04-07 01:50:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 7, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39878755807876587, "perplexity": 1976.9609091999414}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371662966.69/warc/CC-MAIN-20200406231617-20200407022117-00186.warc.gz"} |
http://math.stackexchange.com/questions/165936/laplace-transform-s-sufficiently-large | # Laplace Transform $s$ sufficiently large [duplicate]
Possible Duplicate:
How does partial fraction decomposition avoid division by zero?
I've got a question about the domain restriction of $s$ when taking the Laplace Transform. I think my question can be best illustrated with an example.
Let's suppose we have $$\frac{d^2x}{dt^2} + \frac{dx}{dt} = e^{3t} + e^{7t}\\ \mathcal{L}\left\{\frac{d^2x}{dt^2} + \frac{dx}{dt}\right\} = \mathcal{L}\{e^{3t} + e^{7t}\}$$ Assuming the initial conditions are zero. $$s^2Y(s) + sY(s) = \frac{1}{s-3} + \frac{1}{s-7} \,\forall\, s > 7\\ Y(s)s(s + 1) = \frac{1}{s-3} + \frac{1}{s-7}\\ Y(s) = \frac{1}{s(s-3)(s+1)} + \frac{1}{s(s-7)(s+1)}\\ Y(s) = \frac{A}{s} + \frac{B}{s-3} + \frac{C}{s+1} + \frac{D}{s} + \frac{E}{s-7} + \frac{F}{s+1}$$ The reason why I wrote that $s > 7$ is because $s$ must be sufficiently large for the Laplace Transform to converge; in this case, $s > 7$ ensures that the integral of $e^{7t}e^{-st}$ does in fact converge as $t \to \infty$.
Now, let's say that I want to find $A$, $B$ and $C$.
So I'd say $$1 \equiv A(s-3)(s+1) + Bs(s+1) + Cs(s-3)$$ Now to find $A$, I'd have to let $s \to 0$; however, above I said $s > 7$ (otherwise the integral wouldn't converge and we wouldn't even be able to get to this stage).
How does this work? Is it even possible to find the partial fraction coefficients given that $s > 7$?
Thank you.
Edit:
This would make sense to me: $$\frac{d^2x}{dt^2} + \frac{dx}{dt} = e^{3t} + e^{7t}\\ \mathcal{L}\left\{\frac{d^2x}{dt^2} + \frac{dx}{dt}\right\} = \mathcal{L}\{e^{3t} + e^{7t}\}$$ Assuming the initial conditions are zero. $$s_1^2Y(s) + s_1Y(s) = \frac{1}{s_2-3} + \frac{1}{s_3-7}$$ Where $s_1 > 0$, $s_2 > 3$ and $s_3 > 7$.
$$Y(s)s_1(s_1 + 1) = \frac{1}{s_2-3} + \frac{1}{s_3-7}\\ Y(s) = \frac{1}{s_1(s_2-3)(s_1+1)} + \frac{1}{s_1(s_3-7)(s_1+1)}\\ Y(s) = \frac{A}{s_1} + \frac{B}{s_2-3} + \frac{C}{s_1+1} + \frac{D}{s_1} + \frac{E}{s_3-7} + \frac{F}{s_1+1}\\ \therefore 1 \equiv A(s_2-3)(s_1+1) + Bs_1(s_1+1) + Cs_1(s_2-3)$$ And now, to find $A$, I can let $s_1 \to 0^+$ because $s_1$ is defined for all numbers greater than zero.
This would make sense to me, but I'm not sure if it correct. It seems to me that when we do the Laplace Transform and let $s \to$ some undefined value, it is almost a bit of a fluke that it works out.
## marked as duplicate by Pedro Tamaroff, Jennifer Dylan, William, rschwieb, Arkamis Sep 22 '12 at 15:47
Related. In fact, I think the answers there are valid here, too. – Pedro Tamaroff Jul 3 '12 at 1:30
This sort of technicality gets brushed aside in lectures, but can lead to confusion.
The function $s \mapsto Y(s)$ is defined and analytic for $\mathcal{Re}(s) >7$, but is equal to the rational function given above which is analytic everywhere except at its poles. You can factor/expand the rational function any way you want (algebraically) and it will remain the same function, which will still equal $Y$ when $\mathcal{Re}(s) >7$.
There are three quantities involved above; First (with slight abuse of notation) is $Y(s)$, which is defined for $\mathcal{Re}(s) >7$, second is the rational function $q_1(s) = \frac{1}{s(s-3)(s+1)} + \frac{1}{s(s-7)(s+1)}$, and third is the rational function $q_2(s) = \frac{A}{s} + \frac{B}{s-3} + \frac{C}{s+1} + \frac{D}{s} + \frac{E}{s-7} + \frac{F}{s+1}$.
Direct computation of the Laplace Transforms shows that $Y(s) = q_1(s)$, whenever $\mathcal{Re}(s) >7$.
The functions $q_1, q_2$ are defined on $\Delta = \mathbb{C}\setminus \{-1, 0, 3, 7 \}$ (note that if $\mathcal{Re}(s) >7$, then $s \in \Delta$), and you can establish (using any argument you like, such as letting $s\to 0$) that $q_1(s) = q_2(s)$, whenever $s \in \Delta$. Note that this equality is true regardless of how you ended up with $q_1$, and that the identity is true regardless of other conditions such as $\mathcal{Re}(s) >7$.
It therefore follows that $Y(s) = q_2(s)$, whenever $\mathcal{Re}(s) >7$.
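To make the "any argument you like" step concrete: $1 \equiv A(s-3)(s+1) + Bs(s+1) + Cs(s-3)$ is an identity between polynomials, so it holds for every complex $s$, not merely for $\mathcal{Re}(s) > 7$; setting $s = 0$ gives $1 = -3A$, hence $A = -\frac{1}{3}$.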
I sort of understand what you're saying; however, it really doesn't make sense to me that $s > 7$, yet later on, I let $s \to 0$. I would've thought that the value as $s \to 0$ would just return some undefined behavior because the function is not valid there... – user968243 Jul 3 '12 at 3:34
I can't work out how to edit the above comment... Anyway, I've added to the question how I believe it works. It'd be great if you could shed some light on this. Thank you. – user968243 Jul 3 '12 at 3:48
@user968243 Consider visiting the link I provide in the comments. Bill gives a neat answer. – Pedro Tamaroff Jul 3 '12 at 3:48
I looked at the link that you posted. I thought that when you try to find the partial fraction coefficients you equate the numerators and take the limit as the numerator approaches the poles. If I understand, what copper.hat and Bill are basically saying is that we can let $s$ approach some value for which it is undefined and then we can solve for constants. Then we can use those constants in formulating an equivalent equation (despite those constants being obtained using undefined values of $s$). And basically, this has all been proven to work and is a property of polynomials and algebra. – user968243 Jul 3 '12 at 4:19
@user968243: I have added further elaboration, I hope it works for you. – copper.hat Jul 3 '12 at 6:12 | 2014-12-20 22:06:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9660626649856567, "perplexity": 236.6043375861685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770400.105/warc/CC-MAIN-20141217075250-00138-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://zbmath.org/?q=an%3A0858.34023 | ## Limit cycles of a cubic Kolmogorov system.(English)Zbl 0858.34023
The authors consider limit cycles which bifurcate from a critical point in case of cubic Kolmogorov systems of the form $\dot x=x(x-2y+2)(Ax+y+B),\quad \dot y=y(2x-y-2)(Dx+y+C).\tag{1}$ The authors use the computer algebra procedure FINDETA which was described by the first two authors [J. Symb. Comput. 9, No. 2, 215-224 (1990; Zbl 0702.68072)] to compute the focal values at the critical point $$(2,2)$$ and show that for certain cubic Kolmogorov systems (1), four and not more than four limit cycles can bifurcate from a critical point in the first quadrant; moreover, three fine foci of order one can coexist and a limit cycle can bifurcate from each.
### MSC:
34C05 Topological structure of integral curves, singular points, limit cycles of ordinary differential equations
92D25 Population dynamics (general)
34C23 Bifurcation theory for ordinary differential equations
Zbl 0702.68072
### References:
[1] Hirsch, M. W., Systems of differential equations that are competitive or cooperative. V: Convergence in 3-dimensional systems, J. Differential Equations, 80, 94-106 (1989) · Zbl 0712.34045
[2] Ye, Y.; Ye, W., Cubic Kolmogorov differential systems with two limit cycles surrounding the same focus, Ann. Differential Equations, 1, 2, 201-207 (1985) · Zbl 0597.34020
[3] Coleman, C. S., Hilbert's 16th problem: How many cycles?, (Lucas, W., Differential Equations Models, Volume 1 (1978), Springer-Verlag), 279-297
[4] Lloyd, N. G.; Pearson, J. M., REDUCE and the bifurcation of limit cycles, J. Symbolic Comput., 9, 215-224 (1990) · Zbl 0702.68072
[5] Lloyd, N. G.; Pearson, J. M., Algorithmic derivation of centre conditions (1994), University of Wales: University of Wales Aberystwyth, Preprint · Zbl 0876.34033
[6] Pearson, J. M., Hilbert's sixteenth problem: An approach using Computer Algebra, (Ph.D. Dissertation (1992), University of Wales: University of Wales Aberystwyth)
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | 2023-02-06 13:52:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7669423818588257, "perplexity": 2376.835659955132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500339.37/warc/CC-MAIN-20230206113934-20230206143934-00785.warc.gz"} |
https://mathematica.stackexchange.com/questions/210711/symbolically-invert-block-matrix?noredirect=1 | # Symbolically invert block matrix [duplicate]
In Mathematica, if I write something like:
Inverse[{{a, b}, {c, d}}]
I get the inverse:
{{d/(-b c + a d), -(b/(-b c + a d))}, {-(c/(-b c + a d)),
a/(-b c + a d)}}
This assumes that a,b,c,d are scalars. How can I tell Mathematica to treat a,b,c,d as matrices instead? So I would like to symbolically obtain a formula for the inverse of a block matrix. Moreover, I also have additional information, such as that b == c^T and that a,d are symmetric. Is there a way to let Mathematica exploit this sort of information to obtain a simpler expression?
This 2x2 example is just to illustrate what I want. My motivation is that I have a somewhat more complicated matrix that has some zero blocks, and I would like to know symbolically if a simple inverse can be obtained.
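For reference, the symbolic result such a computation should reproduce is the standard block inverse via the Schur complement (background added here, assuming the block a is invertible; this is not Mathematica output): $$\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \begin{pmatrix} a^{-1} + a^{-1} b\, S^{-1} c\, a^{-1} & -a^{-1} b\, S^{-1} \\ -S^{-1} c\, a^{-1} & S^{-1} \end{pmatrix}, \qquad S = d - c\, a^{-1} b.$$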
• – kglr Dec 4 '19 at 10:50 | 2020-12-03 14:15:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.61268550157547, "perplexity": 856.3309550160069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141727782.88/warc/CC-MAIN-20201203124807-20201203154807-00591.warc.gz"} |
http://ocw.usu.edu/Electrical_and_Computer_Engineering/Information_Theory/lecture7_1.htm | # Arithmetic Coding
## Introduction
Arithmetic coding overcomes some of the problems of Huffman coding, in particular the potential 1-bit surplus problem. It operates as a human might, using information already observed to predict what might be coming, and coding based on the prediction. In addition, the technique explicitly separates the prediction portion from the encoding portion. In AC, a bit sequence is interpreted as an interval on the real line from 0 to 1. For example, 01 is interpreted as 0.01...., which corresponds (not knowing what the following digits are) to the interval [0.01, 0.10) (in binary), which is [0.25, 0.5) (base ten). (Note that the intervals are closed on the left and open on the right.) A longer string 01101 corresponds to the interval [0.01101, 0.01110). The longer the string, the shorter the interval represented on the real line. Assume we are dealing with an alphabet $\{a_1, a_2, \ldots, a_I\}$, where $a_I$ is a special symbol meaning "end of transmission." The source produces the sequence $x_1, x_2, \ldots, x_N$, not necessarily i.i.d. We further assume (or model) that there is a predictor which computes, or estimates,
$$P(x_n = a_i \mid x_1, \ldots, x_{n-1}),$$
which is available at both encoder and decoder. We divide the segment [0,1) into $I$ intervals whose lengths are equal to the probabilities $P(x_1 = a_i)$. The first interval is
$[0,\, P(x_1 = a_1))$
The second interval is
$[P(x_1 = a_1),\, P(x_1 = a_1) + P(x_1 = a_2)),$
and so forth. More generally, to provide for the possibility of considering symbols other than just $x_1$, we define the lower and upper cumulative probabilities

$Q_n(a_i) = \sum_{j=1}^{i-1} P(x_n = a_j \mid x_1, \ldots, x_{n-1}), \qquad R_n(a_i) = \sum_{j=1}^{i} P(x_n = a_j \mid x_1, \ldots, x_{n-1}).$
Then, for example, $a_2$ corresponds to the interval $[Q_1(a_2), R_1(a_2))$. Now we represent the probabilities for the next symbol. Take, for example, the interval for $a_1$, and subdivide it into intervals for $a_1a_1, a_1a_2, \ldots, a_1a_I$, so that the length of the interval for $a_1 a_j$ is proportional to $P(a_j \mid a_1)$. In fact, we take the length of the subinterval for $a_1 a_j$ to be
$P(x_1 = a_1, x_2 = a_j) = P(x_1 = a_1)\, P(x_2 = a_j \mid x_1 = a_1).$
Then we note that the sum of the lengths of these subintervals will be

$\sum_{j=1}^{I} P(x_1 = a_1)\, P(x_2 = a_j \mid x_1 = a_1) = P(x_1 = a_1),$

which sure enough is the correct length. More generally, we subdivide each of the intervals similarly, taking the subinterval for $a_i a_j$ to have length

$P(x_1 = a_i, x_2 = a_j) = P(x_1 = a_i)\, P(x_2 = a_j \mid x_1 = a_i).$
Then, we continue subdividing each subinterval for strings of length $N$. The following algorithm (MacKay, p. 151) shows how to compute the interval $[u, v)$ for the string $x_1 x_2 \cdots x_N$. (Note: this is for demonstration purposes, since it requires infinite-precision arithmetic. In practice, the algorithm is arranged so that infinite precision is not required.)
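The algorithm itself did not survive the page extraction; below is a minimal Python sketch of the same interval computation, simplified to an i.i.d. model (a dict `p` of symbol probabilities, given as exact Fractions to stand in for the infinite-precision arithmetic mentioned above). The function name and model are my own illustration, not MacKay's figure.

```python
from fractions import Fraction

def interval(string, p):
    """Return the interval [u, v) for `string` by repeated subdivision of [0, 1)."""
    symbols = list(p)                  # fixed symbol order a_1, ..., a_I
    u, v = Fraction(0), Fraction(1)
    for x in string:
        width = v - u
        q = sum((p[s] for s in symbols[:symbols.index(x)]), Fraction(0))  # Q(x)
        r = q + p[x]                                                      # R(x)
        u, v = u + width * q, u + width * r
    return u, v

# Example: with P(a) = P(b) = 1/2, the string "ab" maps to [1/4, 1/2).
print(interval("ab", {"a": Fraction(1, 2), "b": Fraction(1, 2)}))
```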
In encoding, the interval is subdivided for each new symbol. To encode the string $x_1 \cdots x_N$, we send a binary string whose interval lies within the interval determined by the sequence.
One of the benefits of arithmetic coding is that the worst-case redundancy for an entire bit string (which may, for example, consist of an entire file) is at most two bits, assuming the probabilistic model is correct. Given a probabilistic model $P$, the ideal message length for a sequence $\mathbf{x}$ is $\log_2\bigl(1/P(\mathbf{x})\bigr)$. Suppose that the interval for $\mathbf{x}$ sits just barely between two binary intervals. Then the largest binary interval contained in it may be smaller by a factor of 4. This factor of 4 corresponds to $\log_2 4 = 2$ bits of overhead in the worst case.
https://www.feynmanlectures.caltech.edu/I_06.html
## 6 Probability
(There was no summary for this lecture.)
“The true logic of this world is in the calculus of probabilities.”
—James Clerk Maxwell
### 6–1 Chance and likelihood
“Chance” is a word which is in common use in everyday living. The radio reports speaking of tomorrow’s weather may say: “There is a sixty percent chance of rain.” You might say: “There is a small chance that I shall live to be one hundred years old.” Scientists also use the word chance. A seismologist may be interested in the question: “What is the chance that there will be an earthquake of a certain size in Southern California next year?” A physicist might ask the question: “What is the chance that a particular geiger counter will register twenty counts in the next ten seconds?” A politician or statesman might be interested in the question: “What is the chance that there will be a nuclear war within the next ten years?” You may be interested in the chance that you will learn something from this chapter.
By chance, we mean something like a guess. Why do we make guesses? We make guesses when we wish to make a judgment but have incomplete information or uncertain knowledge. We want to make a guess as to what things are, or what things are likely to happen. Often we wish to make a guess because we have to make a decision. For example: Shall I take my raincoat with me tomorrow? For what earth movement should I design a new building? Shall I build myself a fallout shelter? Shall I change my stand in international negotiations? Shall I go to class today?
Sometimes we make guesses because we wish, with our limited knowledge, to say as much as we can about some situation. Really, any generalization is in the nature of a guess. Any physical theory is a kind of guesswork. There are good guesses and there are bad guesses. The theory of probability is a system for making better guesses. The language of probability allows us to speak quantitatively about some situation which may be highly variable, but which does have some consistent average behavior.
Let us consider the flipping of a coin. If the toss—and the coin—are “honest,” we have no way of knowing what to expect for the outcome of any particular toss. Yet we would feel that in a large number of tosses there should be about equal numbers of heads and tails. We say: “The probability that a toss will land heads is $0.5$.”
We speak of probability only for observations that we contemplate being made in the future. By the “probability” of a particular outcome of an observation we mean our estimate for the most likely fraction of a number of repeated observations that will yield that particular outcome. If we imagine repeating an observation—such as looking at a freshly tossed coin—$N$ times, and if we call $N_A$ our estimate of the most likely number of our observations that will give some specified result $A$, say the result “heads,” then by $P(A)$, the probability of observing $A$, we mean $$\label{Eq:I:6:1} P(A)=N_A/N.$$
Our definition requires several comments. First of all, we may speak of a probability of something happening only if the occurrence is a possible outcome of some repeatable observation. It is not clear that it would make any sense to ask: “What is the probability that there is a ghost in that house?”
You may object that no situation is exactly repeatable. That is right. Every different observation must at least be at a different time or place. All we can say is that the “repeated” observations should, for our intended purposes, appear to be equivalent. We should assume, at least, that each observation was made from an equivalently prepared situation, and especially with the same degree of ignorance at the start. (If we sneak a look at an opponent’s hand in a card game, our estimate of our chances of winning are different than if we do not!)
We should emphasize that $N$ and $N_A$ in Eq. (6.1) are not intended to represent numbers based on actual observations. $N_A$ is our best estimate of what would occur in $N$ imagined observations. Probability depends, therefore, on our knowledge and on our ability to make estimates. In effect, on our common sense! Fortunately, there is a certain amount of agreement in the common sense of many things, so that different people will make the same estimate. Probabilities need not, however, be “absolute” numbers. Since they depend on our ignorance, they may become different if our knowledge changes.
You may have noticed another rather “subjective” aspect of our definition of probability. We have referred to $N_A$ as “our estimate of the most likely number …” We do not mean that we expect to observe exactly $N_A$, but that we expect a number near $N_A$, and that the number $N_A$ is more likely than any other number in the vicinity. If we toss a coin, say, $30$ times, we should expect that the number of heads would not be very likely to be exactly $15$, but rather only some number near to $15$, say $12$, $13$, $14$, $15$, $16$, or $17$. However, if we must choose, we would decide that $15$ heads is more likely than any other number. We would write $P(\text{heads})=0.5$.
Why did we choose $15$ as more likely than any other number? We must have argued with ourselves in the following manner: If the most likely number of heads is $N_H$ in a total number of tosses $N$, then the most likely number of tails $N_T$ is $(N-N_H)$. (We are assuming that every toss gives either heads or tails, and no “other” result!) But if the coin is “honest,” there is no preference for heads or tails. Until we have some reason to think the coin (or toss) is dishonest, we must give equal likelihoods for heads and tails. So we must set $N_T=N_H$. It follows that $N_T=$ $N_H=$ $N/2$, or $P(H)=$ $P(T)=$ $0.5$.
We can generalize our reasoning to any situation in which there are $m$ different but “equivalent” (that is, equally likely) possible results of an observation. If an observation can yield $m$ different results, and we have reason to believe that any one of them is as likely as any other, then the probability of a particular outcome $A$ is $P(A)=1/m$.
If there are seven different-colored balls in an opaque box and we pick one out “at random” (that is, without looking), the probability of getting a ball of a particular color is $\tfrac{1}{7}$. The probability that a “blind draw” from a shuffled deck of $52$ cards will show the ten of hearts is $\tfrac{1}{52}$. The probability of throwing a double-one with dice is $\tfrac{1}{36}$.
In Chapter 5 we described the size of a nucleus in terms of its apparent area, or “cross section.” When we did so we were really talking about probabilities. When we shoot a high-energy particle at a thin slab of material, there is some chance that it will pass right through and some chance that it will hit a nucleus. (Since the nucleus is so small that we cannot see it, we cannot aim right at a nucleus. We must “shoot blind.”) If there are $n$ atoms in our slab and the nucleus of each atom has a cross-sectional area $\sigma$, then the total area “shadowed” by the nuclei is $n\sigma$. In a large number $N$ of random shots, we expect that the number of hits $N_C$ of some nucleus will be in the ratio to $N$ as the shadowed area is to the total area of the slab: $$\label{Eq:I:6:2} N_C/N=n\sigma/A.$$ We may say, therefore, that the probability that any one projectile particle will suffer a collision in passing through the slab is $$\label{Eq:I:6:3} P_C=\frac{n}{A}\,\sigma,$$ where $n/A$ is the number of atoms per unit area in our slab.
### 6–2 Fluctuations
We would like now to use our ideas about probability to consider in some greater detail the question: “How many heads do I really expect to get if I toss a coin $N$ times?” Before answering the question, however, let us look at what does happen in such an “experiment.” Figure 6–1 shows the results obtained in the first three “runs” of such an experiment in which $N=30$. The sequences of “heads” and “tails” are shown just as they were obtained. The first game gave $11$ heads; the second also $11$; the third $16$. In three trials we did not once get $15$ heads. Should we begin to suspect the coin? Or were we wrong in thinking that the most likely number of “heads” in such a game is $15$? Ninety-seven more runs were made to obtain a total of $100$ experiments of $30$ tosses each. The results of the experiments are given in Table 6–1.¹
Table 6–1. Number of heads in each of $100$ successive trials of $30$ tosses of a coin.

$11$ $16$ $17$ $15$ $17$ $16$ $19$ $18$ $15$ $13$
$11$ $17$ $17$ $12$ $20$ $23$ $11$ $16$ $17$ $14$
$16$ $12$ $15$ $10$ $18$ $17$ $13$ $15$ $14$ $15$
$16$ $12$ $11$ $22$ $12$ $20$ $12$ $15$ $16$ $12$
$16$ $10$ $15$ $13$ $14$ $16$ $15$ $16$ $13$ $18$
$14$ $14$ $13$ $16$ $15$ $19$ $21$ $14$ $12$ $15$
$16$ $11$ $16$ $14$ $17$ $14$ $11$ $16$ $17$ $16$
$19$ $15$ $14$ $12$ $18$ $15$ $14$ $21$ $11$ $16$
$17$ $17$ $12$ $13$ $14$ $17$ $\phantom{1}9$ $13$ $19$ $13$
$14$ $12$ $15$ $17$ $14$ $10$ $17$ $17$ $12$ $11$
Looking at the numbers in Table 6–1, we see that most of the results are “near” $15$, in that they are between $12$ and $18$. We can get a better feeling for the details of these results if we plot a graph of the distribution of the results. We count the number of games in which a score of $k$ was obtained, and plot this number for each $k$. Such a graph is shown in Fig. 6–2. A score of $15$ heads was obtained in $13$ games. A score of $14$ heads was also obtained $13$ times. Scores of $16$ and $17$ were each obtained more than $13$ times. Are we to conclude that there is some bias toward heads? Was our “best estimate” not good enough? Should we conclude now that the “most likely” score for a run of $30$ tosses is really $16$ heads? But wait! In all the games taken together, there were $3000$ tosses. And the total number of heads obtained was $1493$. The fraction of tosses that gave heads is $0.498$, very nearly, but slightly less than half. We should certainly not assume that the probability of throwing heads is greater than $0.5$! The fact that one particular set of observations gave $16$ heads most often, is a fluctuation. We still expect that the most likely number of heads is $15$.
We may ask the question: “What is the probability that a game of $30$ tosses will yield $15$ heads—or $16$, or any other number?” We have said that in a game of one toss, the probability of obtaining one head is $0.5$, and the probability of obtaining no head is $0.5$. In a game of two tosses there are four possible outcomes: $HH$, $HT$, $TH$, $TT$. Since each of these sequences is equally likely, we conclude that (a) the probability of a score of two heads is $\tfrac{1}{4}$, (b) the probability of a score of one head is $\tfrac{2}{4}$, (c) the probability of a zero score is $\tfrac{1}{4}$. There are two ways of obtaining one head, but only one of obtaining either zero or two heads.
Consider now a game of $3$ tosses. The third toss is equally likely to be heads or tails. There is only one way to obtain $3$ heads: we must have obtained $2$ heads on the first two tosses, and then heads on the last. There are, however, three ways of obtaining $2$ heads. We could throw tails after having thrown two heads (one way) or we could throw heads after throwing only one head in the first two tosses (two ways). So for scores of $3$-$H$, $2$-$H$, $1$-$H$, $0$-$H$ we have that the number of equally likely ways is $1$, $3$, $3$, $1$, with a total of $8$ different possible sequences. The probabilities are $\tfrac{1}{8}$, $\tfrac{3}{8}$, $\tfrac{3}{8}$, $\tfrac{1}{8}$.
Fig. 6–3.A diagram for showing the number of ways a score of 0, 1, 2, or 3 heads can be obtained in a game of 3 tosses.
Fig. 6–4.A diagram like that of Fig. 6–3, for a game of 6 tosses.
The argument we have been making can be summarized by a diagram like that in Fig. 6–3. It is clear how the diagram should be continued for games with a larger number of tosses. Figure 6–4 shows such a diagram for a game of $6$ tosses. The number of “ways” to any point on the diagram is just the number of different “paths” (sequences of heads and tails) which can be taken from the starting point. The vertical position gives us the total number of heads thrown. The set of numbers which appears in such a diagram is known as Pascal’s triangle. The numbers are also known as the binomial coefficients, because they also appear in the expansion of $(a+b)^n$. If we call $n$ the number of tosses and $k$ the number of heads thrown, then the numbers in the diagram are usually designated by the symbol $\tbinom{n}{k}$. We may remark in passing that the binomial coefficients can also be computed from $$\label{Eq:I:6:4} \binom{n}{k}=\frac{n!}{k!(n-k)!},$$ where $n!$, called “$n$-factorial,” represents the product $(n)(n-1)(n-2)\dotsm(3)(2)(1)$.
We are now ready to compute the probability $P(k,n)$ of throwing $k$ heads in $n$ tosses, using our definition Eq. (6.1). The total number of possible sequences is $2^n$ (since there are $2$ outcomes for each toss), and the number of ways of obtaining $k$ heads is $\tbinom{n}{k}$, all equally likely, so we have $$\label{Eq:I:6:5} P(k,n)=\frac{\tbinom{n}{k}}{2^n}.$$
Since $P(k,n)$ is the fraction of games which we expect to yield $k$ heads, then in $100$ games we should expect to find $k$ heads $100\cdot P(k,n)$ times. The dashed curve in Fig. 6–2 passes through the points computed from $100\cdot P(k,30)$. We see that we expect to obtain a score of $15$ heads in $14$ or $15$ games, whereas this score was observed in $13$ games. We expect a score of $16$ in $13$ or $14$ games, but we obtained that score in $15$ games. Such fluctuations are “part of the game.”
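The comparison is easy to reproduce numerically. A small Python check (not part of the original text) computes the predicted counts $100\cdot P(k,30)$ from Eq. (6.5) and simulates $100$ fresh games for comparison:

```python
import math
import random

n, games = 30, 100
predicted = {k: 100 * math.comb(n, k) / 2**n for k in range(n + 1)}
observed = [sum(random.random() < 0.5 for _ in range(n)) for _ in range(games)]

for k in range(10, 21):  # the scores that actually occur often
    print(f"k={k}: predicted {predicted[k]:4.1f}, observed {observed.count(k)}")
```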
The method we have just used can be applied to the most general situation in which there are only two possible outcomes of a single observation. Let us designate the two outcomes by $W$ (for “win”) and $L$ (for “lose”). In the general case, the probability of $W$ or $L$ in a single event need not be equal. Let $p$ be the probability of obtaining the result $W$. Then $q$, the probability of $L$, is necessarily $(1-p)$. In a set of $n$ trials, the probability $P(k,n)$ that $W$ will be obtained $k$ times is $$\label{Eq:I:6:6} P(k,n)=\tbinom{n}{k}p^kq^{n-k}.$$ This probability function is called the Bernoulli or, also, the binomial probability.
### 6–3 The random walk
There is another interesting problem in which the idea of probability is required. It is the problem of the “random walk.” In its simplest version, we imagine a “game” in which a “player” starts at the point $x=0$ and at each “move” is required to take a step either forward (toward $+x$) or backward (toward $-x$). The choice is to be made randomly, determined, for example, by the toss of a coin. How shall we describe the resulting motion? In its general form the problem is related to the motion of atoms (or other particles) in a gas—called Brownian motion—and also to the combination of errors in measurements. You will see that the random-walk problem is closely related to the coin-tossing problem we have already discussed.
First, let us look at a few examples of a random walk. We may characterize the walker’s progress by the net distance $D_N$ traveled in $N$ steps. We show in the graph of Fig. 6–5 three examples of the path of a random walker. (We have used for the random sequence of choices the results of the coin tosses shown in Fig. 6–1.)
What can we say about such a motion? We might first ask: “How far does he get on the average?” We must expect that his average progress will be zero, since he is equally likely to go either forward or backward. But we have the feeling that as $N$ increases, he is more likely to have strayed farther from the starting point. We might, therefore, ask what is his average distance travelled in absolute value, that is, what is the average of $\abs{D}$. It is, however, more convenient to deal with another measure of “progress,” the square of the distance: $D^2$ is positive for either positive or negative motion, and is therefore a reasonable measure of such random wandering.
We can show that the expected value of $D_N^2$ is just $N$, the number of steps taken. By “expected value” we mean the probable value (our best guess), which we can think of as the expected average behavior in many repeated sequences. We represent such an expected value by $\expval{D_N^2}$, and may refer to it also as the “mean square distance.” After one step, $D^2$ is always $+1$, so we have certainly $\expval{D_1^2}=1$. (All distances will be measured in terms of a unit of one step. We shall not continue to write the units of distance.)
The expected value of $D_N^2$ for $N>1$ can be obtained from $D_{N-1}$. If, after $(N-1)$ steps, we have $D_{N-1}$, then after $N$ steps we have $D_N=D_{N-1}+1$ or $D_N=D_{N-1}-1$. For the squares, $$\label{Eq:I:6:7} D_N^2= \begin{cases} D_{N-1}^2+2D_{N-1}+1,\\[2ex] \kern{3.7em}\textit{or}\\[2ex] D_{N-1}^2-2D_{N-1}+1. \end{cases}$$ In a number of independent sequences, we expect to obtain each value one-half of the time, so our average expectation is just the average of the two possible values. The expected value of $D_N^2$ is then $D_{N-1}^2+1$. In general, we should expect for $D_{N-1}^2$ its “expected value” $\expval{D_{N-1}^2}$ (by definition!). So $$\label{Eq:I:6:8} \expval{D_N^2}=\expval{D_{N-1}^2}+1.$$
We have already shown that $\expval{D_1^2}=1$; it follows then that $$\label{Eq:I:6:9} \expval{D_N^2}=N,$$ a particularly simple result!
If we wish a number like a distance, rather than a distance squared, to represent the “progress made away from the origin” in a random walk, we can use the “root-mean-square distance” $D_{\text{rms}}$: $$\label{Eq:I:6:10} D_{\text{rms}}=\sqrt{\expval{D^2}}=\sqrt{N}.$$
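A quick simulation (again, not in the original) confirms that $\expval{D_N^2}=N$, and hence $D_{\text{rms}}=\sqrt{N}$, for unit steps:

```python
import random

def mean_square_distance(N, walks=100_000):
    """Average D^2 over many random walks of N unit steps."""
    total = 0
    for _ in range(walks):
        D = sum(random.choice((-1, 1)) for _ in range(N))
        total += D * D
    return total / walks

print(mean_square_distance(30))  # close to 30, so D_rms ≈ √30 ≈ 5.5
```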
We have pointed out that the random walk is closely similar in its mathematics to the coin-tossing game we considered at the beginning of the chapter. If we imagine the direction of each step to be in correspondence with the appearance of heads or tails in a coin toss, then $D$ is just $N_H-N_T$, the difference in the number of heads and tails. Since $N_H+N_T=N$, the total number of steps (and tosses), we have $D=2N_H-N$. We have derived earlier an expression for the expected distribution of $N_H$ (also called $k$) and obtained the result of Eq. (6.5). Since $N$ is just a constant, we have the corresponding distribution for $D$. (Since for every head more than $N/2$ there is a tail “missing,” we have the factor of $2$ between $N_H$ and $D$.) The graph of Fig. 6–2 represents the distribution of distances we might get in $30$ random steps (where $k=15$ is to be read $D=0$; $k=16$, $D=2$; etc.).
The variation of $N_H$ from its expected value $N/2$ is $$\label{Eq:I:6:11} N_H-\frac{N}{2}=\frac{D}{2}.$$ The rms deviation is $$\label{Eq:I:6:12} \biggl(N_H-\frac{N}{2}\biggr)_{\text{rms}}=\tfrac{1}{2}\sqrt{N}.$$
According to our result for $D_{\text{rms}}$, we expect that the “typical” distance in $30$ steps ought to be $\sqrt{30} \approx 5.5$, or a typical $k$ should be about $5.5/2 = 2.75$ units from $15$. We see that the “width” of the curve in Fig. 6–2, measured from the center, is just about $3$ units, in agreement with this result.
We are now in a position to consider a question we have avoided until now. How shall we tell whether a coin is “honest” or “loaded”? We can give now at least a partial answer. For an honest coin, we expect the fraction of the times heads appears to be $0.5$, that is, $$\label{Eq:I:6:13} \frac{\expval{N_H}}{N}=0.5.$$ We also expect an actual $N_H$ to deviate from $N/2$ by about $\sqrt{N}/2$, or the fraction to deviate by \begin{equation*} \frac{1}{N}\,\frac{\sqrt{N}}{2}=\frac{1}{2\sqrt{N}}. \end{equation*} The larger $N$ is, the closer we expect the fraction $N_H/N$ to be to one-half.
In Fig. 6–6 we have plotted the fraction $N_H/N$ for the coin tosses reported earlier in this chapter. We see the tendency for the fraction of heads to approach $0.5$ for large $N$. Unfortunately, for any given run or combination of runs there is no guarantee that the observed deviation will be even near the expected deviation. There is always the finite chance that a large fluctuation—a long string of heads or tails—will give an arbitrarily large deviation. All we can say is that if the deviation is near the expected $1/2\sqrt{N}$ (say within a factor of $2$ or $3$), we have no reason to suspect the honesty of the coin. If it is much larger, we may be suspicious, but cannot prove, that the coin is loaded (or that the tosser is clever!).
We have also not considered how we should treat the case of a “coin” or some similar “chancy” object (say a stone that always lands in either of two positions) that we have good reason to believe should have a different probability for heads and tails. We have defined $P(H)=\expval{N_H}/N$. How shall we know what to expect for $N_H$? In some cases, the best we can do is to observe the number of heads obtained in large numbers of tosses. For want of anything better, we must set $\expval{N_H}=N_H(\text{observed})$. (How could we expect anything else?) We must understand, however, that in such a case a different experiment, or a different observer, might conclude that $P(H)$ was different. We would expect, however, that the various answers should agree within the deviation $1/2\sqrt{N}$ [if $P(H)$ is near one-half]. An experimental physicist usually says that an “experimentally determined” probability has an “error,” and writes $$\label{Eq:I:6:14} P(H)=\frac{N_H}{N}\pm\frac{1}{2\sqrt{N}}.$$ There is an implication in such an expression that there is a “true” or “correct” probability which could be computed if we knew enough, and that the observation may be in “error” due to a fluctuation. There is, however, no way to make such thinking logically consistent. It is probably better to realize that the probability concept is in a sense subjective, that it is always based on uncertain knowledge, and that its quantitative evaluation is subject to change as we obtain more information.
### 6–4 A probability distribution
Let us return now to the random walk and consider a modification of it. Suppose that in addition to a random choice of the direction ($+$ or $-$) of each step, the length of each step also varied in some unpredictable way, the only condition being that on the average the step length was one unit. This case is more representative of something like the thermal motion of a molecule in a gas. If we call the length of a step $S$, then $S$ may have any value at all, but most often will be “near” $1$. To be specific, we shall let $\expval{S^2}=1$ or, equivalently, $S_{\text{rms}}=1$. Our derivation for $\expval{D^2}$ would proceed as before except that Eq. (6.8) would be changed now to read $$\label{Eq:I:6:15} \expval{D_N^2}=\expval{D_{N-1}^2}+\expval{S^2}=\expval{D_{N-1}^2}+1.$$ We have, as before, that $$\label{Eq:I:6:16} \expval{D_N^2}=N.$$
What would we expect now for the distribution of distances $D$? What is, for example, the probability that $D=0$ after $30$ steps? The answer is zero! The probability is zero that $D$ will be any particular value, since there is no chance at all that the sum of the backward steps (of varying lengths) would exactly equal the sum of forward steps. We cannot plot a graph like that of Fig. 6–2.
We can, however, obtain a representation similar to that of Fig. 6–2, if we ask, not what is the probability of obtaining $D$ exactly equal to $0$, $1$, or $2$, but instead what is the probability of obtaining $D$ near $0$, $1$, or $2$. Let us define $P(x,\Delta x)$ as the probability that $D$ will lie in the interval $\Delta x$ located at $x$ (say from $x$ to $x+\Delta x$). We expect that for small $\Delta x$ the chance of $D$ landing in the interval is proportional to $\Delta x$, the width of the interval. So we can write $$\label{Eq:I:6:17} P(x,\Delta x)=p(x)\,\Delta x.$$ The function $p(x)$ is called the probability density.
The form of $p(x)$ will depend on $N$, the number of steps taken, and also on the distribution of individual step lengths. We cannot demonstrate the proofs here, but for large $N$, $p(x)$ is the same for all reasonable distributions in individual step lengths, and depends only on $N$. We plot $p(x)$ for three values of $N$ in Fig. 6–7. You will notice that the “half-widths” (typical spread from $x=0$) of these curves is $\sqrt{N}$, as we have shown it should be.
You may notice also that the value of $p(x)$ near zero is inversely proportional to $\sqrt{N}$. This comes about because the curves are all of a similar shape and their areas under the curves must all be equal. Since $p(x)\,\Delta x$ is the probability of finding $D$ in $\Delta x$ when $\Delta x$ is small, we can determine the chance of finding $D$ somewhere inside an arbitrary interval from $x_1$ to $x_2$, by cutting the interval in a number of small increments $\Delta x$ and evaluating the sum of the terms $p(x)\,\Delta x$ for each increment. The probability that $D$ lands somewhere between $x_1$ and $x_2$, which we may write $P(x_1 < D < x_2)$, is equal to the shaded area in Fig. 6–8. The smaller we take the increments $\Delta x$, the more correct is our result. We can write, therefore, $$\label{Eq:I:6:18} P(x_1 < D < x_2)=\sum p(x)\,\Delta x=\int_{x_1}^{x_2}p(x)\,dx.$$
The area under the whole curve is the probability that $D$ lands somewhere (that is, has some value between $x=-\infty$ and $x=+\infty$). That probability is surely $1$. We must have that $$\label{Eq:I:6:19} \int_{-\infty}^{+\infty}p(x)\,dx=1.$$ Since the curves in Fig. 6–7 get wider in proportion to $\sqrt{N}$, their heights must be proportional to $1/\sqrt{N}$ to maintain the total area equal to $1$.
The probability density function we have been describing is one that is encountered most commonly. It is known as the normal or Gaussian probability density. It has the mathematical form $$\label{Eq:I:6:20} p(x)=\frac{1}{\sigma\sqrt{2\pi}}\,e^{-x^2/2\sigma^2},$$ where $\sigma$ is called the standard deviation and is given, in our case, by $\sigma=\sqrt{N}$ or, if the rms step size is different from $1$, by $\sigma=\sqrt{N}S_{\text{rms}}$.
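A sketch (not in the text) comparing a histogram of simulated $D_{30}$ values against the Gaussian of Eq. (6.20) with $\sigma=\sqrt{30}$; since $D$ takes only even values when $N$ is even, each value carries the probability mass of a width-2 bin:

```python
import math
import random

N, walks = 30, 200_000
sigma = math.sqrt(N)

def p(x):  # Eq. (6.20), the Gaussian probability density
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

counts = {}
for _ in range(walks):
    D = sum(random.choice((-1, 1)) for _ in range(N))
    counts[D] = counts.get(D, 0) + 1

for D in range(-8, 9, 2):  # D has the parity of N, so step by 2
    print(f"D={D:+d}: simulated {counts.get(D, 0) / walks / 2:.4f}, gaussian {p(D):.4f}")
```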
We remarked earlier that the motion of a molecule, or of any particle, in a gas is like a random walk. Suppose we open a bottle of an organic compound and let some of its vapor escape into the air. If there are air currents, so that the air is circulating, the currents will also carry the vapor with them. But even in perfectly still air, the vapor will gradually spread out—will diffuse—until it has penetrated throughout the room. We might detect it by its color or odor. The individual molecules of the organic vapor spread out in still air because of the molecular motions caused by collisions with other molecules. If we know the average “step” size, and the number of steps taken per second, we can find the probability that one, or several, molecules will be found at some distance from their starting point after any particular passage of time. As time passes, more steps are taken and the gas spreads out as in the successive curves of Fig. 6–7. In a later chapter, we shall find out how the step sizes and step frequencies are related to the temperature and pressure of a gas.
Earlier, we said that the pressure of a gas is due to the molecules bouncing against the walls of the container. When we come later to make a more quantitative description, we will wish to know how fast the molecules are going when they bounce, since the impact they make will depend on that speed. We cannot, however, speak of the speed of the molecules. It is necessary to use a probability description. A molecule may have any speed, but some speeds are more likely than others. We describe what is going on by saying that the probability that any particular molecule will have a speed between $v$ and $v+\Delta v$ is $p(v)\,\Delta v$, where $p(v)$, a probability density, is a given function of the speed $v$. We shall see later how Maxwell, using common sense and the ideas of probability, was able to find a mathematical expression for $p(v)$. The form² of the function $p(v)$ is shown in Fig. 6–9. Velocities may have any value, but are most likely to be near the most probable value $v_p$.
We often think of the curve of Fig. 6–9 in a somewhat different way. If we consider the molecules in a typical container (with a volume of, say, one liter), then there are a very large number $N$ of molecules present ($N\approx10^{22}$). Since $p(v)\,\Delta v$ is the probability that one molecule will have its velocity in $\Delta v$, by our definition of probability we mean that the expected number $\expval{\Delta N}$ to be found with a velocity in the interval $\Delta v$ is given by $$\label{Eq:I:6:21} \expval{\Delta N}=N\,p(v)\,\Delta v.$$ We call $N\,p(v)$ the “distribution in velocity.” The area under the curve between two velocities $v_1$ and $v_2$, for example the shaded area in Fig. 6–9, represents [for the curve $N\,p(v)$] the expected number of molecules with velocities between $v_1$ and $v_2$. Since with a gas we are usually dealing with large numbers of molecules, we expect the deviations from the expected numbers to be small (like $1/\sqrt{N}$), so we often neglect to say the “expected” number, and say instead: “The number of molecules with velocities between $v_1$ and $v_2$ is the area under the curve.” We should remember, however, that such statements are always about probable numbers.
### 6–5 The uncertainty principle
The ideas of probability are certainly useful in describing the behavior of the $10^{22}$ or so molecules in a sample of a gas, for it is clearly impractical even to attempt to write down the position or velocity of each molecule. When probability was first applied to such problems, it was considered to be a convenience—a way of dealing with very complex situations. We now believe that the ideas of probability are essential to a description of atomic happenings. According to quantum mechanics, the mathematical theory of particles, there is always some uncertainty in the specification of positions and velocities. We can, at best, say that there is a certain probability that any particle will have a position near some coordinate $x$.
We can give a probability density $p_1(x)$, such that $p_1(x)\,\Delta x$ is the probability that the particle will be found between $x$ and $x+\Delta x$. If the particle is reasonably well localized, say near $x_0$, the function $p_1(x)$ might be given by the graph of Fig. 6–10(a). Similarly, we must specify the velocity of the particle by means of a probability density $p_2(v)$, with $p_2(v)\,\Delta v$ the probability that the velocity will be found between $v$ and $v+\Delta v$.
Fig. 6–10.Probability densities for observation of the position and velocity of a particle.
It is one of the fundamental results of quantum mechanics that the two functions $p_1(x)$ and $p_2(v)$ cannot be chosen independently and, in particular, cannot both be made arbitrarily narrow. If we call the typical “width” of the $p_1(x)$ curve $[\Delta x]$, and that of the $p_2(v)$ curve $[\Delta v]$ (as shown in the figure), nature demands that the product of the two widths be at least as big as the number $\hbar/2m$, where $m$ is the mass of the particle. We may write this basic relationship as $$\label{Eq:I:6:22} [\Delta x]\cdot[\Delta v]\geq\hbar/2m.$$ This equation is a statement of the Heisenberg uncertainty principle that we mentioned earlier.
Since the right-hand side of Eq. (6.22) is a constant, this equation says that if we try to “pin down” a particle by forcing it to be at a particular place, it ends up by having a high speed. Or if we try to force it to go very slowly, or at a precise velocity, it “spreads out” so that we do not know very well just where it is. Particles behave in a funny way!
The uncertainty principle describes an inherent fuzziness that must exist in any attempt to describe nature. Our most precise description of nature must be in terms of probabilities. There are some people who do not like this way of describing nature. They feel somehow that if they could only tell what is really going on with a particle, they could know its speed and position simultaneously. In the early days of the development of quantum mechanics, Einstein was quite worried about this problem. He used to shake his head and say, “But, surely God does not throw dice in determining how electrons should go!” He worried about that problem for a long time and he probably never really reconciled himself to the fact that this is the best description of nature that one can give. There are still one or two physicists who are working on the problem who have an intuitive conviction that it is possible somehow to describe the world in a different way and that all of this uncertainty about the way things are can be removed. No one has yet been successful.
The necessary uncertainty in our specification of the position of a particle becomes most important when we wish to describe the structure of atoms. In the hydrogen atom, which has a nucleus of one proton with one electron outside of the nucleus, the uncertainty in the position of the electron is as large as the atom itself! We cannot, therefore, properly speak of the electron moving in some “orbit” around the proton. The most we can say is that there is a certain chance $p(r)\,\Delta V$, of observing the electron in an element of volume $\Delta V$ at the distance $r$ from the proton. The probability density $p(r)$ is given by quantum mechanics. For an undisturbed hydrogen atom $p(r)=Ae^{-2r/a}$. The number $a$ is the “typical” radius, where the function is decreasing rapidly. Since there is a small probability of finding the electron at distances from the nucleus much greater than $a$, we may think of $a$ as “the radius of the atom,” about $10^{-10}$ meter.
We can form an image of the hydrogen atom by imagining a “cloud” whose density is proportional to the probability density for observing the electron. A sample of such a cloud is shown in Fig. 6–11. Thus our best “picture” of a hydrogen atom is a nucleus surrounded by an “electron cloud” (although we really mean a “probability cloud”). The electron is there somewhere, but nature permits us to know only the chance of finding it at any particular place.
In its efforts to learn as much as possible about nature, modern physics has found that certain things can never be “known” with certainty. Much of our knowledge must always remain uncertain. The most we can know is in terms of probabilities.
1. After the first three games, the experiment was actually done by shaking $30$ pennies violently in a box and then counting the number of heads that showed.
2. Maxwell’s expression is $p(v)=Cv^2e^{-av^2}$, where $a$ is a constant related to the temperature and $C$ is chosen so that the total probability is one.
https://avishek.net/2022/12/27/pytorch-guide-plenoxels-nerf-part-6.html
# Plenoxels and Neural Radiance Fields using PyTorch: Part 6
Avishek Sen Gupta on 27 December 2022
This is part of a series of posts breaking down the paper Plenoxels: Radiance Fields without Neural Networks, and providing (hopefully) well-annotated source code to aid in understanding.
The final code has been moved to its own repository at plenoxels-pytorch.
We continue looking at Plenoxels: Radiance Fields without Neural Networks. In this sequel, we address some remaining aspects of the paper before concluding our study. We specifically consider the following:
Note: We leave this post open-ended, and will append to/modify it as we obtain further results.
• Voxel pruning: This will probably require us to modify our core data structure to be a little more efficient, because it will involve storing all the transmittances of the entire training set per epoch, and then zeroing out candidate voxels.
• Encouraging voxel sparsity: Adding more regularisation terms will encourage sparsity of voxels. In the paper, the Cauchy loss is incorporated to speed up computations.
• Coarse-to-fine resolution scaling: This will be needed to better resolve the fine structure of our training scenes. At this point, we are working with a very coarse resolution of $$40 \times 40 \times 40$$. We can get higher resolutions than this, but this will need more work and more computations.
The code for this article can be found here: Volumetric Rendering Code with TV Regularisation
### Demonstration on a real world dataset
For our field-testing we pick a model from the Amazon Berkeley Objects dataset. The ABO Dataset is made available under the Creative Commons Attribution-NonCommercial 4.0 International Public License (CC BY-NC 4.0), and is available here.
We have picked 72 views from a full $${360}^\circ$$ fly-around of the object and run them through our code. We demonstrate the learning using a single training run for the image. We will show more results as they become available.
### Notes on the Code
#### 1. Fixing a Transmittance calculation bug
We derived the transmittance and the consequent colour values as follows:
$$\boxed{ C = \sum_{i=1}^N T_i \left[1 - e^{-\sigma_i d_i}\right] c_i } \label{eq:accumulated-transmittance}$$ $$\boxed{ T_i = \text{exp}\left[-\sum_{j=1}^{i-1} \sigma_j d_j \right] } \label{eq:sample-transmittance}$$
Unfortunately, the implementation had a bug where $$T_i=-\sum_{j=1}^{i-1} \text{exp}(\sigma_j d_j)$$. This gave us increasing transmittance with distance. We fixed that in this implementation, so that transmittance is calculated correctly.
#### 2. Refactoring volumetric rendering to use matrices instead of loops
Instead of looping over samples in a particular ray to calculate transmittances and consequent colour values, we refactored the code to take a matrix-based approach.
The equation $$\eqref{eq:sample-transmittance}$$ can be written for all $$T_i$$ as:
$T = \text{exp} (-{(\Sigma \odot \Delta)}^T S)$
where $$\odot$$ denotes the Hadamard Product (element-wise product), and the other terms are as follows:
$T = \begin{bmatrix} T_1 & T_2 & \cdots & T_N \end{bmatrix}, \quad \Sigma = \begin{bmatrix} \sigma_1 \\ \sigma_2 \\ \vdots \\ \sigma_N \end{bmatrix}, \quad \Delta = \begin{bmatrix} \delta_1 \\ \delta_2 \\ \vdots \\ \delta_N \end{bmatrix}, \quad S = \begin{bmatrix} 0 & 1 & 1 & \cdots & 1 \\ 0 & 0 & 1 & \cdots & 1 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix}$
We call $$S$$ the summing matrix; making it strictly upper-triangular (zero diagonal) keeps the sum for $$T_i$$ to $$j \le i-1$$, consistent with $$\eqref{eq:sample-transmittance}$$, so that $$T_1 = 1$$. Similarly, the accumulated colour $$C$$ over all samples in a ray can be calculated as:
$C = T \cdot (I' - \text{exp}(-\Sigma \odot \Delta))$
where $$I'$$ is an $$n \times 1$$ matrix, like so:
$I'= \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}$
These give us the matrix forms for the volumetric rendering calculations.
$$\boxed{ C = T \cdot (I' - \text{exp}(-\Sigma \odot \Delta)) \\ T = \text{exp}(-{(\Sigma \odot \Delta)}^T S) } \label{eq:volumetric-formula-matrix}$$
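A PyTorch sketch of these matrix formulas (mine, not the repository's code): an exclusive cumulative sum reproduces the product with the summing matrix $$S$$ without materialising the $$N \times N$$ matrix.

```python
import torch

def render_ray(sigma, delta, colour):
    """sigma, delta: (N,) per-sample opacities/spacings; colour: (N, 3) RGB."""
    sd = sigma * delta                               # Σ ⊙ Δ
    T = torch.exp(-(torch.cumsum(sd, dim=0) - sd))   # T_i = exp(-Σ_{j<i} σ_j δ_j)
    weights = T * (1.0 - torch.exp(-sd))             # T_i (1 - e^{-σ_i δ_i})
    return (weights[:, None] * colour).sum(dim=0)    # accumulated colour C
```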
#### 3. Configuring empty voxels to be light or dark
Some sets of training images have their background as white, some as dark. We make this configurable by introducing the EMPTY object, which can be configured to be either black or white, depending upon the training and rendering situation.
#### 4. Adding the pruned attribute

We add the pruned attribute to voxels. This prevents a voxel parameter from being activated, and sets its opacity and spherical harmonic coefficients to zero.
#### 5. Allowing the world to scale up

A scale parameter is introduced in the world; this represents the scale of a voxel in relation to the world itself. A scale of 1 means that a voxel is the same as a unit cube in the world. The world can now scale; this is done through the scale_up() function. The voxel dimensions are doubled, the scale is reduced, and the original voxel is replicated to its newly-spawned 7 neighbours.
Note that though the grid size doubles, the size of the world stays the same; in effect, the voxels halve in each dimension.
### Cauchy Regularisation
The Cauchy Loss Function is introduced in the paper Robust subspace clustering by Cauchy loss function. The Cauchy Loss function in the Plenoxels paper is given as:
$\Delta_C = \lambda_C \sum_{i,k} \log_e \bigl(1 + 2\,\sigma(r_i(t_k))^2\bigr)$
Essentially, we accumulate a penalty on the opacity of every sample along every training ray. The rationale for using the Cauchy loss as a sparsity prior: the penalty $\log_e(1 + 2\sigma^2)$ grows only logarithmically, so it saturates for large opacities while still penalising every nonzero one; like other concave penalties, it behaves more like a count of occupied samples ($L_0$) than a quadratic penalty would, which pushes the optimiser to make most voxels exactly empty rather than slightly opaque everywhere.
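A sketch of the corresponding regulariser (the weight $$\lambda_C$$ here is an illustrative placeholder, not the paper's value):

```python
import torch

def cauchy_sparsity_loss(sample_opacities, lambda_c=1e-5):
    """sample_opacities: σ(r_i(t_k)) for every sample k on every training ray i."""
    return lambda_c * torch.log1p(2.0 * sample_opacities ** 2).sum()
```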
Reconstruction without Cauchy Regularisation
Reconstruction with Cauchy Regularisation
### Conclusion
This concludes our study of the Plenoxels paper. There is a lot more scope for fine-tuning, but on the whole this survey covers the important aspects. It is a lovely paper, and is a good example of what can be accomplished without resorting directly to Neural Networks.
tags: Machine Learning - PyTorch - Programming - Neural Radiance Fields - Machine Vision
https://cs.stackexchange.com/questions/50742/turing-machine-to-write-number

# Turing Machine to write number
How do you construct a single-tape Turing machine which writes the number 7 in the unary number system, leaving the tape with a delimiter symbol followed by seven 1s?

So the output would be a tape containing #1111111, with blank symbols afterward.
You could simply write a 1 and advance to the next state. Create seven such states, one for each 1 (plus a state that writes the delimiter first); a sketch of the resulting machine follows.

Consider input\output/direction to be the syntax for "if input is on the tape, write output and move in direction."
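A sketch in Python (state names and tape encoding are my own): the transition table δ maps (state, read symbol) to (write symbol, move, next state), exactly the input\output/direction convention above.

```python
# q0 writes the delimiter; q1..q7 each write one 1; then the machine halts.
delta = {("q0", "_"): ("#", "R", "q1")}
for i in range(1, 8):
    delta[(f"q{i}", "_")] = ("1", "R", f"q{i + 1}" if i < 7 else "halt")

tape, head, state = {}, 0, "q0"
while state != "halt":
    write, move, state = delta[(state, tape.get(head, "_"))]
    tape[head] = write
    head += 1 if move == "R" else -1

print("".join(tape[i] for i in sorted(tape)))  # prints #1111111
```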
https://cstheory.stackexchange.com/questions/25203/number-of-points-on-the-interior-of-the-convex-hull-of-a-random-subset

# Number of points in the interior of the convex hull of a random subset
This question is in regard to the following problem:
Suppose you are given a set $S$ of $n$ points in the plane. Let $R$ be a random subset of $S$ of size $r$, with all subsets of size $r$ equally likely. What is the expected number of points of $S$ that lie in the interior of the convex hull of $R$? I.e., what is $E[|S\cap \operatorname{int}(\mathcal{CH}(R))|]$?
I have been looking in Clarkson and Shor's Applications of Random Sampling in Computational Geometry, II, Mulmuley's chapter in the Handbook of Computational Geometry, and related papers. As far as I understand it, these methods all apply to bounding the number of points of $S$ outside the convex hull, and do so by finding the expected sums of the sizes of conflict lists. For instance, the conflict list of an edge in the problem above is the set of points of $S$ that are beyond it, and the sum of the sizes of all conflict lists is $O(n)$. But because the same point may itself appear in $O(n)$ conflict lists, this doesn't seem to say anything about the number of points in the interior.
Any help in understanding the problem, or useful references, is appreciated. Unfortunately, I have a sneaking suspicion that the answer is obvious, but find myself a bit stuck. Thanks.
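Not an answer to the exact question, but the expectation is cheap to estimate empirically. A minimal Monte Carlo sketch (all names mine), assuming points in general position and $r \ge 3$: under general position the only points of $S$ on the boundary of $\mathcal{CH}(R)$ are hull vertices of $R$, so the interior points are the in-hull points minus those vertices.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

def expected_interior(S, r, trials=2000, seed=0):
    """Monte Carlo estimate of E[|S ∩ int(CH(R))|] for a uniform r-subset R of S."""
    rng = np.random.default_rng(seed)
    n, total = len(S), 0
    for _ in range(trials):
        R = S[rng.choice(n, size=r, replace=False)]
        in_hull = np.count_nonzero(Delaunay(R).find_simplex(S) >= 0)
        total += in_hull - len(ConvexHull(R).vertices)  # drop boundary vertices
    return total / trials
```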
• Doesn't it depend on the shape of $S$? Can't it range anywhere from $0$ to $c_r|S|$ for some constant $c_r$ that approaches $1$ as $r$ gets large? Are you looking for an algorithm to compute this expected value that's better than the obvious Monte Carlo method? – Peter Shor Jul 10 '14 at 17:57
• (I should say I'm assuming general position.) I would have thought the same for the sum of the conflict lists, but your result applies regardless of the shape of $S$, so I thought something similar might apply here. If $S$ is in convex position, obviously the value is zero, so that is a sort of best case. The worst case seems to be something like a set $S$ whose onion-peel convex hulls are all triangles (repeatedly taking the convex hull and removing the points only removes three at a time). I guess it doesn't seem obvious to me that the expected size is $|S|-o(|S|)$ – John Jul 10 '14 at 18:04
• If r is fixed (I was assuming that $r$ grew with $|S|$ when I said $|S| - o(|S|)$), I think the worst-case is something like $(1-\frac{3}{r})|S|$ for the worst-case set you describe. – Peter Shor Jul 10 '14 at 18:08
• I'm somewhat interested in the expected running time of the algorithm mentioned in the introduction to your applications of... paper: randomly select a subset of size R, recursively compute convex hull and then fix the remaining O(n) conflicts – John Jul 10 '14 at 18:09
• I.e. How many recursive calls are made for a fixed r. Or maybe for something like r=n/2 – John Jul 10 '14 at 18:10
https://ch.gateoverflow.in/1307/gate-ch-2022-ga-question-2

A sphere of radius $r$ $\text{cm}$ is packed in a box of cubical shape.
What should be the minimum volume (in $\text{cm}^{3}$) of the box that can enclose the sphere?
1. $\frac{r^{3}}{8}$
2. $r^{3}$
3. $2r^{3}$
4. $8r^{3}$
Let the side of the cube be $a\;\text{cm}$. The sphere touches all six faces of the smallest enclosing cube, so the side must equal the sphere's diameter:

$\Rightarrow \boxed{a = 2r}$
$\therefore$ The minimum volume of the cubical box $= a^{3} = (2r)^{3} = 8r^{3} \;\text{cm}^{3}.$
Correct Answer $:\text{D}$
https://docs.pybinding.site/en/v0.9.5/plotting/structuremap.html

Structure-mapped data
As shown in the previous section, many classes in pybinding use structure plots in a similar way. One class stands out here: StructureMap can be used to map any arbitrary data onto the spatial structure of a model. StructureMap objects are produced in two cases: as the results of various computation functions (e.g. Solver.calc_spatial_ldos()) or returned from Model.structure_map() which can map custom user data.
Draw only certain hoppings
Just as before, we can draw only the desired hoppings. Note that smap is a StructureMap returned by Solver.calc_probability().
import numpy as np
import matplotlib.pyplot as plt
import pybinding as pb
from pybinding.repository import graphene
plt.figure(figsize=(7, 3))
plt.subplot(121, title="The model")
model = pb.Model(graphene.monolayer(nearest_neighbors=3), graphene.hexagon_ac(1))
model.plot(hopping={'draw_only': ['t']})
plt.subplot(122, title=r"$|\Psi|^2$")
solver = pb.solver.arpack(model, k=10)
smap = solver.calc_probability(n=2)
smap.plot(hopping={'draw_only': ['t']})
pb.pltutils.colorbar()
Slicing a structure
This follows a syntax similar to numpy fancy indexing where we can give a condition as the index.
plt.figure(figsize=(7, 3))
plt.subplot(121, title="Original")
smap.plot(hopping={'draw_only': ['t']})
plt.subplot(122, title="Sliced: y > 0")
upper = smap[smap.y > 0]
upper.plot(hopping={'draw_only': ['t']})
plt.figure(figsize=(7, 3))
plt.subplot(121, title="Original: A and B")
smap.plot(hopping={'draw_only': ['t', 't_nn']})
plt.subplot(122, title="Sliced: A only")
a_only = smap[smap.sublattices == 'A']
a_only.plot(hopping={'draw_only': ['t', 't_nn']})
Mapping custom data
The method Model.structure_map() returns a StructureMap where any user-defined data can be mapped to the spatial positions of the lattice sites. The data just needs to be a 1D array with the same size as the total number of sites in the system.
plt.figure(figsize=(6.8, 3))
plt.subplot(121, title="The model")
model = pb.Model(graphene.monolayer(), graphene.hexagon_ac(1))
model.plot()
plt.subplot(122, title="Custom color data: 2x * (y + 1)")
custom_data = 2 * model.system.x * (model.system.y + 1)
smap = model.structure_map(custom_data)
smap.plot()
pb.pltutils.colorbar()
plt.figure(figsize=(6.8, 3))
plt.subplot(121, title="sin(10x)")
smap = model.structure_map(np.sin(10 * model.system.x))
smap.plot()
pb.pltutils.colorbar()
plt.subplot(122, title="cos(5y)")
smap = model.structure_map(np.cos(5 * model.system.y))
smap.plot()
pb.pltutils.colorbar()
Contour plots for large systems¶
For larger systems, structure plots don’t make much sense because the details of the sites and hoppings would be too small to see. Contour plots look much better in this case.
plt.figure(figsize=(6.8, 3))
model = pb.Model(graphene.monolayer(), graphene.hexagon_ac(10))
plt.subplot(121, title="sin(x)")
smap = model.structure_map(np.sin(model.system.x))
smap.plot_contourf()
pb.pltutils.colorbar()
plt.subplot(122, title="cos(y/2)")
smap = model.structure_map(np.cos(0.5 * model.system.y))
smap.plot_contourf()
pb.pltutils.colorbar()
Composing multiple plots¶
Various plotting methods or even different invocations of the same method can be composed to create nice figures. For example, we may want to use different colormaps to distinguish between sublattices A and B when plotting some data on top of the structure of graphene. Below, the first pass plots only the hopping lines, the second pass draws the sites of sublattice A and the third draws sublattice B. The darkness of the color indicates the intensity of the mapped data, while blue/red distinguishes the sublattices.
model = pb.Model(graphene.monolayer(), graphene.hexagon_ac(1))
custom_data = 2 * model.system.x * (model.system.y + 1)
smap = model.structure_map(custom_data)
plt.figure(figsize=(6.8, 3))
plt.subplot(121, title="Regular plot")
smap.plot()
plt.subplot(122, title="Composite plot")
smap.plot(site_radius=0) # only draw hopping lines, no sites
a_only = smap[smap.sublattices == "A"]
a_only.plot(cmap="Blues", hopping={'width': 0}) # A sites, no hoppings
b_only = smap[smap.sublattices == "B"]
b_only.plot(cmap="Reds", hopping={'width': 0}) # B sites, no hoppings | 2020-09-19 19:04:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3954576253890991, "perplexity": 8949.813682588905}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192783.34/warc/CC-MAIN-20200919173334-20200919203334-00640.warc.gz"} |
https://crypto.stackexchange.com/questions/37927/is-there-a-prng-algorithm-that-allows-to-switch-between-states-directly-not-cal | # Is there a PRNG algorithm that allows to switch between states directly, not calling next()?
Basically I need a function with the following interface:
getRandomBytes(uint128 seed, uint64 state, uint16 byteCount)
Consecutive calls with the same seed and state must give the same results.
What algorithm should I look at?
• Perhaps a stream cipher with nonce support (e.g. AES-CTR, Salsa20, ChaCha, or any eSTREAM contestant)? Or if the output size is limited, a hash should work as well. Jul 24 '16 at 14:10
• What is the difference between seed and state? Do you need to also be able to get the current state?
– otus
Jul 24 '16 at 15:00
• @otus Ah, now I see, yes, I guess you should take the seed and state together. Do you need a 64 bit state, or can more information be transmitted? Jul 24 '16 at 15:09
• @MaartenBodewes Yeah, you're right, I should concatenate them. "State" is basically coordinates. The function is used to generate random world content. Same coordinates = same content. Changing the seed should change the whole world. Jul 24 '16 at 16:47
• @alain_morel: Maybe you can just use a decent keyed hash function to map unique inputs to pseudo-random values. SipHash may work here, but if security is not a requirement I bet even a non-cryptographic hash like MurmurHash would work. This blog entry and this one I just found on Google are illustrative. Jul 26 '16 at 21:54
Probably the easiest one with regards to the protocol is a XOF, an eXtendable Output Function. Two have been defined as part of SHA-3, called SHAKE-128 and SHAKE-256. These have a single input of any length, and can output as many bytes as required.
Of course SHA-3 is relatively new, so not all runtimes / API's may support the SHAKE variants out of the box.
In your case you would feed it the 128 bit seed concatenated with the 64 bit state. There should be no need for two separate variables.
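As a sketch of this approach (my addition, not part of the original answer): Python's built-in hashlib has exposed the SHAKE variants since version 3.6, so the interface from the question can be implemented directly. The function and variable names below are made up for illustration.

```python
import hashlib

def get_random_bytes(seed: bytes, state: int, byte_count: int) -> bytes:
    """Deterministically derive byte_count pseudorandom bytes from seed + state."""
    # Concatenate the 128-bit seed and the 64-bit state, as suggested above.
    data = seed + state.to_bytes(8, "big")
    return hashlib.shake_128(data).digest(byte_count)

# Consecutive calls with the same seed and state give the same result:
assert get_random_bytes(b"\x01" * 16, 42, 32) == get_random_bytes(b"\x01" * 16, 42, 32)
```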
A stream cipher may do the trick; you'd use your "seed" as key and your "state" as nonce, and just output a prefix of the keystream. Currently popular ones are AES in CTR mode and ChaCha20.
• Note that most stream ciphers are designed to be used with random keys. So if your seeds aren't random, you might need to hash them first. Jul 24 '16 at 14:21
This has been said by others, but I wanted to stress it more. The input sizes you are looking at all fit into AES with the (random) seed as the key and the "state" together with a counter $i$ that runs from $0$ up to $\frac{\rm byteCount}{16}$ concatenated as the input (padded with fixed zeroes). AES is assumed to behave as a pseudorandom permutation and so this will give you a very high quality pseudorandom string each time.
AES is also much faster than using something like SHA-3. However, I want to stress that although SHA-3 and in fact all cryptographic hash functions are designed to have "random looking behavior", this is not their main design principle. They are designed first and foremost to be collision resistant, and then to behave "randomly" as a secondary design principle. In contrast, block ciphers are designed to behave pseudorandomly. As such, they are the preferred primitive of choice in these cases. Note that I am not saying that you cannot use a hash function for such purposes. However, I would typically go for it only when there's a reason not to use something like AES. In this specific case, there is no such reason (and AES would be even faster).
Finally, regarding AES vs something like ChaCha20: again, being conservative, AES has undergone much more cryptanalytic scrutiny than ChaCha20 so I prefer it. If you have AES-NI hardware support then it's also well fast enough. If not, and speed is very critical then you may wish to consider something like ChaCha20.
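A minimal sketch of the AES-CTR construction described above, written against the pyca/cryptography package (one possible choice; any AES-CTR implementation works the same way). The 128-bit seed serves as the AES-128 key, and the 64-bit state fills the high half of the 16-byte counter block, leaving the low half for the block counter $i$:

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def get_random_bytes(seed: bytes, state: int, byte_count: int) -> bytes:
    """seed must be 16 bytes (the AES-128 key); state selects the keystream segment."""
    # High 8 bytes: the state; low 8 bytes: the block counter, incremented by CTR mode.
    initial_counter = state.to_bytes(8, "big") + (0).to_bytes(8, "big")
    encryptor = Cipher(algorithms.AES(seed), modes.CTR(initial_counter)).encryptor()
    # Encrypting zero bytes returns the raw AES-CTR keystream.
    return encryptor.update(b"\x00" * byte_count)
```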
• Thanks for detailed explanation. I tried AES and it's really much faster than SHAKE. Jul 26 '16 at 15:21
• @alain_morel Current Oracle Java and OpenJDK's use AES-NI to speed things up (and the bytecode has been optimized a lot too) so that AES is faster than SHAKE is not really a surprise. Keccak needs more attention and hardware support to get to the same level. In principle the amount of passes = 1 in both cases. Jul 26 '16 at 15:53
• @MaartenBodewes Thanks for the information. At first I chose SHAKE because it looked simpler. But after some reading I made AES work (providing a predefined initialization vector), and the speed difference is huge. Jul 26 '16 at 16:06
• After more testing it appears that the output of processing 2 similar inputs (i.e. 1 bit difference) differs very little, not by 50%. Is it intended or I'm doing something wrong? Jul 26 '16 at 19:25 | 2022-01-22 20:18:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34424251317977905, "perplexity": 1066.7515286645912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303884.44/warc/CC-MAIN-20220122194730-20220122224730-00626.warc.gz"} |
http://www.skimwp.org/cannot-determine/cannot-determine-no-bounding-box-latex.php | Home > Cannot Determine > Cannot Determine No Bounding Box Latex
# Cannot Determine No Bounding Box Latex
! LaTeX Error: Cannot determine size of graphic in Pictures/logo.png (no BoundingBox).
The cause: the plain latex (DVI) engine only supports vector graphics (read: eps) and cannot read the natural size of a PNG or JPEG, so it finds no BoundingBox. –Stephan202 Apr 8 '09 at 21:36. For what it's worth, JPG is quite possibly the worst image format to use here: whereas a PDF includes DPI and size, a JPEG has only a size in terms of pixels.
A document that reproduces the error when compiled with latex rather than pdflatex:
\documentclass[dvips,sts]{imsart}
\usepackage{graphicx}
\usepackage{float}
\begin{document}
\begin{figure}[h!]
\centering
\includegraphics[width = \textwidth]{simLinkError.pdf}
\caption{Blah}
\label{fig:sim1}
\end{figure}
\end{document}
The same happens in Texmaker with \includegraphics[scale=1]{figure} for a PNG. It probably has to do with what compiler you are using.
Possible fixes:
• Run pdflatex instead of latex: it can read PNG, JPEG and PDF files and determine their natural size automatically. Incidentally, you can change this in TexShop from the "Typeset" menu; in Texmaker or TeXworks, make sure the option to run pdflatex is ticked.
• If you stay with the DVI route, convert the image to EPS first (GraphicsConvertor on a mac will do that for you easily), then change \includegraphics[width=0.8\textwidth]{tiger.png} to \includegraphics[width=0.8\textwidth]{tiger.eps}.
• With the dvipdfmx driver, the following worked for me: \usepackage[dvipdfmx]{graphicx} with \usepackage{bmpsize}. –zeroos May 12 '15 at 20:56. The bmpsize package can also be used to replace extractbb. –Leo Liu May 8 '11 at 15:15
• You are also able to state the natural size of the image yourself using natwidth=... and natheight=... (not width=, as that tries to scale to that size but still needs the natural size), which will make latex compile without error. Note that in some .eps files the bounding box information is at the end, rather than at the beginning where it really belongs. –Kristiansen Sep 17 '13 at 19:29
• It is often better to take the JPEG and convert it into a PDF (on a mac) or EPS (on a PC).
• Please check that there is no inclusion of epsfig; it is deprecated. | 2019-03-20 22:04:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4851512014865875, "perplexity": 4544.255435462175}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202471.4/warc/CC-MAIN-20190320210433-20190320232433-00131.warc.gz"}
http://dominikschmidt.xyz/game-theory-notes/ | These are my lecture notes on the Stanford coursera course on Game Theory.
Normal Form Games
A finite Normal Form Game (or Matrix form, strategic form) of $$n$$ players is a tuple $$\langle N, A, u\rangle$$, where:
• $$N=\{1, \ldots, n\}$$ is a finite set denoting the players (indexed by $$i$$)
• $$A=A_1 \times \ldots \times A_n$$ denotes the action profile where $$A_i$$ denotes the action set for player $$i$$. Each action $$a_i \in A_i$$ is referred to as a pure strategy.
• $$u=(u_1, \ldots, u_n)$$ is a profile of utility functions where $$u_i\colon A → \mathbb{R}$$ is the utility/payoff function for player $$i$$
A game is finite if it takes a finite amount of time to write down (finite number of players, finite number of actions for every players and therefore a finite number of utility values).
A game of two players is called a game of pure competition when both players have exactly opposed interests, i.e. $$\forall a \in A, u_1(a)+u_2(a) = c$$ for some constant $$c$$ (special case zero sum games with $$c = 0$$).
A game is referred to as a game of cooperation when all players have the exact same interests, i.e. $$\forall a\in A, \forall i, j, u_i(a) = u_j(a)$$.
Strategies
A strategy $$s_i$$ for agent $$i$$ is a probability distribution over the actions $$A_i$$.
A strategy is called pure if only one action is played with positive probability, otherwise the strategy is called mixed. The set of actions with positive probability in a mixed strategy is called the support of the strategy.
Let the set of all strategies for $$i$$ be denoted $$S_i$$ and the set of all strategy profiles be denoted $$S=S_1\times \ldots \times S_n$$.
For players following mixed strategies the payoff is then defined in terms of an expectation: $$u_i(s) = \sum_{a\in A} u_i(a) \Pr(a|s)\\ \Pr(a|s) = \prod_{j\in N} s_j(a_j)$$ where $$s \in S$$.
When to play mixed strategies? To confuse opponent (like in matching pennies) or when uncertain about the other's action (like in battle of the sexes).
Best Response
Let $$s_{-i}=\langle s_1, \ldots, s_{i-1}, s_{i+1}, \ldots, s_n\rangle$$ so $$s=(s_i, s_{-i})$$. Then the (not necessarily unique) best response is defined as $$s_i^\ast \in BR(s_{-i}) \iff \forall s_i \in S_i, u_i(s_i^\ast, s_{-i}) \ge u_i(s_i, s_{-i})$$
Nash Equilibrium
The Nash equilibrium is a action profile where no player can increase their expected reward by changing his strategy while other players keep theirs unchanged.
$$s=\langle s_1, \ldots, s_n\rangle$$ is a Nash equilibrium $$\iff \forall i, s_i \in BR(s_{-i})$$.
$$a=\langle a_1, \ldots, a_n\rangle$$ is a pure strategy Nash equilibrium $$\iff \forall i, a_i \in BR(a_{-i})$$.
(Nash 1950) Every finite game has a Nash equilibrium. (not all games have a pure strategy Nash equilibrium however)
A Nash equilibrium in strictly dominant strategies is unique. Therefore the prisoners dilemma has only a single pure strategy NE and no mixed strategy NEs.
Computing Nash Equilibria
Computing Nash equilibria is hard in general but easy if we can guess (or know) the support.
Note: if the results of the above computation weren't probabilities (in the range $$(0, 1)$$) we would know that there exists no equilibrium with the given support.
Current Algorithms (exponential worst case):
• LCP (Linear Complementarity) formulation [Lemke-Howson 1964]
($$s$$ are strategies, $$r$$ are slack variables)
• Support Enumeration Method [Porter et al. 2004]: Enumerate supports using clever heuristics (to curb exponential number of supports) and try to find an equilibrium for the given supports by formulating them as linear programs. A heuristic for searching through different supports is to start by looking at small supports and generally bias our search towards supports that are similar in size for each player.
($$\sigma$$ is the set of actions in the support for a given player)
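A minimal sketch of the "solve for an equilibrium given a guessed support" step for a $$2 \times 2$$ game with full support (the payoff numbers below are the usual Battle of the Sexes values, chosen only for illustration): each player's mixed strategy must make the other player indifferent over the support.

```python
import numpy as np

A = np.array([[2., 0.], [0., 1.]])  # row player's payoffs (Battle of the Sexes)
B = np.array([[1., 0.], [0., 2.]])  # column player's payoffs

# Guessed support: both players mix over both actions.
# Column strategy q must equalize the row player's payoffs: (A @ q)[0] == (A @ q)[1].
q = np.linalg.solve(np.vstack([A[0] - A[1], np.ones(2)]), [0., 1.])
# Row strategy p must equalize the column player's payoffs: (p @ B)[0] == (p @ B)[1].
p = np.linalg.solve(np.vstack([B[:, 0] - B[:, 1], np.ones(2)]), [0., 1.])

print(p, q)  # [0.667 0.333] and [0.333 0.667]
# If any entry fell outside (0, 1), no equilibrium with this support would exist.
```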
Complexity of Finding Nash Equilibria
The decision problem "Does an NE exist" is trivial since by Nash's theorem it is guaranteed to exist. The following decision problems for a game $$G$$ however are NP-complete:
• Does a unique NE exist?
• Does a strictly Pareto efficient NE exist?
• Does an NE exist with a guaranteed payoff or guaranteed social welfare?
• Does an NE exist that includes/excludes specific actions?
The complexity of finding a even a single Nash equilibrium is captured by the complexity class Polynomial Parity Arguments on Directed graphs. PPAD is "between P and NP" and was defined by Papadimitriou in 1994.
• FNP problems are constructive versions of NP problems (F stands for functional)
• TFNP is a subclass of FNP for problems for which a solution is guaranteed to exist (T stands for "total")
• PPAD is a subclass of TFNP where the proofs are based on parity arguments in directed graphs
Theorem: Finding a Nash equilibrium is PPAD-complete.
Domination
Let $$s_i$$ and $$s_i'$$ be two strategies for player $$i$$ and let $$S_{-i}$$ be the set of all possible strategy profiles for the other players.
Then $$s_i$$ strictly dominates $$s_i'$$ if $$\forall s_{-i} \in S_{-i}, u_i(s_i, s_{-i}) > u_i(s_i', s_{-i})$$.
Furthermore $$s_i$$ very weakly dominates $$s_i'$$ if $$\forall s_{-i} \in S_{-i}, u_i(s_i, s_{-i}) \ge u_i(s_i', s_{-i})$$.
A strategy that dominates all others is called dominant. A strategy profile consisting of dominant strategies for every player must be a Nash equilibrium.
"Dominant strategies are powerful from both an analytical point of view and a player’s perspective. An individual does not have to make any predictions about what other players might do, and still has a well-defined best strategy. [..] A basic lesson from the prisoners’ dilemma is that individual incentives and overall welfare need not coincide."[1].
Pareto Optimality
An outcome $$o$$ is said to Pareto-dominate an outcome $$o'$$ if for every agent $$o$$ is at least as good as $$o'$$ and there is some agent who strictly prefers $$o$$ over $$o'$$.
An outcome is Pareto-optimal if it is not Pareto-dominated by any other outcome. Every outcome in a zero-sum game is Pareto-optimal.
The prisoners dilemma is a dilemma for the exact reason that the Nash equilibrium is the only non-Pareto-optimal outcome. (The outcome most likely to happen is the socially worst outcome)
Iterated Removal of Strictly Dominated Strategies
A strictly dominated strategy (a single specific action) can never be a best reply, since no matter the opponents' actions the dominating strategy is always strictly better. Therefore we can assume that a rational player would never play this strategy, and we can remove it from the game. Furthermore, we can also remove pure strategies that are strictly dominated by mixed strategies. By doing so repeatedly (in any order) we have an iterative procedure to simplify complex games.
Performing this iterative algorithm preserves Nash equilibria since in a Nash equilibrium everybody plays a best reply (and we don't remove best replies). It can therefore be used as a preprocessing step before computing an equilibrium.
If after the procedure only a single action profile remains, that profile is the unique NE of the game and the game is called dominance solvable.
Note: we can also iteratively remove weakly dominated strategies but since those can be best replies we may remove equilibria (at least one equilibrium is preserved) and the order of removal may matter.
Maxmin & Minmax Strategies
The maxmin strategy for player $$i$$ is $$\arg \max_{s_i} \,\, \min_{s_{-i}} u_i(s_i, s_{-i})$$ i.e. the strategy that maximizes $$i$$'s worst-case payoff.
The corresponding maxmin value or safety level, which is the minimum payoff guaranteed by the maxmin strategy, is $$\max_{s_i} \, \min_{s_{-i}} u_i(s_i, s_{-i})$$ The minmax strategy for player $$i$$ is $$\arg \min_{s_i} \,\, \max_{s_{-i}} u_{-i}(s_i, s_{-i})$$ i.e. the strategy that minimizes the opponent's best-case payoff.
The corresponding minmax value, which is the opponent's maximum payoff guaranteed by the minmax strategy, is $$\min_{s_i} \, \max_{s_{-i}} u_{-i}(s_i, s_{-i})$$
The Minimax Theorem
In any finite, two-player, zero-sum game, in any Nash equilibrium each player receives a payoff that is equal to both his maxmin value and his minmax value. This value is referred to as the value of the game. Furthermore the set of maxmin strategies coincides exactly with the set of minmax strategies and any maxmin/minmax strategy profiles are Nash equilibria.
The minimax theorem shows us that we can easily find NE for two-player zero-sum games by solving the corresponding minmax problem.
The minmax problem for 2x2 games can be solved by writing down and solving the corresponding maxmin value equations for each player.
The general minmax problem for two players is easily solvable with LP. $$U_1^\ast$$ is the value of the game (the payoff to player 1 in equilibrium), $$s_2$$ is the mixed strategy for player 2 that we want to find, $$k$$ is a pure strategy.
The minmax problem for matching pennies:
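The LP itself appeared only as a figure in the original notes; a standard way to write it (a sketch, with player 2 choosing $$s_2$$ to minimize the best payoff player 1 can secure with any pure strategy $$j$$) is:
$$\begin{aligned} \min_{U_1^\ast,\, s_2} \quad & U_1^\ast \\ \text{subject to} \quad & \sum_{k \in A_2} u_1(a_1^j, a_2^k)\, s_2^k \;\le\; U_1^\ast \qquad \forall j \in A_1 \\ & \sum_{k \in A_2} s_2^k = 1, \qquad s_2^k \ge 0 \quad \forall k \in A_2 \end{aligned}$$
For matching pennies with payoffs $$\pm 1$$, the constraints become $$s_2^H - s_2^T \le U_1^\ast$$ and $$s_2^T - s_2^H \le U_1^\ast$$ with $$s_2^H + s_2^T = 1$$, whose optimum is $$s_2 = (\tfrac{1}{2}, \tfrac{1}{2})$$ with value $$U_1^\ast = 0$$.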
Correlated Equilibrium
A correlated equilibrium is a randomized assignment of (potentially correlated) action recommendations to agents, such that nobody wants to deviate.
A correlated equilibrium is a generalization of Nash equilibria since any Nash equilibrium can be expressed as a correlated equilibrium where action recommendations are not correlated.
A CE can for example be used to achieve fair and optimal outcomes in the Battle of the Sexes game.
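As a concrete sketch (with the usual Battle of the Sexes payoffs $$(2,1)$$ and $$(1,2)$$ on the two coordinated outcomes, assumed here for illustration): recommend each coordinated outcome with probability $$\tfrac{1}{2}$$ using a shared coin flip. A player told to attend their less-preferred event gets $$1$$ by complying but $$0$$ by deviating (miscoordination), so following the recommendation is a best response, and each player's expected payoff is $$\tfrac{2+1}{2} = 1.5$$, a fair outcome that no single pure Nash equilibrium provides.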
Extensive Form Games
Extensive form games are an alternative representation of games that makes their temporal structure (sequence/time) explicit.
Perfect Information Extensive Form Games
A (finite) perfect-information game is defined by the tuple $$(N, A, H, Z, \chi, \rho, \sigma, u)$$, where
• $$N$$ is a set of $$n$$ players.
• $$A$$ is a set of actions (not action profiles like in normal form games)
• $$H$$ is a set of choice nodes
• $$\chi$$ is the action function $$\chi\colon H \rightarrow 2^A$$ that assigns a set of possible actions to each choice node
• $$\rho$$ is the player function $$\rho \colon H → N$$ that assigns a player $$i \in N$$ to each choice node (that player that makes that choice)
• $$Z$$ is a set of terminal nodes (disjoint from $$H$$)
• $$\sigma$$ is the successor function $$\sigma \colon H \times A → H \cup Z$$ that assigns a next choice or terminal node to a choice and action pair (such that the node graph is a tree)
• $$u$$ is a utility function $$(u_1, \ldots, u_n)$$ where $$u_i \colon Z → \mathbb{R}$$ is a utility function for player $$i$$ on the terminal nodes $$Z$$
Pure Strategies in Perfect Information Extensive Form Games
The set of pure strategies for player $$i$$ is the set $$\times_{h \in H, \rho(h) =i} \chi(h)$$ where $$\times$$ is the generalized cross-product.
Theorem: Every perfect information game in extensive form has a PSNE. (In normal form games some games only have mixed strategy Nash equilibria but in extensive form perfect information games due to the sequential nature of the moves and having perfect information, randomization is never a useful strategy)
Subgame Perfection
The subgame of $$G$$ rooted at $$h$$ is the restriction of $$G$$ to the descendants of $$h$$.
The set of subgames of $$G$$ is the set of subgames of $$G$$ rooted at each of the nodes in $$G$$.
A Nash equilibrium $$s$$ is a subgame perfect equilibrium of $$G \iff$$ for any subgame $$G'$$ of $$G$$, the restriction of $$s$$ to $$G'$$ is a Nash equilibrium of $$G'$$.
Every finite extensive form game with perfect recall has a subgame perfect equilibrium.
Computing Subgame Perfect Equilibria
Backwards induction is a recursive algorithm to find subgame perfect equilibria. It starts at any node $$h$$ (e.g. the root of the game) and recursively finds an equilibrium for all its subgames and then the entire game (rooted at $$h$$).
The algorithm below simply computes the payoff under the equilibrium strategy but can be extended to compute the strategy itself. (The function labels each node with a utility vector, the labeling can be seen as an extension of the game's utility function to the non-terminal nodes)
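The algorithm itself was shown only as a figure in the original notes; here is a minimal sketch in Python against the game components $$(\chi, \rho, \sigma, u)$$, passed in as plain functions (these names are assumptions for illustration):

```python
def backward_induction(h, is_terminal, u, chi, rho, sigma):
    """Label node h with the utility vector of the subgame rooted at h."""
    if is_terminal(h):
        return u(h)                   # terminal nodes carry payoff vectors directly
    best_util = None
    for a in chi(h):                  # actions available at choice node h
        util = backward_induction(sigma(h, a), is_terminal, u, chi, rho, sigma)
        # Player rho(h) moves at h and keeps the action maximizing their own payoff.
        if best_util is None or util[rho(h)] > best_util[rho(h)]:
            best_util = util
    return best_util
```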
For zero-sum games backwards induction is referred to as the minimax algorithm. The minimax algorithm can be sped up by removing nodes that will never be reached in play (alpha-beta pruning).
Imperfect Information Extensive Form Games
A (finite) imperfect-information game is defined by the tuple $$(N, A, H, Z, \chi, \rho, \sigma, u, I)$$ where $$(N, A, H, Z, \chi, \rho, \sigma, u)$$ is a perfect-information game and $$I = (I_1, \ldots, I_n)$$, where $$I_i = (I_{i, 1}, \ldots, I_{i, k_i})$$ is a partition of player $$i$$'s choice nodes. Each $$I_{i, j}$$ contains one or more choice nodes such that for $$h, h' \in I_{i, j}$$ it holds that $$\chi(h) = \chi(h')$$ and $$\rho(h) = \rho(h')$$. This ensures that players are unable to distinguish these nodes via available actions in the node or via different players playing those nodes. Note that if each $$I_{i, j}$$ contains only a single element the game is a perfect information game.
Pure Strategies in Imperfect Information Extensive Form Games
The set of pure strategies for player $$i$$ is the set $$\times_{I_{i, j} \in I_i} \chi(I_{i, j})$$ where $$\times$$ is the generalized cross-product.
Induced Normal Form
All extensive form games can be converted to an equivalent normal form game. The induced normal form games are generally exponentially larger than the associated extensive-form games. To convert an EF game to normal form simply create a normal form game with the pure strategies in the EF game (as above) as actions in the normal form game, then fill out the matrix with the corresponding outcomes.
Not all normal form games can be converted to perfect information extensive form games due to their nonsequential nature (e.g. matching pennies) but they can always be converted to imperfect information extensive form games.
Performing the conversion NF → IIEF → NF may not return the same game but always returns an equivalent game with the same strategy spaces and equilibria.
Perfect Recall
An imperfect information game is said to have perfect recall if any agent in any information set knows all previously visited information sets and actions they have previously taken.
Behavioral Strategies
In perfect information games and imperfect information games with perfect recall, mixed strategies and behavioral strategies can emulate each other. Otherwise behavioral strategies can offer ways of playing a game that can not be done with mixed strategies.
A mixed strategy in imperfect information games is a probability distribution over pure strategies. Considering a single information set, mixed strategies assign a probability to every action available at nodes in that set. Before the game is played a concrete pure strategy is sampled from the the mixed strategy which is then executed. Notice that this entails the constraint that the agent plays the same action at every node in an information set (actions are not resampled when moving from one node to another node in the same information set).
Behavioral strategies are also probability distributions over pure strategies but they resample from the distribution every time they make a move. Behavioral strategies can have equilibria that are different from equilibria in mixed strategies.
Beyond Subgame Perfection
In incomplete information games there may not be any proper subgames: In the following example the only subgame is the game itself since both player-two choices are in the same information set and therefore not disconnected subgames. In this case subgame perfect equilibria are just general Nash equilibria.
Note: The N choice at the top stands for nature and injects randomization into the game
Here $$(S→N, W→N; \,F)$$ is an equilibrium even though the off-the-equilibrium-path action $$F$$ for player two is a non-credible threat. (It is still a nash equilibrium because if player 1 truly believes player 2 is going to play $$F$$ and player 2 truly believes player 1 is going to play $$S→N, W→N$$ neither player has an incentive to change their action)
$$(S→E, W→N; A)$$ is another (a more credible) Nash equilibrium.
Other equilibrium concepts (sequential equilibrium, perfect bayesian equilibrium) explicitly model players' beliefs about the current state of the game and may be better suited for these kinds of games.
Repeated Games
A repeated game is a game where a single normal form game is played repeatedly for a finite or infinite number of times.
Utility in Infinitely Repeated Games
Given an infinite sequence of payoffs $$(r_j)_{j=1}^\infty$$ for player $$i$$ the average reward is $$\lim_{k→\infty} \sum_{j=1}^k \frac{r_j}{k}$$
and the future discounted reward with a discount factor of $$0 < \beta < 1$$ is
$$\sum_{j=1}^\infty \beta^jr_j$$ The discount factor can be interpreted as expressing that the agent prefers present rewards over future rewards. The future discounted reward is equivalent to the expected reward in a game where the agent cares equally about present and future rewards but there is a probability of $$1-\beta$$ that the game ends in any given round.
Stochastic Games
A stochastic game is a generalization of repeated games. In a stochastic game players play games from a given set of normal form games and can transition to playing another game depending on the previous game played and all the actions taken by the players.
A stochastic game is a tuple $$(Q, N, A, P, R)$$ where
• $$Q$$ is a finite set of states (these implicitly define multiple games)
• $$N$$ is a finite set of $$n$$ players
• $$A=A_1 \times \ldots \times A_n$$ is an action profile where $$A_i$$ is the set of actions available to player $$i$$
• $$P\colon Q \times A \times Q \rightarrow [0, 1]$$ is a transition probability function where $$P(q, a, \hat q)$$ is the probability of transitioning from state $$q$$ to state $$\hat q$$ after executing action profile $$a$$
• $$R=r_1, \ldots, r_n$$ where $$r_i\colon Q \times A → \mathbb{R}$$ is a real valued payoff function for player $$i$$
This definition assumes that all games have the same strategy space (otherwise just more notation).
Stochastic games generalize Markov Decision Processes since an MDP is a single-agent stochastic game.
Fictitious Play
Let $$w(a)$$ be the number of times the opponent has played action $$a$$ and initialize $$w(a)$$ with a zero or non-zero value. Assess opponent's strategy using $$\sigma(a) = \frac{w(a)}{\sum_{a'\in A} w(a')}$$ and best respond to the mixed strategy given by $$\sigma$$.
Theorem: If the empirical distribution of each player's strategies (e.g. percentage of head/tails in matching pennies) converges in fictitious play, then it converges to a Nash equilibrium.
Note that the strategies played by each player may not converge to a single strategy.
Each of the following are sufficient conditions for the empirical frequencies of play to converge in fictitious play:
• The game is zero-sum
• The game is dominance-solvable
• The game is a potential game
• The game is $$2\times n$$ and has generic payoffs
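A minimal sketch of fictitious play for matching pennies (payoffs assumed to be $$\pm 1$$), maintaining the counts $$w(a)$$ from above for each player:

```python
import numpy as np

A = np.array([[1., -1.], [-1., 1.]])    # row player's payoffs; column player gets -A
w_row, w_col = np.ones(2), np.ones(2)   # counts of observed actions (non-zero init)

for t in range(10_000):
    # Each player best-responds to the opponent's empirical mixed strategy.
    a_row = int(np.argmax(A @ (w_col / w_col.sum())))
    a_col = int(np.argmax(-(w_row / w_row.sum()) @ A))
    w_row[a_row] += 1                   # the column player observes the row action
    w_col[a_col] += 1                   # the row player observes the column action

print(w_row / w_row.sum(), w_col / w_col.sum())  # both converge towards (0.5, 0.5)
```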
No-regret Learning
Define the regret an agent experiences at time $$t$$ for not having played $$s$$ as the difference between the reward the agent would have received under strategy $$s$$ and the reward actually received: $$R^t(s) = \alpha^t(s) - \alpha^t$$
A learning rule exhibits no regret if for any pure strategy $$s$$ it holds that $$\Pr([\lim_{t→\infty} \inf R^t(s)] \le 0) = 1$$ (in the limit the regret tends to zero).
Regret Matching
Regret matching is a no regret learning rule where the agent plays with probability proportional to its regret for not playing that action in the past.
Regret matching converges to a correlated equilibrium for finite games.
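A minimal one-step sketch of regret matching (the names are made up for illustration); `payoffs` holds this round's payoff for every pure action against the opponent's realized play:

```python
import numpy as np

def regret_matching_step(cum_regret, payoffs, rng):
    """Sample an action in proportion to positive cumulative regret, then update.

    cum_regret and payoffs are float arrays with one entry per pure action."""
    positive = np.maximum(cum_regret, 0.0)
    if positive.sum() > 0:
        probs = positive / positive.sum()                  # proportional to regret
    else:
        probs = np.full(len(payoffs), 1.0 / len(payoffs))  # no regret yet: uniform
    action = rng.choice(len(payoffs), p=probs)
    # Regret update: what each action would have earned minus what was earned.
    cum_regret += payoffs - payoffs[action]
    return action, cum_regret
```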
Pure Strategies and Equilibria in Repeated Games
A pure strategy in an infinitely repeated game is a choice of action at every round. Therefore there are infinitely many pure strategies in infinitely repeated games and there may also be infinitely many pure strategy Nash equilibria.
Since Nash's theorem only applies to finite games, there may not be any Nash equilibria in IRGs.
Finding Nash equilibria in IRGs is difficult but the Folk theorem tells us a way to identify which payoff profiles are generated by a Nash equilibrium.
Payoff Profiles
Consider an $$n$$-player game $$G=(N, A, u)$$ and any payoff vector $$r = (r_1, \ldots, r_n)$$. Let $$v_i$$ be player $$i$$'s minmax value.
A payoff profile $$r$$ is enforceable if $$r_i \ge v_i$$ for every player $$i$$.
A payoff profile $$r$$ is feasible (actually achievable on average in a concrete game) if $$r$$ can be expressed as a convex rational combination of outcomes, i.e. there exist rational $$0 \le \alpha_a \le 1$$ with $$\sum_{a \in A} \alpha_a = 1$$ such that $$r_i = \sum_{a\in A} \alpha_a u_i(a)$$ for every player $$i$$. Note that this is defined for all players simultaneously and that the alphas must be the same for each player.
Folk Theorem for IR games with Average Rewards
Given any $$n$$-player game consider a payoff profile $$(r_1, \ldots, r_n)$$
1. If $$r$$ is the payoff in any Nash equilibrium then it is enforceable.
2. If $$r$$ is both feasible and enforceable it is the payoff in some Nash equilibrium.
Note that above theorem does not state that a payoff profile in an equilibrium is necessarily feasible. This is because it may not be expressible as a rational convex combination.
See the lecture for the full proof. Short summary of the proof:
1. Proof by contradiction: Playing the minmax strategy would be a profitable deviation from the Nash equilibrium (contradiction!) since the Nash equilibrium's reward profile is enforceable.
2. By exploiting the rationality in the definition of feasibility we can construct simple strategies for all players that lead exactly to the given reward profile. Furthermore we make the strategies be trigger strategies so that when any player deviates from their strategy the others punish him and he will receive at most his minmax value. Because the reward profile is enforceable deviating from the strategy is not a profitable deviation for any player and the strategy is a therefore Nash equilibrium.
Games with Discounted Rewards
We consider stage games of the form $$(N, A, u)$$ with discount factors $$\beta_1 = \ldots = \beta_n = \beta$$. The payoff from a sequence of actions $$(a_t)_t$$ is $$\sum_t \beta_i^t u_i(a_t)$$.
The set of finite histories is $$H=\bigcup_tH^t$$ where $$H^t = \{h^t : h^t \in A^t\}$$ is the set of histories of length $$t$$. A strategy is a function $$s_i \colon H → \Delta(A_i)$$.
A strategy profile is a subgame perfect Nash equilibrium if it is Nash in every subgame where a subgame in infinitely repeated games is defined as the game starting at any specific round.
Repeatedly playing the stage game NE is always a subgame perfect equilibrium (by backward induction) and is the unique subgame perfect equilibrium if the stage game has a unique NE.
Folk Theorem for IR games with Discounted Rewards
Let $$a=(a_1, \ldots, a_n)$$ be a Nash equilibrium of the stage game.
If there exists an $$a' = (a'_1, \ldots a'_n)$$ such that all $$u_i(a') > u_i(a)$$ then there exists a discount factor $$\beta < 1$$, such that if all $$\beta_i \ge \beta$$, then there exists a subgame perfect equilibrium for the IR game where $$a'$$ is played in every period (as long as we're on the equilibrium path).
Note: the equilibrium strategy is a grim trigger strategy, meaning it plays $$a'$$ as long as no one deviates and switches to forever playing $$a$$ otherwise. Not playing $$a'$$ is then never a profitable deviation for any player as long as $$\beta$$ is large enough (if players don't care about future rewards enough, they don't care if we punish them by playing $$a$$ instead of $$a'$$).
Bayesian Games
Unlike perfect information games, Bayesian games (sometimes called games of incomplete information) can model agents' uncertainty in their and other players payoffs/utility functions.
Definition via Information Sets
A Bayesian game (defined via information sets) is a tuple $$(N, G, P, I)$$ where
• $$N$$ is a set of players
• $$G$$ is a set of games with identical strategy spaces and players $$N$$ (they differ only in their utility functions)
• $$\Pi(G)$$ is the set of probability distributions over games in $$G$$ and $$P \in \Pi(G)$$ is a common prior
• $$I=(I_1, \ldots, I_n)$$ is a set of partitions of $$G$$, one for each agent
where games in the same information set $$I_{i, j}$$ of $$I_i$$ are indistinguishable for agent $$i$$. Note that players have access to all information in the $$(N,G,P,I)$$ tuple, they just don't know which game is being played.
Agents' beliefs are posteriors, obtained by conditioning the common prior $$P$$ on individual private signals.
Definition via Epistemic Types
A Bayesian game (defined via epistemic types) is a tuple $$(N, A, \Theta, p, u)$$ where
• $$N$$ is a set of agents
• $$A=(A_1, \ldots, A_n)$$, where $$A_i$$ is the sets of actions for player $$i$$
• $$\Theta = (\Theta_1, \ldots, \Theta_n)$$, where $$\Theta_i$$ is the type space of player $$i$$
• $$p\colon \Theta → [0, 1]$$ is a common prior
• $$u=(u_1, \ldots, u_n)$$, where $$u_i\colon A \times \Theta → \mathbb{R}$$ is the utility function for player $$i$$
An agent's type consists of all information and beliefs the agent has that is not common knowledge.
Bayesian Nash Equilibrium
A pure strategy is a function $$s_i\colon \Theta → A_i$$. A mixed strategy is a function $$s_i\colon \Theta → \Pi(A_i)$$. $$s_i(a_i | \theta_i)$$ is the probability that agent $$i$$ plays action $$a_i$$ under the mixed strategy $$s_i$$ given that his type is $$\theta_i$$.
Kinds of expected utility ($$EU$$)
• ex-ante: the agent knows nothing about anyone's actual type
• interim: the agent only knows their actual type
• ex-post: the agent knows all agent's actual types
The interim expected utility for player $$i$$ is: $$EU_i(s|\theta_i) = \sum_{\theta_{-i} \in \Theta_{-i}} p(\theta_{-i}|\theta_i) \sum_{a \in A} \left( \prod_{j\in N} s_j(a_j| \theta_j) \right) u_i(a, \theta_i, \theta_{-i})$$ The ex-ante expected utility for player $$i$$ is: $$EU_i(s) = \sum_{\theta_i \in \Theta_i} p(\theta_i) EU_i(s | \theta_i)$$ A Bayesian Nash equilibrium is a mixed strategy profile $$s$$ such that each player plays a best response, i.e for each $$i$$ and $$\theta_i \in \Theta_i$$: $$s_i \in \arg \max_{s_i'} EU_i(s_i', s_{-i}, \theta_i)$$ This interim maximization based definition is equivalent to the ex-ante based formulation if all $$p(\theta_i) > 0$$ $$s_i \in \arg \max_{s_i'} EU_i(s_i', s_{-i})$$ since maximizing the $$EU$$ for each type also maximizes their weighted average and not maximizing the $$EU$$ for any type that has nonzero probability doesn't maximize the average. (?)
See this example.
References
1. Jackson, Matthew O., A Brief Introduction to the Basics of Game Theory (December 5, 2011 | 2021-04-20 18:45:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8450253009796143, "perplexity": 576.0396581933545}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039490226.78/warc/CC-MAIN-20210420183658-20210420213658-00563.warc.gz"} |
http://mathhelpforum.com/geometry/156483-area-terms-print.html | # Area in terms of ...
• Sep 16th 2010, 09:29 PM
razemsoft21
Area in terms of ...
http://www.mathhelpforum.com/math-he...&thumb=1&stc=1
• Sep 16th 2010, 10:37 PM
RHandford
Is this the complete question or was there any additional informaton?
• Sep 16th 2010, 11:02 PM
razemsoft21
Quote:
Originally Posted by RHandford
Is this the complete question or was there any additional informaton?
Yes it is complete (note that the 2 small circles have the same radius (r) and the big one have radius (R) ).
• Sep 17th 2010, 01:58 AM
Wilmer
5:30am here; need 2 coffees (one for each eye) before I can do anything...
so I'll just make an observation, hoping it makes sense:
make G the top of green triangle , so that we have right triangle GDC
make a=DG, b=DC and c=CG; then r = ab / (a + b + c); also, b = 2R
extend BA to left to point E, CD to left to point F, such that the resulting
rectangle EBCF has diagonal CE, G being on CE
make e=EF=BC, f=BE=CF and g = CE; then R = ef / (e + f + g)
right triangles GDC and EBC are similar
That's it for now...
• Sep 17th 2010, 04:23 PM
Wilmer
Ok, did some work; my idea to create rectangle EBCF helps some, but not much:
by inserting the larger circle on left (tangent to EF, DF and EC), it is easier to see
that by similarity, e / (2R) = a / (2r) : remember that e=EF and a = DG.
With r = ab / (a + b + c) and b = 2R, lots of messy stuff (as interesting as a
root canal performed by a drunk dentist) leads to a = r(2R - r) / (R - r) and
to e = R(2R - r) / (R - r). This then makes these areas:
triangle CDG = Rr(2R - r) / (R - r) and rectangle ABCD = 2R^2(2R - r) / (R - r)
Makes for a neat ratio areaCDG : areaABCD = r : 2R
HOWEVER, all this does not take in consideration the smaller circle seen at corner C,
I noticed that your diagram "intends?" to show circle radius R; but it does not:
what appears is an ellipse! Seems to be only way to accomodate that smaller circle.
So could you CHECK that out please.
As far as I'm concerned, it is not possible to have that 2nd smaller circle fit in.
I'll believe it is ONLY if YOU can show an example with dimensions.
• Sep 17th 2010, 09:23 PM
razemsoft21
No answer up till now ?
• Sep 18th 2010, 03:53 AM
Wilmer
I've given you plenty for the answer:
"This then makes these areas:
triangle CDG = Rr(2R - r) / (R - r) and rectangle ABCD = 2R^2(2R - r) / (R - r)"
Let J = Rr(2R - r) / (R - r) and K = 2R^2(2R - r) / (R - r)
Green area = J - pi r^2
Blue area = K - J - pi R^2
As I told you, I'm only using the smaller circle inscribed in the triangle.
You didn't answer my questions in last post, so I'm done with this.
• Sep 18th 2010, 11:52 AM
razemsoft21
Thank you any way for trying your best
waiting for other to try ....
• Sep 18th 2010, 05:20 PM
simplependulum
I can show that the right triangle is $7-24-25$ and $r:R = 1:4$ .
Let me include my approach to this problem ( at first I thought the solution was so long but actually it wasn't ! )
Name the last 'unnamed' vertex of the right angled triangle , call it $E$ .
Let $\angle BCE = 2\theta$ then we have
$\frac{R-r}{R+r} = \sin(\theta) ~~~ --(1)$
Consider the right triangle $CDE$ , by using the formula calculating the inradius $r = \frac{S}{p}$ , we obtain $r = \frac{ 2R^2 \sin( \angle DCE ) }{ R( \sin(\angle DCE) + \cos(\angle DCE) + 1 )}$
$= \frac{ R( \sin(\angle DCE ) + \cos(\angle DCE) - 1)}{\cos( \angle DCE) }$
But $\angle DCE = 90^o - 2 \theta$ so
$r = \frac{ R( \sin(2\theta ) + \cos(2\theta ) - 1)}{\sin( 2\theta ) }$
$\frac{r}{R} = \frac{ \sin(2 \theta ) + \cos(2\theta ) - 1}{\sin( 2\theta ) }$
$\frac{R-r}{R} = \frac{ \sin( 2\theta ) - ( \sin(2\theta ) + \cos(2\theta ) - 1)}{ \sin( 2\theta ) } = \frac{ 1 - \cos(2 \theta )}{\sin( 2\theta )} = \tan( \theta )$
From $(1)$ , we have
$\frac{R-r}{2R} = \frac{ \sin( \theta )}{1 + \sin( \theta )}$ so we have
$\frac{ \tan( \theta )}{2} = \frac{ \sin( \theta )}{1 + \sin( \theta )}$
$2\cos( \theta ) = 1 + \sin(\theta )$
$\sin(\theta ) = \frac{3}{5}$
so we can find the results I mentioned.
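Spelling out that last step: with $\sin(\theta) = \frac{3}{5}$ and hence $\cos(\theta) = \frac{4}{5}$, the double-angle formulas give
$\sin(2\theta) = 2\sin(\theta)\cos(\theta) = \frac{24}{25}, \qquad \cos(2\theta) = 1 - 2\sin^{2}(\theta) = \frac{7}{25}$
so the right triangle at $C$ is $7-24-25$, and from $(1)$, $\frac{R-r}{R+r} = \frac{3}{5}$ gives $5R - 5r = 3R + 3r$, i.e. $r : R = 1 : 4$.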
• Sep 18th 2010, 11:09 PM
Wilmer
Ahhhh.....I see; agree SP.
The way the question was worded completely mislead me (that's my fault!).
Really, the question (as posed) can be answered this way:
let a = short leg of right triangle, b = other leg, h = rectangle height; P = pi;
ratio green : blue = (ab - 2Pr^2) : [b(2h - a) - 2P(r^2 + R^2)] | 2017-06-26 01:25:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8619396686553955, "perplexity": 2800.4420615234812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320595.24/warc/CC-MAIN-20170625235624-20170626015624-00362.warc.gz"}
https://idontgetoutmuch.wordpress.com/2008/12/27/skorokhod-representation-of-a-random-variable/ | # Skorokhod Representation of a Random Variable
See 3.12 of Williams.
Let $X^{-}(\omega) = \inf \{ x \mid F(x) \geq \omega \}$ where $F : \mathbb{R} \rightarrow [0,1]$ and
1. $x \leq y \implies F(x) \leq F(y)$.
2. $\lim_{x \rightarrow \infty} F(x)= 1$ and $\lim_{x \rightarrow -\infty} F(x) = 0$.
3. $F$ is right continuous.
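Before the proof, a quick numerical illustration of why this construction matters (my own sketch, not part of Williams): applying $X^{-}$ to a uniform $\omega$ reproduces the distribution $F$, here for $F(x) = 1 - e^{-x}$ where $X^{-}(\omega) = -\log(1-\omega)$.

```python
import numpy as np

# Sample omega ~ Uniform(0, 1) and push it through the generalized inverse of
# F(x) = 1 - exp(-x); the result should look like Exp(1) draws.
rng = np.random.default_rng(0)
omega = rng.uniform(size=100_000)
x = -np.log(1 - omega)           # X^-(omega) for the exponential distribution
print(x.mean(), x.var())         # both close to 1, as Exp(1) requires
```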
Then by the proof $X^{-}(\omega) \leq c \implies \omega \leq F(c)$. Thus $c \not \in \{ y \mid F(y) < \omega \}$ and $c$ is moreover an upper bound of that set (if not, then $\exists y$ such that $F(y) < \omega$ and $c < y$, but by monotonicity $F(c) \leq F(y) < \omega$, a contradiction). Therefore $c \geq \sup \{ y \mid F(y) < \omega \}$.
On the other hand suppose $c < \inf \{ x \mid F(x) \geq \omega \}$; then $F(c) < \omega$ (if not, then $F(c) \geq \omega$, so $c$ belongs to $\{ x \mid F(x) \geq \omega \}$, and the infimum of that set is a lower bound for its members, which would imply $\inf \{ x \mid F(x) \geq \omega \} \leq c$). Since $F(c) < \omega$, $c$ belongs to $\{ y \mid F(y) < \omega \}$, and $\sup \{ y \mid F(y) < \omega \}$ is an upper bound for that set, so $c \leq \sup \{ y \mid F(y) < \omega \}$. Now suppose $c = \sup \{ y \mid F(y) < \omega \}$; then $F(c + 1/n) \geq \omega$ for every $n$, and by right continuity this implies $F(c) \geq \omega$, contradicting $F(c) < \omega$. Thus we must have $c < \sup \{ y \mid F(y) < \omega \}$ | 2017-03-27 04:40:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 29, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999148845672607, "perplexity": 85.99126096717077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189403.13/warc/CC-MAIN-20170322212949-00102-ip-10-233-31-227.ec2.internal.warc.gz"}
http://m-clark.github.io/docs/mixedModels/growth_vs_mixed.html | # Introduction
The following compares a structural equation modeling (SEM) approach with wide format data vs. a mixed model, long-form data approach. In a traditional mixed model we have observations clustered within some group structure, for example, test scores collected over time within students, or survey samples of individuals from each state.
In what follows we can see a traditional mixed model as a form of a constrained latent variable model, where the random effects in the mixed model are represented as a latent variable in the SEM context. This is more explicitly laid out in the area of growth curve models, which take a standard setting of a mixed model with longitudinal data in which observations occur over time. However, in the SEM approach we instead take a wide form approach to the data and explicitly model latent variables that reflect the random intercepts and slopes. Growth curve models are highly constrained compared to the usual SEM setting, as most loadings are fixed rather than estimated.
We’ll start by generating some data with the SEM context in mind, then melt it to long form and run a standard mixed model. Following that we’ll compare the two, and eventually add in random slopes.
# Random Intercepts
For the following we start with no additional covariates. In the mixed model framework, a simple random effects model is often depicted as follows:
$y_{ij} = \mu + u_{j} + e_{ij}$
In this case, each observation $$i$$ within cluster $$j$$ is the result of some overall mean plus a random effect due to belonging to group $$j$$, plus residual noise.
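One consequence worth keeping in mind for the comparisons below (standard mixed-model algebra; here $$\sigma^2_u$$ is the random-effect variance and $$\sigma^2_{e_i}$$ the residual variance for occasion $$i$$, which may differ across occasions as in the simulation that follows):
$$\mathrm{Var}(y_{ij}) = \sigma^2_u + \sigma^2_{e_i}, \qquad \mathrm{Cov}(y_{ij},\, y_{i'j}) = \sigma^2_u \quad (i \neq i')$$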
## Data
In the SEM context, we generate ‘observed’ data $$y$$ as if there were a single underlying latent variable $$f$$. Unlike traditional SEM models, here we fix the loading of the latent variable to be 1 for each observed variable. We give the observed $$y$$ variances of 1, 2 and 3 respectively, and set the ‘fixed effect’ intercept to $$\mu$$.
set.seed(8675309)
n = 1000
lvI = rnorm(n, sd=2) # Our latent variable representing random intercepts
mu = .3 # Intercept
### make the data to conform to a mixed model
y1 = mu + 1*lvI + rnorm(n, sd=1)
y2 = mu + 1*lvI + rnorm(n, sd=sqrt(2))
y3 = mu + 1*lvI + rnorm(n, sd=sqrt(3))
y = data.frame(y1, y2, y3)
head(y)
y1 y2 y3
1 -1.9605780 0.1515718 -0.1737814
2 0.4804558 3.8265794 2.8545217
3 -2.1044920 0.7679583 -2.6376176
4 5.0600987 1.5849592 2.4593212
5 4.0626543 3.5548814 5.7934398
6 2.0400965 0.8187298 5.8198780
# reshape to long for later use with nlme
library(magrittr); library(dplyr); library(tidyr)
ylong = y
ylong %<>%
gather(y, key='variable') %>%
mutate(group = factor(rep(1:n, 3)))
##
## # alternative, generate long form first
## group = factor(rep(1:n, 3))
## # y2 = mu + lvI[group] + rnorm(n*3, sd=rep(sqrt(c(1,2,3)), e=n))
## # ylong = data.frame(variable=rep(paste0('y',1:3), e=n), value=y2, group)
## # head(ylong)
As we can see and would expect, the variances of the observed variables are equal to the variance of the latent variable (the $$u$$ random effect in the mixed model) plus the residual variance. In this case 2^2 + c(1, 2, 3).
sapply(y, var)
y1 y2 y3
5.349871 6.221397 7.282551
## SEM
In the following we’ll set up the SEM in lavaan with appropriate constraints. We will hold off on results for now, but go ahead and display the graph pertaining to the conceptual model¹. At this point we’re simply estimating the variances of the latent variable and the residual variances of the observed data.
To make results comparable to later, we’ll use the settings pertaining to lavaan’s growth function.
library(lavaan)
LVmodel1 = '
I =~ 1*y1 + 1*y2 + 1*y3
'
semres1 = growth(LVmodel1, y)
## Mixed Model
For the mixed model we use the melted data with a random effect for group. The nlme package is used because it will allow for heterogeneous variances for the residuals, which is what we need here.
library(nlme)
nlmemod1 = lme(y ~ 1, random= ~ 1|group, data=ylong, weights=varIdent(form = ~1|variable), method='ML')
## Model Results and Comparison
### SEM
For the SEM approach we get the results we’d expect, and given the data set size the estimates are right near the true values.
summary(semres1)
lavaan (0.5-20) converged normally after 22 iterations
Number of observations 1000
Estimator ML
Minimum Function Test Statistic 0.990
Degrees of freedom 4
P-value (Chi-square) 0.911
Parameter Estimates:
Information Expected
Standard Errors Standard
Latent Variables:
Estimate Std.Err Z-value P(>|z|)
I =~
y1 1.000
y2 1.000
y3 1.000
Intercepts:
Estimate Std.Err Z-value P(>|z|)
y1 0.000
y2 0.000
y3 0.000
I 0.247 0.069 3.560 0.000
Variances:
Estimate Std.Err Z-value P(>|z|)
y1 1.116 0.098 11.405 0.000
y2 2.004 0.126 15.947 0.000
y3 3.072 0.167 18.371 0.000
I 4.219 0.216 19.517 0.000
### Mixed Model
With the mixed model we see the random intercept sd/variance is akin to the latent variable variance, and residual variance is what we’d expect also.
summary(nlmemod1)
Linear mixed-effects model fit by maximum likelihood
Data: ylong
AIC BIC logLik
12562.33 12592.36 -6276.164
Random effects:
Formula: ~1 | group
(Intercept) Residual
StdDev: 2.054055 1.056471
Variance function:
Structure: Different standard deviations per stratum
Formula: ~1 | variable
Parameter estimates:
y1 y2 y3
1.000000 1.340076 1.658953
Fixed effects: y ~ 1
Value Std.Error DF t-value p-value
(Intercept) 0.2466467 0.06929644 2000 3.559299 4e-04
Standardized Within-Group Residuals:
Min Q1 Med Q3 Max
-2.582310510 -0.575948827 -0.001671765 0.577319764 2.865939066
Number of Observations: 3000
Number of Groups: 1000
#### Comparison of latent variable and random effects
It’s a little messy to extract the specific individual variance estimates due to the way nlme estimates them², but the estimates of the mixed model and the SEM are the same.
coef(nlmemod1$modelStruct$varStruct, unconstrained=FALSE, allCoef=TRUE) * nlmemod1$sigma
      y1       y2       y3
1.056471 1.415752 1.752636
sqrt(diag(inspect(semres1, 'est')$theta))
      y1       y2       y3
1.056469 1.415752 1.752635
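The random-intercept variance can be compared with the latent variable variance in the same way; a minimal sketch, assuming the fitted objects above (these particular extractor calls are my own choice, not from the original text):
# random intercept variance (nlme) vs. latent 'I' variance (lavaan); both ~ 4.22
as.numeric(VarCorr(nlmemod1)['(Intercept)', 'Variance'])
inspect(semres1, 'est')$psi['I', 'I']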
Comparing the latent variable scores to the random effects, we are dealing with almost identical estimates.
comparisonDat = data.frame(LV=lavPredict(semres1)[,1], RE=ranef(nlmemod1)[,1] + fixef(nlmemod1))
round(head(comparisonDat, 10), 2)
psych::describe(comparisonDat)
cor(comparisonDat)
LV RE
1 -0.86 -0.86
2 1.70 1.70
3 -1.18 -1.18
4 3.16 3.16
5 3.76 3.76
6 2.14 2.14
7 0.57 0.57
8 1.83 1.83
9 1.71 1.71
10 1.25 1.25
vars n mean sd median trimmed mad min max range skew kurtosis se
LV 1 1000 0.25 1.93 0.31 0.26 1.91 -5.97 7.19 13.17 -0.08 0.05 0.06
RE 2 1000 0.25 1.93 0.31 0.26 1.91 -5.97 7.19 13.17 -0.08 0.05 0.06
LV RE
LV 1 1
RE 1 1
# Random intercepts and slopes
Now we will investigate random intercepts and slopes as in a standard ‘growth curve model’. For these data, y is a repeated measurement over time; otherwise the data are much the same as before. However, we will add a slight positive correlation between intercepts and slopes, and scale time to start at zero so the intercept represents the average baseline value. We’ll also add an additional time point.
## Data
The main difference here is adding a covariate for time and a second latent variable. For this demo, the ‘fixed’ effects in the mixed model sense will be set to .5 and .2 for the intercept and slope for time respectively. The variances of the latent variables are set to one. We add increasing residual variance over time.
set.seed(8675309)
n = 1000
i = .5
s = .2
f = MASS::mvrnorm(n, mu=c(i,s), Sigma=matrix(c(1,.3,.3,1), nrow=2, byrow=T), empirical=T)
f1 = f[,1]
f2 = f[,2]
### make the data to conform to a mixed model
y1 = 1*f1 + 0*(f2) + rnorm(n, sd=1)
y2 = 1*f1 + 1*(f2) + rnorm(n, sd=sqrt(2))
y3 = 1*f1 + 2*(f2) + rnorm(n, sd=sqrt(3))
y4 = 1*f1 + 3*(f2) + rnorm(n, sd=sqrt(4))
y = data.frame(y1, y2, y3, y4)
head(y)
y1 y2 y3 y4
1 0.8233074 0.6094268 -2.9239599 -1.9196902
2 2.4320979 0.8722732 0.4130522 -0.6960228
3 0.9059002 -2.6873153 -4.6821983 -4.7305664
4 0.6309905 2.0462769 4.5094473 6.4255891
5 2.6525856 6.4423691 6.0122283 6.2786173
6 0.3857095 4.3329025 3.8898125 2.1616358
# reshape to long for later use with nlme
ylong = y
ylong %<>%
gather(y, key='variable') %>%
mutate(subject = factor(rep(1:n, 4)),
time = rep(0:3, e=n))
Let’s take a look at what we have.
## Models
For the SEM we now have two latent structures, one representing the random intercepts, another the random slopes for time. For the mixed model we specify both random intercepts and slopes. The graphical model for the SEM is shown.
LVmodel2 = '
I =~ 1*y1 + 1*y2 + 1*y3 + 1*y4
S =~ 0*y1 + 1*y2 + 2*y3 + 3*y4
'
semres2 = growth(LVmodel2, y)
nlmemod2 = lme(y ~ time, data=ylong, random = ~time|subject,
weights=varIdent(form = ~1|variable), method='ML')
semPlot::semPaths(semres2, what='path', whatLabels='est', style='lisrel')
## Model Results and Comparison
### SEM
For the SEM approach we get the results we’d expect, with estimates right near the true values.
summary(semres2)
lavaan (0.5-20) converged normally after 28 iterations
Number of observations 1000
Estimator ML
Minimum Function Test Statistic 5.918
Degrees of freedom 5
P-value (Chi-square) 0.314
Parameter Estimates:
Information Expected
Standard Errors Standard
Latent Variables:
Estimate Std.Err Z-value P(>|z|)
I =~
y1 1.000
y2 1.000
y3 1.000
y4 1.000
S =~
y1 0.000
y2 1.000
y3 2.000
y4 3.000
Covariances:
Estimate Std.Err Z-value P(>|z|)
I ~~
S 0.317 0.064 4.931 0.000
Intercepts:
Estimate Std.Err Z-value P(>|z|)
y1 0.000
y2 0.000
y3 0.000
y4 0.000
I 0.480 0.043 11.096 0.000
S 0.224 0.038 5.929 0.000
Variances:
Estimate Std.Err Z-value P(>|z|)
y1 1.033 0.110 9.411 0.000
y2 1.922 0.108 17.809 0.000
y3 3.159 0.199 15.840 0.000
y4 3.932 0.347 11.323 0.000
I 1.006 0.114 8.849 0.000
S 0.995 0.069 14.364 0.000
### Mixed Model
With the mixed model we see the between group sd/variance is akin to the latent variable variance, and residual variance is what we’d expect also.
summary(nlmemod2)
Linear mixed-effects model fit by maximum likelihood
Data: ylong
AIC BIC logLik
17107.75 17164.4 -8544.875
Random effects:
Formula: ~time | subject
Structure: General positive-definite, Log-Cholesky parametrization
StdDev Corr
(Intercept) 1.0031982 (Intr)
time 0.9973894 0.317
Residual 1.0161174
Variance function:
Structure: Different standard deviations per stratum
Formula: ~1 | variable
Parameter estimates:
y1 y2 y3 y4
1.000000 1.364544 1.749243 1.951385
Fixed effects: y ~ time
Value Std.Error DF t-value p-value
(Intercept) 0.4797851 0.04324928 2999 11.093483 0
time 0.2242885 0.03783797 2999 5.927604 0
Correlation:
(Intr)
time -0.054
Standardized Within-Group Residuals:
Min Q1 Med Q3 Max
-2.668449980 -0.582027043 -0.006143264 0.582212366 3.137291189
Number of Observations: 4000
Number of Groups: 1000
Let’s compare the estimated residual variances again.
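One plausible way to compute the comparison shown below, mirroring the extraction used in the random-intercept case (my reconstruction, not the original code; note the values, despite the names, are on the standard-deviation scale):
varsMixed = coef(nlmemod2$modelStruct$varStruct, unconstrained=FALSE, allCoef=TRUE) * nlmemod2$sigma
varsGrowth = sqrt(diag(inspect(semres2, 'est')$theta))
rbind(varsMixed, varsGrowth)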
y1 y2 y3 y4
varsMixed 1.016117 1.386537 1.777437 1.982836
varsGrowth 1.016122 1.386542 1.777426 1.982842
Comparing the latent variable scores to the random effects, once again we’re getting similar results. For the latent variable regarding slopes, we’ll subtract out the fixed effect.
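One plausible reconstruction of the code behind the output below (my sketch, not the original source; coef() on the lme fit returns per-subject fixed-plus-random coefficients, and data.frame() mangles its ‘(Intercept)’ column name into X.Intercept.):
comparisonDat2 = data.frame(lavPredict(semres2), coef(nlmemod2))
round(head(comparisonDat2, 10), 2)
psych::describe(comparisonDat2)
cor(comparisonDat2[, c('I', 'X.Intercept.')])
cor(comparisonDat2[, c('S', 'time')])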
I S X.Intercept. time
1 0.38 -0.68 0.38 -0.68
2 1.16 -0.30 1.16 -0.30
3 -0.14 -1.48 -0.14 -1.48
4 0.93 1.43 0.93 1.43
5 2.32 1.62 2.32 1.62
6 1.08 0.92 1.08 0.92
7 0.63 1.01 0.63 1.01
8 1.40 1.76 1.40 1.76
9 1.27 0.43 1.27 0.43
10 0.33 0.82 0.33 0.82
vars n mean sd median trimmed mad min max range skew kurtosis se
I 1 1000 0.48 0.80 0.49 0.48 0.79 -2.21 2.96 5.18 -0.08 -0.06 0.03
S 2 1000 0.22 0.88 0.25 0.21 0.89 -2.83 3.04 5.87 0.08 0.06 0.03
X.Intercept. 3 1000 0.48 0.80 0.49 0.48 0.79 -2.21 2.96 5.18 -0.08 -0.06 0.03
time 4 1000 0.22 0.88 0.25 0.21 0.89 -2.83 3.04 5.87 0.08 0.06 0.03
I X.Intercept.
I 1 1
X.Intercept. 1 1
S time
S 1 1
time 1 1
# Which to use?
It really depends on the model specifics as to which might be best for your situation, but I would suggest defaulting to mixed models for a variety of reasons.
• Ease of implementation: Very little special syntax is needed for a mixed model approach relative to standard linear model code, whereas all SEM programs and packages require special syntax.
• Ease of interpretation/communication: Mixed models are far more commonly used across disciplines, and allow the less familiar to use their standard regression knowledge to interpret them in a straightforward fashion. I’ve often seen people that would have no trouble interpreting the mixed model (at least the fixed effects portion), but get hung up on growth models with additional covariates.
• Time-varying covariates: With time-varying covariates, the multivariate approach has each time point of the dependent variable predicted by each time point of the covariate (akin to a nonlinear relationship or an interaction with time), and the model gets unwieldy very quickly with only a handful of time-varying covariates even if there are few time points, or few covariates with many time points.
• Nonlinear relationships: While one can incorporate nonlinear relationships very easily in the standard setting due to the close ties between mixed and additive models, in the SEM setting it becomes cumbersome (e.g. adding a quadratic slope factor as if one knew the functional form) or unusual to interpret (allowing the slope loadings to be estimated, rather than fixed).
• Correlated residuals: In growth model settings it’s common to specify autocorrelated residuals based on time. The syntax to do so in the SEM framework is very tedious, while in the mixed model framework it is a one-argument change (see the sketch after this list).
• Parallel Processes: Within the Bayesian framework one can incorporate parallel processes (multivariate mixed models) via a multivariate outcome fairly easily with the brms package (example with Stan).
• Indirect Effects: Indirect effects can also be incorporated in the standard mixed model framework (example with Stan, see also the mediation package for using lme4 for multilevel mediation).
• Sample sizes: SEM is an inherently large sample technique, and growth curve models can become quite complicated in terms of the number of parameters to be estimated. Obviously large samples are always desirable for either approach, but e.g. where mixed models can be run on clustered data with 30 clusters, it would be a bit odd to use SEM for 30 observations. I have some simulation results here. It may be that one would need at least 50 clusters with many data points within each for the growth curve model to approach the performance of the mixed model, whereas the mixed model setting is okay even with a few clusters and few time points.
• Balanced data: Growth curve modeling requires balanced data across all time points, and so missing data necessarily has to be estimated or one will potentially lose too much of it. Mixed models do not, but whether one should still estimate the missing values is a matter of debate.
• Other: Mixed models have natural ties to spatial and additive models, as well as a straightforward Bayesian interpretation regarding a prior distribution for the random effects.
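As referenced in the correlated-residuals point above, here is a minimal nlme sketch adding AR(1) residuals to the earlier random intercept and slope model (the model name and the AR(1) choice are mine, for illustration):
nlmemod2ar1 = lme(y ~ time, data=ylong, random = ~time|subject,
                  correlation = corAR1(form = ~time|subject),
                  weights = varIdent(form = ~1|variable), method='ML')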
In short, my opinion is that growth curve models would probably only be preferred in settings where the model is notably complicated, but the level of complexity would start at the point where the interpretation of such a model would already be very difficult, and theoretical justification hard to come by. SEM has its place, but the standard mixed model approach is very flexible, even in fairly complicated settings, both for ease of implementation and interpretation.
# Summary
As has been demonstrated, we can think of random effects in mixed models as latent variables, and conversely, we can specify most growth models as standard mixed models. Noting the connection may provide additional insight into how to think about random effects and ways in which to incorporate their use in modeling.
1. Graph made with the semPlots package.
2. lme actually works by setting the reference group standard deviation to 1, with the coefficients representing the ratios of the other groups’ standard deviations to that reference. See ?varIdent.
https://qspace.library.queensu.ca/handle/1974/5227?show=full

dc.contributor.author: Ray, Atish
dc.date: 2009-09-25 23:59:11.809
dc.date.accessioned: 2009-09-26T21:43:20Z
dc.date.available: 2009-09-26T21:43:20Z
dc.date.issued: 2009-09-26T21:43:20Z
dc.identifier.uri: http://hdl.handle.net/1974/5227
dc.description: Thesis (Ph.D, Mechanical and Materials Engineering) -- Queen's University, 2009-09-25 23:59:11.809
dc.description.abstract: There exists considerable debate in the texture community about whether grain interactions are a necessary factor to explain the development of deformation textures in polycrystalline metals. Computer simulations indicate that grain interactions play a significant role, while experimental evidence shows that the material type and starting orientation are more important in the development of texture and microstructure. A balanced review of the literature on face-centered cubic metals shows that the opposing viewpoints have developed due to the lack of any complete experimental study which considers both the intrinsic (material type and starting orientation) and extrinsic (grain interaction) factors. In this study, a novel method was developed to assemble ideally orientated crystalline aggregates in 99.99% aluminum (Al) or copper (Cu) to experimentally evaluate the effect of grain interactions on room temperature deformation texture. Ideal orientations relevant to face-centered cubic rolling textures, Cube $\{100\}\left<001\right>$, Goss $\{110\}\left<001\right>$, Brass $\{110\}\left<1\bar{1}2\right>$ and Copper $\{112\}\left<11\bar{1}\right>$, were paired in different combinations and deformed by plane strain compression to moderate strain levels of 1.0 to 1.5. Orientation-dependent mechanical behavior was distinguishable from the neighbor-influenced behavior. In interacting crystals the constraint on the rolling direction shear strains ($\gamma_{XY}, \gamma_{XZ}$) was found to be most critical to show the effect of interactions via the evolution of local microstructure and microtexture. Interacting crystals were observed, with increasing deformation, to gradually rotate towards the S-component, $\{123\}\langle\bar{6}\bar{3}4\rangle$. Apart from the average lattice reorientations, the interacting crystals also developed strong long-range orientation gradients inside the bulk of the crystal, which were identified as accumulating misorientations across the deformation boundaries. Based on a statistical procedure using quaternions, the orientation- and interaction-related heterogeneous deformations were characterized by three principal component vectors and their respective eigenvalues for both the orientation and misorientation distributions. For the case of a medium stacking fault energy metal like Cu, the texture and microstructure development depends wholly on the starting orientations. Microstructural instabilities in Cu are explained through a local slip clustering process, and the possible role of grain interactions in such instabilities is proposed. In contrast, the texture and microstructure development in a high stacking fault energy metal like Al is found to be dependent on the grain interactions. In general, orientation, grain interaction and material type were found to be key factors in the development of rolling textures in face-centered cubic metals and alloys. More so, in the texture development no single parameter can be held responsible; rather, the interdependency of each of the three parameters must be considered. In this framework polycrystalline grains can be classified into four types according to their stability and susceptibility during deformation.
dc.format.extent: 53022107 bytes
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.relation.ispartofseries: Canadian theses
dc.rights: This publication is made available by the authority of the copyright owner solely for the purpose of private study and research and may not be copied or reproduced except as permitted by the copyright laws without written authority from the copyright owner.
dc.subject: Rolling texture, cube, brass, copper, goss, grain interaction, plane strain compression, aluminum, quaternion
dc.title: Experimental Study of Grain Interactions on Rolling Texture Development in Face-Centered Cubic Metals
dc.type: thesis
dc.description.degree: PhD
dc.contributor.supervisor: Diak, Bradley J.
dc.contributor.department: Mechanical and Materials Engineering
dc.degree.grantor: Queen's University at Kingston
https://www.esaral.com/q/if-the-latus-rectum-of-an-ellipse-is-equal-88374/ | If the latus rectum of an ellipse is equal
Question:
If the latus rectum of an ellipse is equal to half of the minor axis, then find its eccentricity.
Solution:
Equation of an ellipse: $\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}=1$
where
Length of latus rectum $=\frac{2 \mathrm{~b}^{2}}{\mathrm{a}}$
Length of minor axis $=2 b$
So, since the latus rectum equals half the minor axis (given):
$\frac{2 b}{2}=\frac{2 b^{2}}{a}$
$a b=2 b^{2}$
$2 b^{2}-a b=0$
$b(2 b-a)=0$
So, $b=0$ (rejected, since $b>0$ for an ellipse) or $a=2 b$
$b^{2}=a^{2}\left(1-e^{2}\right)$
$\mathrm{b}^{2}=(2 \mathrm{~b})^{2}\left(1-\mathrm{e}^{2}\right)$
$b^{2}=4 b^{2}\left(1-e^{2}\right)$
$1-e^{2}=\frac{1}{4}$
$\mathrm{e}^{2}=\frac{3}{4}$
$e=\frac{\sqrt{3}}{2}$
Hence, the eccentricity is $\frac{\sqrt{3}}{2}$
https://www.risk.net/journal-of-credit-risk/volume-5-number-3-september-2009 | # Journal of Credit Risk
#### Editor's Letter
Ashish Dev
The significant rise in delinquencies and foreclosures that started in the US residential mortgage sector in late 2006 and the effect that rise had on the valuation of structured products that were created out of such residential mortgage loans ended up creating the global economic recession that we are currently in. A somewhat similar crisis is developing in the US commercial mortgage sector. To be sure, the scale is rather different. The US residential mortgage sector is worth about US$11 trillion while the US commercial mortgage sector is worth about US$3.5 trillion. However, to put matters in perspective, the total value of outstanding US commercial mortgage loans has more than tripled in the last decade or so.
Of the US$3.5 trillion, only about US$700 billion has been securitized into commercial mortgage backed securities (CMBSs). Commercial banks in the US hold almost US$2 trillion of commercial real estate loans on their books. Unlike the US residential mortgage market, where only a few large banks hold mortgage loans on their balance sheets, a significant portion of the assets of thousands of community banks in the US is in commercial real estate. Some of them have commercial real estate exposure that exceeds 300% of their total capital. Taking a ratio of exposure to capital (namely, the leverage ratio, which has been a constant impediment to Basel II) is hardly a good measure of risk. But it quickly provides a rough picture of how pervasive the commercial real estate problem can be among community banks. As a consequence, the possible number of bank failures (not the total sum of money involved) and the demand on the Federal Deposit Insurance Corporation could be dramatic.

Not all commercial mortgages have similar credit risk characteristics. Typically, bank balance sheets break these loans up into four categories: property loans secured by farms; loans secured by “multifamily” properties, such as apartment buildings or condominiums; construction and land development loans, which are used to acquire land and build new commercial structures; and non-farm, non-residential loans, which are often associated with already-constructed industrial and office buildings.

The delinquency rates in the US for all categories of commercial real estate were negligible only a year ago. By year-end 2008, the rate went up to about 2.5%. Since then, property values have collapsed more than home prices (commercial mortgages are, however, underwritten with much larger possible movement in property prices than are residential mortgages, so this is not a surprise) and vacancy rates are soaring as the recession wears on in the US. The delinquency rate for the construction loan category of commercial real estate had already reached about 12% by year-end 2008.

One big difference between residential mortgages and commercial mortgages poses an especially difficult problem for the latter category. Many commercial mortgages amortize only a portion of the principal and therefore require a balloon payment at maturity. This necessitates either a refinancing at maturity or delivery of the remaining principal. Between 2009 and 2012, more than US$1 trillion of US commercial real estate loans are expected to mature and will need refinancing. With much tighter underwriting by banks, the cap rates at which refinancing will be available are going to be much higher than those for the current loans. With lower potential income or current net operating income owing to significantly lower occupancy rates in a continuing recession, some of the refinancing will be too expensive, leading to default.
As an asset class, commercial real estate loans exhibit high correlation between the probability of default and the loss given default (PD–LGD correlation) compared with other asset classes. This is because both the propensity to default and the resultant severity of default are dependent on the same commercial building or development. This is especially so in the construction and land development loan category. All else being equal, a higher PD–LGD correlation results in the loss distribution having a fatter tail.
In the end, the macro driver of losses in commercial real estate is how strongly oversupply and recession coincide in terms of time and geography. Owing to more discipline in underwriting aided by better data availability from service providers, the extent of oversupply, as a whole, does not appear to be as severe as in the late 1980s and early 1990s, before the advent of CMBSs. Of course, oversupply is a relative term and also depends on the severity of the recession, which it is still too early to gauge.

In this issue we present four full-length research papers and one technical report. The first paper, “Valuing CDOs of bespoke portfolios with implied multi-factor models”, by Rosen and Saunders is on the pricing of bespoke CDOs. The authors address three different types of “bespoke” characteristics: first, where the underlying portfolio and maturity are the same as the reference, but the tranche attachment and/or detachment points are different; second, where the underlying portfolio is the same but the maturity of the portfolio is different from that of observed references; and third, where the underlying portfolio differs from the reference. The model presented is, conceptually, an extension of the implied copula methodology of Hull and White (2006). The authors’ CDO valuation framework is based on the application of multifactor credit models in conjunction with weighted Monte Carlo techniques.
In the second paper, “On pricing risky loans and collateralized fund obligations”, by Eberlein et al, the authors derive loan pricing formulas in a Merton-type framework, where the asset value is specified as a process introduced in Carr et al (2007), with negative jumps superposed with a diffusion component. Default is monitored either only at maturity or continuously; the latter assumption requires the non-trivial inversion of the Laplace transform of the distribution of the running minimum of the asset-value process. The model is applied to the pricing of loans and parameter sensitivities, as well as activity rates, are investigated. The model is also calibrated to real data (from General Motors).
The third paper, “Pricing kth-to-default swaps in a Lévy-time framework”, is by Mai and Scherer. The paper presents a multivariate credit risk model where dependence among individual default events is incorporated via a stochastic time change. The model is illustrated by the pricing of kth-to-default swaps. The attractive feature of the model is the fact that despite the freedom in specifying the term structures of individual default probabilities, closed-form solutions for the resulting portfolio loss distribution are available. The model can be applied to simultaneously explain individual credit default swap spreads and kth-to-default swap spreads. The dependence structure introduced is thoroughly investigated in the derivations.
In the fourth paper, “CDO pricing with expected loss parametric interpolation”, by Bernis, synthetic CDO transactions are priced based on a polynomial interpolation of the expected loss. This interpolation is done via a polynomial of degree 4 with one additional parameter. The objective is to overcome the incoherence observed when using direct interpolation of base correlations. In practice, many methods for performing such interpolations are ad hoc, and as such do not always preserve regularity or arbitrage conditions. One approach to this problem is to define a single distribution for the tranche loss that recovers all observed prices. In this paper, the author takes a different approach, examining the mapping function from attachment point to expected tranche loss, and establishing required regularity for this mapping. By defining an interpolation method that preserves the needed regularity, the author then guarantees that his interpolated bespoke tranche prices display the desired behavior. The proposed method is compared numerically with other standard methods (base correlation interpolation and random factor loading) on one specific day.
The last paper in this issue is a technical report. A technical report describes a particular practical technique and enumerates situations in which it works well and others in which it does not. Such reports provide extremely useful information to practitioners in terms of saved time and duplication of effort. The contents of technical reports complement the rigorous conceptual and model developments that are presented in the research papers and provide a lot of value for practitioners.
The technical report in this issue, “Double-t copula pricing of structured credit products: practical aspects of a trustworthy implementation”, by Vrins deals with the details of implementing the double-t copula model. The standard one-factor Gaussian copula is commonly used in the industry, but by the very nature of its Gaussian distributional assumption, it suffers from some deficiencies relating to tail behavior: essentially insensitivity of tail dependence to model correlation. The double-t copula was a model proposed by Hull and White (2004) and in different forms by several other researchers. The double-t copula model avoids the undesirable “too light a tail” of the normal copula and fits market prices, especially in recent times, much better than the Gaussian copula does. Intricacies of implementation naturally become more and more important as we go from the much more tractable normal copula to any other specification. As Hull and White will happily admit, their article glossed over some of the technical details in implementing the double-t copula. This article provides the details of an efficient implementation.
REFERENCES
Hull, J., and White, A. (2004). Valuation of a CDO and an n-th to default CDS without Monte Carlo simulation. Journal of Derivatives 12(2), 8–23.
Hull, J., and White, A. (2006). Valuing credit derivatives using an implied copula approach. Journal of Derivatives 14(2), 8–28.
Carr, P., Geman, H., Madan, D. B., and Yor, M. (2007). Self decomposability and option pricing. Mathematical Finance 17, 31–57.
#### Papers in this issue
##### Double-t copula pricing of structured credit products: practical aspects of a trustworthy implementation
https://en.wikibooks.org/wiki/Category_Theory/Subcategories | # Category Theory/Subcategories
Definition (subcategory):
Let $\mathcal{C}$ be a category. Then a subcategory $\mathcal{D}=(\operatorname{Ob}(\mathcal{D}),\operatorname{Mor}(\mathcal{D}))$ of $\mathcal{C}$ is a category such that $\operatorname{Ob}(\mathcal{D})\subseteq\operatorname{Ob}(\mathcal{C})$ and $\operatorname{Mor}(\mathcal{D})\subseteq\operatorname{Mor}(\mathcal{C})$, with composition and identities inherited from $\mathcal{C}$.
Definition (full):
A subcategory $\mathcal{D}$ of a category $\mathcal{C}$ is called full iff for all $a,b\in\mathcal{D}$, we have
$\operatorname{Hom}_{\mathcal{D}}(a,b)=\operatorname{Hom}_{\mathcal{C}}(a,b)$.
For example, the category of abelian groups is a full subcategory of the category of groups, whereas the category whose objects are sets and whose morphisms are only the injective functions is a subcategory of the category of sets that is not full.
Proposition (limits are preserved when restricting to a full subcategory):
Let $\mathcal{C}$ be a category, let $J:\mathcal{J}\to\mathcal{C}$ be a diagram in $\mathcal{C}$, and let $\mathcal{D}$ be a full subcategory of $\mathcal{C}$. Suppose that $(L,(\phi_\alpha)_{\alpha\in A})$ is a limit over $J$ in $\mathcal{C}$ such that $L$ and all targets of the $\phi_\alpha$ are in $\mathcal{D}$. Then $(L,(\phi_\alpha)_{\alpha\in A})$ is a limit over $J$ in $\mathcal{D}$.
Proof: Certainly, the underlying cone of $(L,(\phi_\alpha)_{\alpha\in A})$ is contained within $\mathcal{D}$, because the subcategory is full. Now let another cone $(Q,(\psi_\alpha)_{\alpha\in A})$ in $\mathcal{D}$ over the diagram $J$ (which, analogously, is a diagram in $\mathcal{D}$) be given. By the universal property of $(L,(\phi_\alpha)_{\alpha\in A})$ in $\mathcal{C}$, there exists a unique morphism $f:Q\to L$ which satisfies $\psi_\alpha=\phi_\alpha\circ f$ for all $\alpha\in A$. Since $\mathcal{D}$ is full, $f$ is in $\mathcal{D}$. $\Box$
Analogously, we have:
Proposition (colimits are preserved when restricting to a full subcategory):
Let $\mathcal{C}$ be a category, let $J:\mathcal{J}\to\mathcal{C}$ be a diagram in $\mathcal{C}$, and let $\mathcal{D}$ be a full subcategory of $\mathcal{C}$. Suppose that $(C,(\phi_\alpha)_{\alpha\in A})$ is a colimit over $J$ in $\mathcal{C}$ such that $C$ and all domains of the $\phi_\alpha$ are in $\mathcal{D}$. Then $(C,(\phi_\alpha)_{\alpha\in A})$ is a colimit over $J$ in $\mathcal{D}$.
Proof: This follows from its "dual" proposition, reversing all arrows in its statement and proof except the direction of the diagram functor. $\Box$
https://electronics.stackexchange.com/questions/81572/role-of-capacitor-when-switch-is-open-in-buck-converter | # Role of Capacitor when switch is open in Buck Converter?
In a buck converter, when the switch is initially closed, the capacitor charges and stores energy. But when the switch is opened, the inductor current flows through the freewheeling diode, so what happens to the charge stored in the capacitor?
• Charge isn't stored in a capacitor; energy is stored. Sep 8 '13 at 1:05
In a buck converter, the switch controls energy storage in the inductor. The average of the square wave applied to the filter will be the DC output level (12V @ 41.6% duty cycle = 5V average). The inductor acts as a current source to keep the output capacitor charged.
Depending on the load, switching frequency and inductor size, a fixed-frequency buck converter can operate in one of two modes. If the DC output current level is greater than half of the inductor current ramp, the converter is said to be in CCM (continuous conduction mode); if not, it's said to be in DCM (discontinuous conduction mode).
$I_o > \dfrac{1}{2} \cdot \dfrac{(V_i-V_o)\cdot T_{on}}{L}$ or, equivalently (the inductor current falls by $V_o \cdot T_{off}/L$ during the off-time),
$I_o > \dfrac{1}{2} \cdot \dfrac{V_o\cdot T_{off}}{L}$
In CCM, the inductor is sourcing current the entire time. The inductor current never goes to zero. The capacitor is always being charged by the inductor, so it never has to support the load by itself. The duty cycle will remain essentially fixed regardless of output current once in CCM.
In DCM, because the output current is less than half the inductor ramp current, the only way to regulate properly is to decrease duty cycle. This decrease in duty cycle leads to a third mode of operation, where the switch is off and the inductor has completely discharged.
(Some controllers will operate in what's called critical conduction mode, where instead of operating with fixed frequency and variable duty cycle, it operates with fixed duty cycle and adjusts the frequency to keep the converter exactly at the DCM/CCM threshold.)
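A quick numeric sketch of the CCM/DCM boundary in R (the component values below are illustrative assumptions of mine, not from the answer):
# CCM/DCM boundary: the converter leaves CCM when I_o < delta_I / 2
Vi <- 12; Vo <- 5            # input and output voltages (V)
L <- 33e-6; fsw <- 200e3     # inductance (H) and switching frequency (Hz)
D <- Vo / Vi                 # ideal CCM duty cycle
Ton <- D / fsw               # on-time per switching cycle (s)
dI <- (Vi - Vo) * Ton / L    # inductor current ramp during Ton (A)
dI / 2                       # ~0.22 A: below this load current, DCM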
• Thanks, and looking at the explanation, CCM and DCM refers to the 2 states of opening of the switches right? Sep 8 '13 at 1:34
• Not quite. See my edit. Sep 8 '13 at 1:57
• @Madmanguruman: Can you please elaborate in more detail on CCM and DCM? – AKR Sep 8 '13 at 7:01
• CCM means "Continuous Conduction Mode". When the switch is turned on, current builds up in the inductor; then when it's turned off, it decays. In CCM the current is not allowed to fall to zero before the switch is turned back on. In DCM (Discontinuous Conduction Mode) the current is allowed to fall to zero. In CCM, ignoring second-order effects, the output voltage is the input voltage times the duty cycle: this limits the minimum load. In DCM the duty cycle is much more load-dependent. Sep 8 '13 at 13:24
https://homework.cpm.org/category/CCI_CT/textbook/int1/chapter/8/lesson/8.2.1/problem/8-99 | ### Problem 8-99
8-99.
Graph and connect the points $G(−3, 1)$, $H(3, 7)$, and $J(−3, 5)$ to form $ΔGHJ$. What is the area of the triangle?
Use the eTool below to graph and connect the points and solve the problem.
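A hedged check of the expected result (my own sketch, not part of the CPM materials), using the shoelace formula in R:
# shoelace formula for the area of triangle GHJ
x <- c(-3, 3, -3)  # x-coordinates of G, H, J
y <- c(1, 7, 5)    # y-coordinates of G, H, J
0.5 * abs(x[1]*(y[2]-y[3]) + x[2]*(y[3]-y[1]) + x[3]*(y[1]-y[2]))  # 12 square units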
https://marcofrasca.wordpress.com/tag/mass-generation/ | ## Where does mass come from?
16/12/2012
After CERN’s updates (well recounted here, here and here) producing no real news but just some concern about possible Higgs cloning, I would like to discuss here some mathematical facts about what one should expect about mass generation and why we should not be happy with these results, now coming out on a quarterly basis.
The scenario we are facing so far is one with a boson particle resembling more and more the Higgs particle appearing in the original formulation of the Standard Model. No trace is seen of anything else at higher energies, and there is no evidence of supersymmetry. It appears that no new physics is hiding here; rather, for that we will eventually have to wait for the upgrade of the LHC, which will start its runs in 2015.
I cannot agree with all of this, and this is not the truth at all. The reason not to believe it is strictly based on theoretical arguments and properties of partial differential equations. We are aware that physicists can be skeptical even about mathematics, although this is unacceptable, as mathematics has no other way than being true or false. There is nothing like a half truth, but there are a lot of theoretical physicists trusting in it. I have always thought that being skeptical about mathematics is just an excuse to avoid entering into other work. There could always be the risk that one discovers it is correct and then has to support it.
The point is the scalar field. A strong limitation we have to face when working in quantum field theory is that only small couplings can be managed. No conclusive analysis can be drawn when a coupling is just finite, and lattice computations also produce confusion. It seems as if only small couplings can exist, and all the theories we build are in the hope that nature is benign and yields nothing else than that. For the Higgs field it is the same. All our analyses are based on this; the hierarchy problem comes out of this. Just take any of the textbooks on which you built your knowledge of this matter and you will promptly realize that nothing else is there. Peskin and Schroeder, in their really excellent book, conclude that strong coupling cannot exist in quantum field theory, with the foundation of this argument arising from the renormalization group. Nature has only small couplings.
Mathematics, a product of nature, has not just small couplings and nobody can impede a mathematician to take these equations and try to analyze them with a coupling running to infinity. Of course, I did it and somebody else tried to understand this situation and the results make the situation rather embarrassing.
These reflections sprang from a paper that appeared yesterday on arXiv (see here). In a de Sitter space there is a natural constant having the dimension of energy: the Hubble constant (in natural units). It is an emerging result that a massless scalar field with a quartic interaction in such a space develops a mass. This mass goes like $m^2\propto \sqrt{\lambda}H^2$, with $\lambda$ the coupling coming from the self-interaction and $H$ the Hubble constant. But the authors of this paper are forced to turn to the usual small coupling expansion, just singling out the zero mode producing the mass. So, great news, but back to normal.
A self-interacting scalar field has the property of getting a mass by itself. Generally, such a self-interacting field has a potential in the form $\frac{1}{2}\mu^2\phi^2+\frac{\lambda}{4}\phi^4$ and we can have three cases: $\mu^2>0$, $\mu^2=0$ and $\mu^2<0$. In all of them the classical equations of motion have an exact massive free solution (see here and Tao’s Dispersive Wiki) when $\lambda$ is finite. These solutions cannot be recovered by any small coupling expansion unless one is able to resum the infinite terms in the series. The cases with $\mu^2\ne 0$ are interesting in that this term gets a correction depending on $\lambda$. For the case $\mu^2<0$ one can recover a spectrum with a Goldstone excitation, and the exact solution is one oscillating around a finite value different from zero (it never crosses zero), as it should be for spontaneous breaking of symmetry. But the squared mass goes like $\sqrt{\lambda}\Lambda^2$, where now $\Lambda$ is just an integration constant. The same happens in the massless case, as one recovers a mass going like $m^2\propto\sqrt{\lambda}\Lambda^2$. We see the deep analogy with the scalar field in a de Sitter space, and these authors are correct in their conclusions.
The point here is that the Higgs mechanism, as it was devised in the sixties, entails all the philosophy of “small coupling and nothing else” and so it incurs all the possible difficulties, not least the hierarchy problem. A modern view of this matter implies that, even admitting that $\mu^2<0$ makes sense, we have to expand around a solution with $\lambda$ finite, this being physically meaningful, rather than try an expansion for a free field. We are not granted that the latter makes sense at all; it is just an educated guess.
What does all this imply for LHC results? Indeed, if we limit all the analysis to the coupling of the Higgs field with the other fields in the Standard Model, this is not the best way to say we have observed a true Higgs particle as the one postulated in the sixties. It is just curious that no other excitation is seen beyond the (eventually cloned) 126 GeV boson seen so far, leaving a big desert up to very high energies. Because the very nature of the scalar field is to have massive solutions as soon as the self-interaction is taken to be finite, this also means that other excited states must be seen. This simply cannot be the Higgs particle; mathematics is saying no.
M. Beneke, & P. Moch (2012). On “dynamical mass” generation in Euclidean de Sitter space arXiv arXiv: 1212.3058v1
Marco Frasca (2009). Exact solutions of classical scalar field equations J.Nonlin.Math.Phys.18:291-297,2011 arXiv: 0907.4053v2
## Breaking of a symmetry
05/11/2012
This week-end has been somewhat longer in Italy due to the November 1st holiday and I have had the opportunity to read a very fine book by Ian Aitchison: Supersymmetry in Particle Physics – An Elementary Introduction. This book gives a very clear introduction to SUSY, with all the computations clearly stated and going into the details of the Minimal Supersymmetric Standard Model (MSSM). This model was originally proposed by Howard Georgi and Savas Dimopoulos (see here) and today does not seem to be in good shape due to recent results from LHC. The authors introduce a concept of a “softly” broken supersymmetry to accommodate the Higgs mechanism in the low-energy phenomenology. A “softly” broken supersymmetry is when the symmetry is explicitly broken using mass terms but keeping renormalizability and invariance under the electroweak symmetry group. The idea is that, in this way, the low-energy phenomenology will display a standard Higgs mechanism with a vacuum expectation value different from zero. This fact is really interesting, as we know that in a standard electroweak theory the symmetry cannot be explicitly broken, as we immediately lose renormalizability, but a supersymmetric theory leaves us more freedom. But why do we need to introduce explicit breaking terms into the Lagrangian of the MSSM? The reason is that SUSY is conveying a fundamental message:
There is no such thing as a Higgs mechanism.
Indeed, one can introduce a massive contribution to a scalar field, the $\mu$-term, but this has just the wrong sign and, indeed, a spontaneously broken supersymmetry is somewhat of a pain. There are some proposed mechanisms, such as F- or D-breaking fields or some dynamical symmetry breaking, but nothing viable for the MSSM. Given the “softly” breaking terms, the argument runs smoothly and one recovers two doublets and the $\tan\beta$ parameter that some authors are fond of.
The question at the root of the matter is that a really working supersymmetry-breaking mechanism is yet to be found, and the breaking must be taken for granted, as we do not observe superpartners at accessible energies and the LHC has yet to find one, if ever. This mechanism also drives the electroweak symmetry breaking. Indeed, supersymmetry properly recovers a quartic self-interaction term, but the awkward quadratic term with the wrong sign gives serious difficulties. Of course, the presence of a quartic term in a scalar field interacting with a fermion field, e.g. a Wess-Zumino model, provides the essential element to have a breaking of supersymmetry at lower energies: this model is reducible to a Nambu-Jona-Lasinio model and the gap equation will provide a different mass to the fermion field, much in the same way as happens to chiral symmetry in QCD. No explicit mass term is needed, just a chiral model.
This means that the MSSM can be vindicated once one gets rid of an explicit breaking of the supersymmetry and works out the infrared limit in a proper way. There is a fundamental lesson we can learn here: SUSY gives rise to self-interaction and this is all you need to get masses. The Higgs mechanism is not a fundamental one.
Dimopoulos, S., & Georgi, H. (1981). Softly broken supersymmetry and SU(5) Nuclear Physics B, 193 (1), 150-162 DOI: 10.1016/0550-3213(81)90522-8
Marco Frasca (2011). Chiral symmetry in the low-energy limit of QCD at finite temperature Phys. Rev. C 84, 055208 (2011) arXiv: 1105.5274v4
## Back to Earth
01/03/2011
Nature publishes, in the last issue, an article about SUSY and LHC (see here). The question is really simple to state. SUSY (SUperSYmmetry) is a solution to some problems that have plagued physics for some time. An important question is the Higgs particle. In order to have the Standard Model properly working, one needs to fine-tune the Higgs mass. SUSY, at the price of doubling all the existing particles, removes this need. But this can be obtained only if a finite parameter space of the theory is considered. This parameter space is what is explored at accelerator facilities like Tevatron and LHC. Tevatron was not able to uncover any SUSY partner for the known particles, restricting this parameter space. Of course, with LHC the opportunities are much larger and, with the recent papers by ATLAS and CMS, the parameter space has become dangerously smaller, making it somewhat more difficult to remove fine tuning for the Higgs mass without fine tuning the parameters of SUSY, a paradoxical situation that can be avoided just by forgetting about supersymmetry.
But, as often discussed in this blog, there is another way out, saving both Higgs and supersymmetry. All the analyses carried out so far on the Higgs field rely on small perturbation theory and small couplings: this is the only technique known so far to manage a quantum field theory. If the coupling of the Higgs field is large, mass generation could happen differently, through a Schwinger-like mechanism. This imposes supersymmetry on all the particles in the model. This was discussed here. But in this way there is no parameter space to be constrained for fine tuning to be avoided, and this is a nice result indeed.
Of course, the situation is not so dramatic yet, and there is other work to be carried out at CERN, at least until the end of 2012, before one can say that SUSY is ruled out. In any case, it is already clear to everybody that exciting times are ahead of us.
The ATLAS Collaboration (2011). Search for supersymmetry using final states with one lepton, jets, and missing transverse momentum with the ATLAS detector in sqrt{s} = 7 TeV pp. arXiv: 1102.2357v1
CMS Collaboration (2011). Search for Supersymmetry in pp Collisions at 7 TeV in Events with Jets and Missing Transverse Energy. arXiv: 1101.1628v1
The ATLAS Collaboration (2011). Search for squarks and gluinos using final states with jets and missing transverse momentum with the ATLAS detector in sqrt(s) = 7 TeV proton-proton collisions. arXiv: 1102.5290v1
Marco Frasca (2010). Mass generation and supersymmetry arxiv arXiv: 1007.5275v2
## Mass generation: The solution
26/12/2010
In my preceding post I pointed out an interesting mathematical question about the exact solutions of the scalar field theory that I use in this paper
$\Box\phi+\lambda\phi^3=0$
given by
$\phi=\mu\left(\frac{2}{\lambda}\right)^\frac{1}{4}{\rm sn}(p\cdot x,i)$
that holds for
$p^2=\mu^2\sqrt{\frac{\lambda}{2}}.$
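As a quick check (my own sketch, relying on the standard identity that ${\rm sn}(u,k)$ satisfies ${\rm sn}''=-(1+k^2)\,{\rm sn}+2k^2\,{\rm sn}^3$, which for modulus $k=i$ reduces to ${\rm sn}''=-2\,{\rm sn}^3$), write $\phi=a\,{\rm sn}(p\cdot x,i)$ with $a=\mu(2/\lambda)^{1/4}$. Then

$\Box\phi=a\,p^2\,{\rm sn}''(p\cdot x,i)=-\frac{2p^2}{a^2}\,\phi^3,$

so $\Box\phi+\lambda\phi^3=0$ forces $2p^2=\lambda a^2=\mu^2\sqrt{2\lambda}$, which is exactly the condition above.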
If you compute the Hamiltonian, the energy does not appear to be finite, contrary to what the dispersion relation is saying. This is very similar to what happens to plane waves for the wave equation. The way out is to take a finite volume and properly normalize the plane waves. One does this to get the integral of the Hamiltonian finite, and it all amounts to a proper normalization. In our case, where must this normalization enter? The striking answer is: the coupling. This is an arbitrary parameter of the theory and we can properly rescale it to get the right normalization in the Hamiltonian. The final result is a running coupling, exactly in the same way as I and others have obtained for the quantum theory. You can see the coupling entering in the right way both in the solution and in the computation of the Hamiltonian.
If you are curious about these computations you can read the revised version of my paper to appear soon on arxiv.
Marco Frasca (2010). Mass generation and supersymmetry arxiv arXiv: 1007.5275v1
## Mass generation in the Standard Model
20/12/2010
The question of the generation of the mass of the particles in the Standard Model is currently a crucial one in physics and is a matter that could start a revolutionary path in our understanding of the world as it works. This is also an old question that can be rewritten as “What are we made of?”, and surely the ancient Greeks asked it. Today, with the LHC at work and already producing a wealth of important results, we are on the verge of giving a sound answer to it.
The current situation is well known, with a Higgs mechanism (though here there are several fathers) that mimics the second-order phase transitions proposed by Landau long ago. In some sense, understanding ferromagnetism taught us a lot and produced a mathematical framework for extracting sound results from the Standard Model. Without these ideas the model would have been practically useless since its initial formulation due to Shelly Glashow. The question of mass in the Standard Model is indeed a stumbling block, and we need to understand what is hidden behind an otherwise exceptionally successful model.
As many of you may know, I have written a paper (see here) where I show that if the way a scalar field gets a mass (and so also a Yang-Mills field) is the same in the Standard Model, then one is forced into a supersymmetric Higgs sector, but without the squared mass term and with a strong self-coupling. This would imply a not-so-light Higgs, with the breaking of supersymmetry the only way to avoid degeneracy among the masses of all the particles of the Standard Model. For my part, I would take these signatures as evidence that I am right and that QCD, a part of the Model, shares the same mechanism for generating masses.
Yet there is an open question, put forward by a smart referee of my paper. I will present it here, as it is an interesting question of classical field theory that is worth understanding. As you know, I have found a set of exact solutions to the classical field equation
$\Box\phi+\lambda\phi^3=0$
from which I built my mass generation mechanism. These solutions can be written down as
$\phi(x)=\mu\left(\frac{2}{\lambda}\right)^\frac{1}{4}{\rm sn}(p\cdot x+\theta,i)$
with ${\rm sn}$ a Jacobi elliptic function, and provided that
$p^2=\mu^2\sqrt{\frac{\lambda}{2}}.$
From the dispersion relation above we can conclude that these nonlinear waves indeed represent free massive particles of finite energy. But let us take a look at the definition of the energy for this theory; one has
$H=\int d^3x\left[\frac{1}{2}(\dot\phi)^2+\frac{1}{2}(\nabla\phi)^2+\frac{\lambda}{4}\phi^4\right]$
and if you substitute the above exact solutions into this, you will get an infinity. It appears as if these solutions had infinite energy! The same effect occurs for ordinary plane waves, but there it is evaded by taking a finite volume: one normalizes the solutions with respect to this volume and you are done. Of course, you can take a finite volume in the nonlinear case as well, provided you set for the momenta
$p_i=\frac{4n_iK(i)}{L_i}$
with $i=x,y,z$, as this Jacobi function has period $4K(i)$, $K$ being the complete elliptic integral of the first kind evaluated at modulus $i$; you should remember, though, that this function is doubly periodic, having a complex period as well. Now, if you compute $H$ you will get the dispersion relation multiplied by some factors, one of which is the volume. How can one resolve this paradox? You can check by yourselves that these solutions do exist, and they should have finite energy.
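To see the volume factor concretely, here is a minimal sketch along the same lines as before (again, $\mu$, $\lambda$ and the box side $L$ are arbitrary test values; for simplicity I take the rest-frame solution, so gradient terms drop out). It computes the real quarter-period via the imaginary-modulus transformation $K(i)=K(1/\sqrt{2})/\sqrt{2}$, the quantized momenta, and the average energy density of the solution, which comes out finite and constant, so that $H$ grows linearly with the volume:

```python
# Minimal sketch of the finite-volume bookkeeping (mu, lam, L are test values).
import numpy as np
from scipy.special import ellipk, ellipj

def sn_i(u):
    # sn(u, i) = sd(u*sqrt(2), k=1/sqrt(2)) / sqrt(2) (imaginary-modulus transformation)
    s, _, d, _ = ellipj(np.sqrt(2.0) * u, 0.5)
    return s / d / np.sqrt(2.0)

# Real quarter-period: K(i) = K(k=1/sqrt(2))/sqrt(2); scipy's ellipk takes m = k^2.
K_i = ellipk(0.5) / np.sqrt(2.0)
L = 10.0
print(K_i)                                   # ~1.31103
print(4 * np.arange(1, 4) * K_i / L)         # allowed momenta p_i = 4*n_i*K(i)/L_i

# Energy of the rest-frame solution: the density rho = phi_dot^2/2 + lam*phi^4/4
# is conserved along the motion (it is the anharmonic-oscillator energy) and
# equals mu^4/2 for this solution, so H = rho*V.
mu, lam = 1.0, 0.7
A, w = mu * (2.0 / lam) ** 0.25, mu * (lam / 2.0) ** 0.25
t = np.linspace(0.0, 4.0 * K_i / w, 100001)  # one full period of the solution
phi = A * sn_i(w * t)
rho = 0.5 * np.gradient(phi, t) ** 2 + 0.25 * lam * phi ** 4
print(rho.mean(), mu ** 4 / 2.0)             # agree up to grid error
print(rho.mean() * L ** 3)                   # H grows linearly with the volume V = L^3
```

The printout makes the paradox explicit: the energy density is perfectly finite, so the total energy is finite in any box but diverges with the volume, exactly as for plane waves.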
My work on QCD is not hindered by this question, as there I solve the equation $\Box\phi+\lambda\phi^3=j$ and the problems are different. But, in any case, mathematics asserts the existence of these solutions while physics says that something is not so well defined. An interesting problem to work on.
## Some reflections
28/11/2010
For quite some time I have been thinking about the scenario that is emerging from our current understanding of reality through physics. There is a lot yet to be understood, most notably the very nature of space and time and a proof of the real number of dimensions our universe emerged from. Notwithstanding these noteworthy open questions, we can form an idea of what is going on, as the depth of our understanding is already quite considerable.
At the dawn of the last century, Albert Einstein put forward an important conclusion: mass and energy are the same thing. Indeed, what Einstein had in mind was a deeper understanding of the concept of mass, which remains an important concern for us. Where does mass come from? What is it made of? At the start of the last century these questions could not receive a proper answer. But reducing mass to another concept, like energy, was of paramount importance.
On similar grounds, it was relevant to understand the role of another, apparently irreducible, concept: charge. Today we expect that the number of charges, the couplings that drive all change around us, will at last be reduced to a single number. Maybe. So far we only know for certain that, thanks to Steven Weinberg, Shelly Glashow and Abdus Salam, we have reduced the number of interacting fields. But the strong coupling and the gravitational constant are still there, disconnected, if we limit ourselves to our current experimental knowledge. Of course, theoretical physics has gone really far in this area, and we hope that the LHC will help us cut away most of what has been done here to reach a real understanding of the way things work. Somebody will be happy, others won't, but this is how our world works.
Today we have a better understanding of the concept of mass and, while waiting for the LHC to confirm our ideas, we can draw some conclusions about all these questions. The fact that mass is energy is an important clue that this concept is reducible to more fundamental ones and that a mechanism for its emergence must exist. The Higgs mechanism goes in this direction, as do our ideas about the mass gap emerging from QCD, which confirm that mass is not a fundamental concept by itself.
So we can conclude that, so far, our picture of the world reduces to two fundamental, irreducible ideas: energy and charge. The former is just a safety lock with respect to the changes provoked by the latter. So I leave you with a final question: if things stay this way, does space-time entail a wider concept needed to embed them, or can we reduce it to them as well?
## Mass generation and supersymmetry
30/07/2010
I have uploaded a paper to arXiv with a new theorem of mine. I had already presented the idea on this blog but, until now, I had not had much time to make it mathematically sound. The point is that the mechanism I have found that gives mass to Yang-Mills and scalar fields implies supersymmetry. That is, if I apply it to the simplest gauge theory, in the limit of a strong self-interaction of a massless Higgs field, all the fields entering the theory acquire identical masses and the couplings settle down to the proper values for a supersymmetric model. This result being so striking, I felt compelled to produce a theorem at the classical level, as is generally done with the standard Higgs mechanism, and make it widely known. My next step is to improve the presentation and extend this result with a fully quantum treatment. This is possible, as I have already shown in the case of a Yang-Mills theory.
My view is that only one mechanism should be seen in Nature to produce masses, and I expect it to be the same one already at work in QCD. So supersymmetry is mandatory. This will imply a further effort for the people working to uncover the Higgs particle, as they should also tell us what kind of self-interaction is in action here and whether it is a supersymmetric particle, as it should be.
The interesting point is that all the burden of the spectrum of the Standard Model will rest not on the mechanism that generates masses, but on the part of the model that breaks supersymmetry.
Interesting developments are expected in the future. Higgs is always Higgs, but a rather symmetric one. So, stay tuned!